Chapter 9
Estimation Models

Estimation Process

Estimation is a process that uses prediction systems and intuition for cost and resource planning. Estimation is governed by “cost realism,” which does not always insist on exactness but places as much emphasis on the logic of the estimate as on the mathematical form of the prediction system. Cost realism is concerned with the assumptions made about the future, the relevance of history, and bias in prediction.

On one hand, estimation models use rigorous statistics for generating the prediction equation. On the other hand, common sense rules several choices and assumptions made en route.

Estimation is as much art as science.

There are useful techniques available for time and effort estimation. Process and project metrics can provide historical perspective and powerful input for the generation of quantitative estimates. The past experience of all the people involved can contribute immeasurably as estimates are developed and reviewed. Estimation lays the foundation for all other project planning activities, and project planning provides the road map for successful execution of the project.

Size, effort, schedule, and the cost of the project are estimated in parallel with the requirements definition phase. As requirements get clearer and more refined, the estimates are refined in parallel. Size estimation involves predicting “how big” the software will be. This is done by counting the number of features, functions, lines of code, or objects and applying appropriate weights to arrive at one or two numbers that represent the size of the software. Based on the size of the software, productivity-related data, and experience from past projects, the size is converted into effort. Effort is usually estimated in terms of the person-hours, person-days, or person-months that need to be consumed to create the software. Schedule is derived from the effort estimate, the number of team members, and the extent to which project life cycle activities are independent of each other. Estimated costs are calculated based on the effort that needs to be put in and other elements of cost such as travel, hardware, software, infrastructure, training specific to the project, and expected usage of communication facilities. Though estimation is an intense activity during and at the end of the requirements stage, tracking of estimates and reestimation continue throughout the project at a reduced intensity.
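The flow from size to effort, schedule, and cost can be expressed as a simple calculation chain. The sketch below is a minimal illustration of that chain, assuming a hypothetical productivity figure, team size, and cost rates; none of these numbers come from this chapter and they would have to be replaced with an organization's own historical data.

    # Minimal sketch of the size -> effort -> schedule -> cost chain.
    # All numeric values are hypothetical placeholders, not values from this chapter.

    def estimate(size_fp: float,
                 productivity_fp_per_pm: float = 10.0,   # assumed historical productivity
                 team_size: int = 5,
                 rate_per_pm: float = 8000.0,            # assumed loaded cost per person-month
                 other_costs: float = 15000.0) -> dict:  # travel, hardware, training, etc.
        effort_pm = size_fp / productivity_fp_per_pm      # effort in person-months
        schedule_months = effort_pm / team_size           # assumes fully parallel work
        cost = effort_pm * rate_per_pm + other_costs
        return {"effort_pm": effort_pm,
                "schedule_months": schedule_months,
                "cost": cost}

    print(estimate(size_fp=400))

In practice the conversion from size to effort would come from a calibrated prediction equation rather than a single productivity constant, as discussed later in this chapter.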


Exhibit 1. Why do we estimate size, cost, and schedule?

  • To scope proposed tasks
  • To explore alternative system concepts
  • To design to cost/budget
  • To explore alternative design concepts
  • To explore alternative proposals for enhancements and upgrades
  • To identify key design elements
  • To identify key process parameters
  • To prioritize needs vs. wants
  • To identify key assumptions
  • To identify and quantify uncertainties
  • To identify tasks and their relationships
  • To assess schedule feasibility
  • To identify, allocate, and schedule resources
  • To assess an organization’s ability to perform within targeted costs
  • To evaluate the consequences of internal and external constraints
  • To establish achievable objectives
  • To establish a basis for quality service
  • To establish commitments
  • To bound the risk against customer needs
  • To balance levels of risk against customer needs
  • To provide a basis for successful risk management
  • To do build vs. buy analysis
  • To prepare successful proposals
  • To evaluate proposals from competing bidders
  • To establish baselines for project tracking
  • To do enhance/reuse vs. redesign analysis
  • To predict life cycle costs
  • To predict returns on investments
  • To provide information for establishing business and investment strategies

The mathematical side of estimation demands a higher degree of precision and dependability than general-purpose regression. To win a place in an estimation model, prediction equations are built from validated data, more precise measurements, and even experiments (Exhibit 1 through Exhibit 3).


Software Estimation Risks

The effects of inaccurate software estimation and schedule overruns are well known. Software estimation errors generally result from four major risk areas:

  1. The inability to accurately size the software project
  2. The inability to accurately specify a development environment that reflects reality
  3. The improper assessment of staff skills
  4. The lack of well-defined objectives, requirements, and specifications during the software development life cycle

Exhibit 2. Elements of good estimating practice.

  • Written objectives
  • Product description
  • Task identification
  • Involvement of different project people in the estimating process
  • Use of more than one cost model or estimating approach
  • Estimation of potential cost and schedule impacts for all identified tasks
  • Identification and quantification of uncertainties in descriptive parameter values
  • Estimates updated with changes
  • Method for organizing and retaining information on completed projects
  • Analysis of dictated schedules for impacts on cost

Exhibit 3. Estimating capability indicators.

  • Management acknowledges the importance of estimating capability
  • Estimators equipped with the tools and training needed for reliable estimating
  • Experienced and capable people assigned as estimators
  • The organization's estimating capability is quantified, tracked, and evaluated

Estimation Methodologies

Analogy Method

Estimating by analogy means comparing the proposed project to previously completed similar projects where project development information is known. Actual data from the completed projects are extrapolated to estimate the proposed project. Estimating by analogy can be done either at the system level or the component level.


Bottom-Up Method

Bottom-up estimation involves identifying and estimating each individual component separately, then combining the results to produce an estimate of the entire project.


Top-Down Method

The top-down method of estimation is based on overall characteristics of the software project. This method is more applicable to early cost estimates when only global properties are known. The focus is on system-level activities such as integration, documentation, project control, configuration management, etc. The top-down method is faster, easier to implement, and requires minimal project detail.


Expert Judgment Method

Expert judgment involves consulting with human experts to use their experience and understanding of a proposed project to provide an estimate for the cost of the project.


Two Variables Algorithmic Method (Parametric Method)

The algorithmic or parametric method involves the use of equations to perform software estimates. The equations are based on research and historical data and use a single input, such as source lines of code (SLOC) or the number of functions to be performed, to predict cost. The limitation of these models is that they are two-dimensional snapshots of a reality that has several dimensions.


Multiple Variables Algorithmic Method

These models employ several parameters or factors as cost drivers. The estimation process considers the simultaneous influence of all these factors, and hence such models are considered more realistic and dependable.


Thumb Rules

In first-order estimations, we use our personal rules of thumb. These rules come from experience. The danger is that they are subjective. The strength is that, before final acceptance, every estimate requires a sanity check against the rules of thumb. If there is a gross difference between the estimate and the rules of thumb, we need to reconsider the estimate and evaluate the assumptions that have been made. Rules of thumb are important; they provide rough order of magnitude (ROM) estimates.


Delphi Estimate

The wideband Delphi technique is a structured way of estimating based on collective expertise. It is used for first-cut estimation in situations where the expertise in estimating is particularly valuable. It is also used to complement other estimation techniques. In the context of software sizing, the wideband Delphi technique can be used to arrive at the LOC estimates for the proposed system.

It is a group forecasting technique, generally used for future events such as technological developments, that uses estimates from experts and feedback summaries of these estimates for additional estimates by these experts until a reasonable consensus occurs. The wideband Delphi technique is based on the recognition that when many experts independently come to the same estimate based on the same assumptions, the estimate is likely to be correct. Of course, the experts involved in the estimation should be working from the same, and correct, assumptions. It has been used in various software cost-estimating activities, including estimation of factors influencing software costs.


Golden Rule

Without the collective knowledge of a group and its multifaceted analysis of the situation, we can still achieve dependable results by using the golden rule of estimation. The rule suggests a process of inquiry, either in the mind or using a PERT chart, to study the optimistic, pessimistic, and most-likely values, and then take a pragmatic view that combines all three estimates using Equation 9.1.


t_e = (t_o + 4 * t_m + t_p) / 6    (9.1)


where:
t_e = golden rule (expected) estimate
t_o = optimistic value
t_p = pessimistic value
t_m = most-likely value

The golden rule estimate takes into account the entire range of possible variation, based on the experience of the estimator. Specifically, it removes bias from the estimate, and hence gives a safe and more dependable basis for project planning.
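A minimal sketch of this calculation, using the weighting shown in Equation 9.1; the three input values in the example are hypothetical effort figures in person-days.

    # Golden rule (PERT-style) estimate: weighted combination of three values (Equation 9.1).
    def golden_rule_estimate(t_o: float, t_m: float, t_p: float) -> float:
        """Combine optimistic, most-likely, and pessimistic values."""
        return (t_o + 4.0 * t_m + t_p) / 6.0

    # Example with hypothetical values in person-days.
    print(golden_rule_estimate(t_o=20, t_m=30, t_p=55))   # 32.5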


Prediction Capability

A critical step in software project management is estimation. Essentially, estimation is a predictive exercise. We aspire to build “prediction capabilities” in projects to strengthen the planning and management systems. Projects begin with size estimation based on requirements analysis; for budgeting and resource planning, we try to predict cost, schedule, and defects from size. These are the most visible and widely discussed prediction applications at the business level.

At the micro level, prediction is used as a decision tool in numerous areas: to set process goals, fix threshold levels for decision making, and define control limits. In mature organizations, at the end of every phase the next phase process parameters are predicted. This prediction is seen as a refinement over the baseline predictions because fresh data has come in from the completed phase to improve the prediction.

Predicting an expected value at the work center level is now recognized as the beginning of process innovation; almost all improvement initiatives are anchored to this moment of prediction.

Prediction capability is also regarded as the ultimate contribution from a metrics system.

For prediction, the dependent variables (responses) and the independent variables (predictors) must be selected and defined. The associated data must be gathered; from these empirical data statistical prediction models can be built. While the most popular prediction models are based on regression and probabilistic models, time series models have also been used for prediction.


Prediction Equations

At the heart of a prediction is a prediction equation that translates project experience into a mathematical form. Attributes of experience are transferred to the resulting equations; limited experience produces equations with limited potential. The broader the experience, the broader the application range of the prediction equations.

From regression analysis of metrics data that capture experiences from project clusters, we can build useful prediction equations. Besides the experience-based limitations, the following data-dependent restrictions apply to these prediction equations:

  • Inconsistency of data
  • Errors in data
  • Inadequate sampling
  • Misrepresentations
  • Bias

The regression equations cannot easily be applied to ranges beyond the range of the parent data. Even within that range, the equations may operate at undesirably low confidence levels.

Despite all these limitations, these prediction equations can be called “prediction models” and can be used in the decision-making process.

A collection of prediction models from the empirical analysis illustrated in Chapter 7 is presented in Exhibit 4. The table contains the equations of the prediction models and their confidence levels. Each equation represents some practical experience seen through a metrics window. The source data consist of natural observations picked from log sheets and metrics databases, not from special data collection exercises or specially designed experiments.
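As an illustration of how such a prediction equation can be derived from historical metrics data, the sketch below fits a power-law model y = a * x^b by ordinary least squares on log-transformed data. The data points are hypothetical placeholders, not values from Exhibit 4 or Chapter 7.

    import math

    # Hypothetical historical observations: (size in FP, effort in person-months).
    history = [(120, 14.0), (200, 21.5), (310, 30.0), (450, 41.0), (600, 52.5)]

    # Fit effort = a * size^b by linear least squares on the log-log data.
    xs = [math.log(s) for s, _ in history]
    ys = [math.log(e) for _, e in history]
    n = len(history)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)

    print(f"effort = {a:.2f} * size^{b:.2f}")
    print("predicted effort for 350 FP:", round(a * 350 ** b, 1), "person-months")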


Estimation Algorithms

Since the 1950s, attempts have been made to arrive at estimation algorithms, or cost estimation relationships (CERs), for project budgeting. The problem was attacked by different schools of thought, and diverse solutions emerged in the form of equations, defended seriously by their authors but viewed skeptically by practitioners. Each algorithm represented a different approach and gave an answer different from the others. Because the answers varied, common sense remained the better judge. But the busy project manager preferred to have several models on which to sit in judgment rather than figuring everything out from scratch. With the help of such algorithms, wherever they were available, estimation turned into decision making: choosing among alternatives.

The overall structure of such algorithms, in most cases, took the form shown in Equation 9.2.


Effort = a + b(EV)^c    (9.2)


where:
a, b, and c = empirically derived constants
Effort = measured in person-months
EV = estimation variable (LOC or FP)

The practice of using algorithms brought the possibility of automation into the estimation process, until then a “manual” process, transforming it into a tool-based one.


Estimation Science: The Early Models

During the past few decades, a large number of estimation equations have come into existence. While the science of estimation was pursued by researchers, practitioners with a pioneering spirit started creating their own local models similar to the 12 presented in Exhibit 4.

Growing dissatisfaction with LOC as an estimator led to the invention of FP and other scientific size measures. Equations used in some of the well-known estimation models are presented here.


Bailey–Basili Model

Effort = 5.5 + 0.73 * (KLOC)^1.16    (9.3)


Exhibit 4. Prediction equations from limited data.

S. No.  Dependent Variable (y)               Independent Variable (x)   Prediction Model                 R^2 (%)
1       Effort Variance%                     Requirement Effort%        y = 0.84x^2 - 21.22x + 147.57    86.75
2       In-Process Defect Density (KLOC)     Design Effort%             y = -1.31x + 36.62               53.42
3       Customer Reported Defects            In-Process Defects         y = 0.35x + 4.13                 91.32
4       Productivity (FP/PM)                 Requirement Effort%        y = 8.14 Ln(x) + 1.38            64.34
5       Post Delivery Density (Def/100FP)    Review Effort%             y = 8.24x^0.90                   73.38
6       Defect Density (Def/100FP)           Productivity (FP/PM)       y = 0.05x^2 - 1.04x + 6.59       49.38
7       Bad Fixes                            Productivity (FP/PM)       y = 0.09x^2 - 0.47x + 8.06       26.10
8       Productivity (LOC/day)               Team Size                  y = 99.30x^-0.59                 42.28
9       Defect Density (Def/KLOC)            Size (KLOC)                y = 20.16x^0.38                  44.18
10      Cost of Bug Fixing (Normalized)      Review Effectiveness%      y = -17.03 Ln(x) + 120.4         46.27
11      Effort (Hrs)                         Size (FP)                  y = 19.93x - 183.5               39.75
12      Actual Days                          Estimated Days             y = 1.24x                        80.58

Doty Model

Effort = 5.288 * (KLOC)^1.047    (9.4)


Albrecht and Gaffney Model

Effort = -13.39 + 0.0545 * FP (9.5)


Kemerer Model

Effort = 60.62 * 7.728 * 10^-8 * FP^3    (9.6)


Matson, Barnett, and Mellichamp Model

Effort = 585.7 + 15.12 * FP (9.7)


Walston and Felix Model

Effort = 5.2 * (KLOC)^0.91    (9.8)


Duration = 4.1 * (KLOC)^0.36 months    (9.9)
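To see how these early LOC-based models diverge, the sketch below evaluates the Bailey-Basili, Doty, and Walston-Felix effort equations (Equations 9.3, 9.4, and 9.8) for the same size; the 30 KLOC input is an arbitrary illustrative value.

    # Effort (person-months) predicted by three early LOC-based models
    # for the same hypothetical 30 KLOC project.
    def bailey_basili(kloc):    # Equation 9.3
        return 5.5 + 0.73 * kloc ** 1.16

    def doty(kloc):             # Equation 9.4
        return 5.288 * kloc ** 1.047

    def walston_felix(kloc):    # Equation 9.8
        return 5.2 * kloc ** 0.91

    kloc = 30.0
    for model in (bailey_basili, doty, walston_felix):
        print(f"{model.__name__:14s} {model(kloc):6.1f} PM")

The spread in the printed results is exactly the divergence among algorithms discussed above, which is why judgment and calibration remain necessary.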


Halstead Model

Halstead predicts effort from program volume, a software complexity measure, seen in terms of operators and operands of a program. The effort equation is as follows:


Effort = (n1 * N2 * V) / (36 * n2)    (9.10)


where:
V = (N1 + N2) * log2(n1 + n2)
n1 = number of unique operators in the program
n2 = number of unique operands in the program
N1 = number of operator occurrences in the program
N2 = number of operand occurrences in the program
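A minimal sketch of Equation 9.10, computed exactly as given in the text (including its constant), assuming the operator and operand counts are already available; the example counts are hypothetical.

    import math

    # Halstead-based effort per Equation 9.10, using the constant given in the text.
    def halstead_effort(n1: int, n2: int, N1: int, N2: int) -> float:
        volume = (N1 + N2) * math.log2(n1 + n2)   # program volume V
        return (n1 * N2 * volume) / (36 * n2)

    # Hypothetical counts for a small program.
    print(round(halstead_effort(n1=15, n2=40, N1=120, N2=180), 1))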


Putnam’s Model

Putnam’s model is one of the first algorithmic cost models. It is based on the Norden-Rayleigh function and is generally known as a macro estimation model for large projects. The Putnam software equation is of the form:


L = Ck * K^(1/3) * td^(4/3)    (9.11)


where:
K = effort in person-years
L = the size delivered (SLOC)
Ck = constant that is a function of local conditions
td = development calendar time in years
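Rearranging Equation 9.11 gives the effort implied by a given size, technology constant, and development time. The sketch below performs that algebraic rearrangement; the Ck value and the inputs are hypothetical.

    # Effort implied by Putnam's software equation (Equation 9.11), solved for K:
    #   L = Ck * K^(1/3) * td^(4/3)   =>   K = (L / (Ck * td^(4/3)))^3
    def putnam_effort_person_years(size_sloc: float, ck: float, td_years: float) -> float:
        return (size_sloc / (ck * td_years ** (4.0 / 3.0))) ** 3

    # Hypothetical inputs: 100 KSLOC, technology constant 10000, 2-year schedule.
    print(round(putnam_effort_person_years(100_000, 10_000, 2.0), 1), "person-years")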


Barry Boehm’s COCOMO (Constructive Cost Model)

In 1981, Dr. Barry Boehm published the basic equations:


Effort = 3.2(Size)^1.05    (9.12)


Time = 2.5(Effort)^0.38    (9.13)
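A minimal sketch of Equations 9.12 and 9.13, assuming Size is expressed in KLOC and Effort in person-months; the 25 KLOC input is an arbitrary illustrative value.

    # Basic COCOMO relationships (Equations 9.12 and 9.13).
    def cocomo_effort_pm(size_kloc: float) -> float:
        return 3.2 * size_kloc ** 1.05

    def cocomo_time_months(effort_pm: float) -> float:
        return 2.5 * effort_pm ** 0.38

    effort = cocomo_effort_pm(25.0)
    print(round(effort, 1), "person-months,", round(cocomo_time_months(effort), 1), "months")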


Several empirical estimation models relating effort to size have been published. These models have been derived from experience but are not universal. Before applying them, we must calibrate them and prepare a “calibration curve.”

The estimation models have grown in scope over time, benefiting from the emergence of a large number of project databases.

The scientific models vied for universal application and aroused keen interest among tool developers, who gobbled up such equations to generate prediction systems. However, the empirically derived “local” models rarely made it to the core of management consciousness.


Advent of Parametric Models

The estimation methods discussed so far, from ROM estimates to the scientific equations, have their uses but are not as reliable as we would like them to be. The need for more reliable estimates has become increasingly important.

For this reason, software parametric cost estimating tools have been developed since the late 1970s to provide a better-defined and more consistent software estimating process.

In these models, a process is seen from different angles or dimensions. Each dimension is represented by one parameter. Hence, with the help of multiple parameters we get a fuller picture of the process. These parametric models relate the individual parameters to cost. Hence, the cost model becomes naturally more realistic. In addition, the models consider the interactions between the parameters and process nonlinearity, making the model more reliable. These models can be seen as extensions of the simple two-variable models.


Calibration

The calibration procedure is theoretically very simple. Calibration is the process of comparing the output of the model with actual values. The differences, or errors, are noted, and from them correction factors can be derived. After calibration, with the help of the correction factors, the model reaches a higher level of accuracy. Every estimation model requires calibration before use. Calibration is, in a sense, customizing a generic model.

The calibration factor obtained is considered good only for the type of inputs that were used in the calibration runs. For a general, total-model calibration, a wide range of components with actual costs needs to be used. Numerous calibrations should be performed with different types of components in order to obtain a set of calibration factors for the expected estimating situations. An example of this is shown in the Chapter 7 applications.

An estimation model is only as good as its calibration. Even the best model is unreliable if it is not calibrated.
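A minimal sketch of this idea, assuming a simple multiplicative correction factor derived from completed projects; the historical pairs of estimated versus actual effort are hypothetical.

    # Derive a multiplicative calibration factor from past (model estimate, actual) pairs.
    def calibration_factor(history: list[tuple[float, float]]) -> float:
        """history: (model_estimate, actual) pairs from completed projects."""
        ratios = [actual / estimate for estimate, actual in history]
        return sum(ratios) / len(ratios)            # mean correction factor

    # Hypothetical data: the model has been underestimating by roughly 15 percent.
    past = [(100.0, 118.0), (80.0, 90.0), (150.0, 170.0)]
    factor = calibration_factor(past)

    def calibrated_estimate(raw_model_output: float) -> float:
        return factor * raw_model_output

    print(round(factor, 2), round(calibrated_estimate(120.0), 1))

Real calibration exercises usually derive separate factors for different component types, as noted above, rather than a single global factor.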


COCOMO

One of the most successful estimation models is COCOMO (constructive cost model) from Barry Boehm. The model has been revised and improved over the past 20 years and has recently been published as COCOMO II.2000. This is a model, Dr. Barry Boehm notes, “to help you reason about the cost and schedule implications of software decisions you may need to make.” The primary objectives of COCOMO II.2000 are to

  • Provide accurate cost and schedule estimates for both current and likely future software projects
  • Enable organizations to easily recalibrate, tailor, or extend COCOMO II to better fit their unique situations
  • Provide careful, easy-to-understand definitions of the model’s inputs, outputs, and assumptions
  • Provide a constructive model
  • Provide a normative model
  • Provide an evolving model

Here is a list of the major decision situations for which you might want to use COCOMO II:

  • Making investment or other financial decisions involving a software development effort
  • Setting project budgets and schedules as a basis for planning and control
  • Deciding on or negotiating trade-offs among software cost, schedule, functionality, performance, or quality factors
  • Making software cost and schedule risk management decisions
  • Deciding which parts of a software system to develop, reuse, lease, or purchase
  • Making legacy software inventory decisions about what to modify, phase out, outsource, etc.
  • Setting mixed investment strategies to improve your organization’s software capability, via reuse, tools, process maturity, outsourcing, etc.
  • Deciding how to implement a process improvement strategy

COCOMO II.2000 Parameters

This model still uses the original 1980s equation but cleverly incorporates the influence of several process variables into the equation without escalating the complexity of Equation 9.12. A significant contribution of the model is the definition of the following parameters, or cost drivers.

COCOMO II handles 22 such cost drivers, listed in Exhibit 5, and uses the basic equations given as Equation 9.14 and Equation 9.15.


Levels

The cost drivers need to be recognized by the user and mapped to his project scenario. After recognizing the applicable cost drivers, the user must judge the impact levels of these drivers. COCOMO II.2000 uses six levels for each driver:

  1. VL = Very Low
  2. L = Low
  3. N = Normal
  4. H = High
  5. VH = Very High
  6. XH = Extra High

Lookup Table

The model proposes impact values for each level, as given in Exhibit 6, based on the best fit to empirical data. Our intention here is not to cover all application scenarios of the model, which vary according to the life cycle phase of the project (even the cost drivers might change accordingly). Rather, we present one example of the lookup table, as given in Exhibit 6, and consider the application of the model to typical decision making.

Exhibit 5. COCOMO II cost drivers.

Scale Factors

  1. Precedentedness (PREC)
  2. Development flexibility (FLEX)
  3. Risk resolution (RESL)
  4. Team cohesion (TEAM)
  5. Process maturity (PMAT)

Effort Multipliers

  1. Required software reliability (RELY)
  2. Database size (DATA)
  3. Product complexity (CPLX)
  4. Developed for reusability (RUSE)
  5. Documentation match to life cycle needs (DOCU)
  6. Execution time constraint (TIME)
  7. Main storage constraint (STOR)
  8. Platform volatility (PVOL)
  9. Analyst capability (ACAP)
  10. Programmer capability (PCAP)
  11. Personnel continuity (PCON)
  12. Applications experience (APEX)
  13. Platform experience (PLEX)
  14. Language and tool experience (LTEX)
  15. Use of software tools (TOOL)
  16. Multisite development (SITE)
  17. Required development schedule (SCED)

Equations

After selecting the cost drivers and choosing the appropriate levels from the lookup table, the corresponding values may be substituted into Equation 9.14 and Equation 9.15.


Effort Equation

PM = A * Size^E * Π(i=1..n) EM_i    (9.14)


where:
PM = effort in person-months
A = multiplicative constant, 2.94
E = B + 0.01 * Σ(j=1..5) SF_j
B = exponential constant, 0.91
Size = software size in KLOC
EM_i = effort multiplier i
SF_j = scale factor j


Exhibit 6. The COCOMO II.2000.

Drivers                                     VL      L       N       H       VH      XH

Scale factors
Precedentedness                             6.20    4.96    3.72    2.48    1.24    0.00
Development flexibility                     5.07    4.05    3.04    2.03    1.01    0.00
Risk resolution                             7.07    5.65    4.24    2.83    1.41    0.00
Team cohesion                               5.48    4.38    3.29    2.19    1.10    0.00
Process maturity                            7.80    6.24    4.68    3.12    1.56    0.00

Effort multipliers
Required software reliability               0.82    0.92    1.00    1.10    1.26    n/a
Database size                               n/a     0.90    1.00    1.14    1.28    n/a
Product complexity                          0.73    0.87    1.00    1.17    1.34    1.74
Developed for reusability                   n/a     0.95    1.00    1.07    1.15    1.24
Documentation match to life cycle needs     0.81    0.91    1.00    1.11    1.23    n/a
Execution time constraint                   n/a     n/a     1.00    1.11    1.29    1.63
Main storage constraint                     n/a     n/a     1.00    1.05    1.17    1.46
Platform volatility                         n/a     0.87    1.00    1.15    1.30    n/a
Analyst capability                          1.41    1.19    1.00    0.85    0.71    n/a
Programmer capability                       1.34    1.15    1.00    0.88    0.76    n/a
Personnel continuity                        1.29    1.12    1.00    0.90    0.81    n/a
Applications experience                     1.22    1.10    1.00    0.88    0.81    n/a
Platform experience                         1.19    1.09    1.00    0.91    0.85    n/a
Language and tool experience                1.20    1.09    1.00    0.91    0.84    n/a
Use of software tools                       1.17    1.09    1.00    0.90    0.78    n/a
Multisite development                       1.22    1.09    1.00    0.93    0.86    0.80
Required development schedule               1.43    1.14    1.00    1.00    1.00    n/a

Schedule Equation

TDEV = C * (PM)^F    (9.15)


where:
TDEV = development schedule in months
C = multiplicative constant, 3.67
F = D + 0.2 * 0.01 * Σ(j=1..5) SF_j
or
F = D + 0.2 * (E - B)
D = exponential constant, 0.28
PM = effort in person-months

It may be observed that, while filling in the equations, we take the sum of all the scale factor influences and the product of all the effort multiplier influences. The scale factor sum, together with the exponential constant B, controls the nonlinearity of the model. Because B is equal to 0.91, when the scale factor contribution (0.01 times the scale factor sum) is equal to 0.09, the exponent becomes unity (0.91 + 0.09 = 1.00). This represents a transition point in economies of scale. If the scale factor contribution is larger, the model predicts that cost grows faster than size for big projects; if it is smaller, the model predicts benefits of size in terms of cost reduction.
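A minimal sketch of Equations 9.14 and 9.15, using the constants given above; the scale factor and effort multiplier values in the example are the Nominal entries from Exhibit 6, and the 50 KLOC size is an arbitrary illustrative value.

    # COCOMO II.2000 effort and schedule (Equations 9.14 and 9.15).
    A, B = 2.94, 0.91     # multiplicative and exponential constants (effort)
    C, D = 3.67, 0.28     # multiplicative and exponential constants (schedule)

    def cocomo2(size_kloc: float, scale_factors: list, effort_multipliers: list):
        e = B + 0.01 * sum(scale_factors)
        em_product = 1.0
        for em in effort_multipliers:
            em_product *= em
        pm = A * size_kloc ** e * em_product          # effort in person-months
        f = D + 0.2 * (e - B)
        tdev = C * pm ** f                            # schedule in months
        return pm, tdev

    # Example: 50 KLOC, all five scale factors and all 17 effort multipliers at Nominal.
    nominal_sf = [3.72, 3.04, 4.24, 3.29, 4.68]       # N column of the scale factors
    nominal_em = [1.00] * 17
    pm, tdev = cocomo2(50.0, nominal_sf, nominal_em)
    print(round(pm, 1), "person-months,", round(tdev, 1), "months")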


Exhibit 7. COCOMO models.


COCOMO II.2000 Applications

The COCOMO estimation model can be used for decision making in all business processes. Some of these applications are described in the following sections.


Financial Decisions

The first application of the model is to run it for a range of sizes and predict the cost and time behavior, as illustrated in Exhibit 7.

These graphs depend on the assumptions made in selecting the cost drivers and their influence levels. When different people run the model, they might come up with different graphs, each representing a scenario. We can ask “what if” questions and explore a wide range of possibilities, especially while preparing budgets and proposals. This also helps in negotiating cost and time requirements with customers and stakeholders, and in refining our project assumptions.


Trade-Off Decisions

An important decision in projects is to balance conflicting factors. For example, high reliability may be a customer requirement, but the cost and time implications of offering higher reliability might not be objectively assessed and discussed during contract negotiations. While it is commonly known that reliability costs money, translating this knowledge into a cost function is tricky. Some people use a rule of thumb that doubling the reliability level would cost 50 percent more, but these are vague generalizations. What we need is a specific answer to the additional cost required to achieve a given increase in reliability, taking into account all the associated cost drivers. Another problem we face in this exercise is quantifying the required increase in reliability. Using the COCOMO model, we can simulate the impact of reliability on cost and time, as illustrated in Exhibit 8.

Exhibit 8. Trade-off decisions with COCOMO.

It may be observed that, using the simulation run, we can predict, for a given increment in reliability, the corresponding escalation in schedule (months) and in cost (person-months).

If constraints exist on cost and time, they can be projected onto the simulation run and the feasible level of reliability can be read off from the graph. At this point, if the feasible level and the required level are far apart, we encounter a trade-off situation: either additional cost must be provided or reliability must be traded off, both according to the model forecasts.

COCOMO provides for such trade-off analysis against several product constraints and process realities. Such trade-off calculations support some of the most critical decision-making moments in projects.


Risk Management Decisions

Risk perception is yet another critical step in project management. Although risk more often arises from external forces, its impact on cost and schedule has to be known with some level of credibility. Many risk perception tools, such as the Risk Exposure Matrix, depend on subjective assessment and give volatile results. COCOMO allows us to perform risk analysis with a reasonable level of objectivity. For example, let us consider the risk of attrition, because of which programmers with lower capabilities fill the vacancies left when experienced programmers leave. Assuming that the drop in programmer capability (PCAP) levels is from high (H) to low (L), the corresponding cost risk and schedule risk can be estimated, as illustrated in Exhibit 9.

Exhibit 9. Estimating the impact of attrition.

It is also possible to use the model to create worst-case risk scenarios, such as a simultaneous drop in all human capability factors affecting more than five cost drivers. COCOMO predicts very steep rises in cost and time when capabilities drop. It must be kept in mind that the worst scenarios may have a lower probability of occurrence. COCOMO does not deal with probability; it only computes the impact. It is left to the user to apply other methods for assessing probabilities.

We can perceive risk through almost all the cost drivers and build suitable risk scenarios on COCOMO.
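Because effort in Equation 9.14 is multiplicative in the effort multipliers, the cost impact of a single driver change can be read directly as a ratio of lookup values. The sketch below estimates the PCAP drop described above using the Exhibit 6 values; the schedule exponent used for the schedule ratio assumes all scale factors at Nominal.

    # Cost and schedule impact of PCAP dropping from High to Low (Exhibit 6 values).
    pcap_high, pcap_low = 0.88, 1.15
    effort_ratio = pcap_low / pcap_high                # multiplicative effect on effort
    # Schedule exponent F with all scale factors at Nominal (Equation 9.15).
    F = 0.28 + 0.2 * 0.01 * sum([3.72, 3.04, 4.24, 3.29, 4.68])
    schedule_ratio = effort_ratio ** F

    print(f"effort increases by {100 * (effort_ratio - 1):.0f}%")
    print(f"schedule increases by {100 * (schedule_ratio - 1):.0f}%")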


Sensitivity Analysis Decisions

Sensitivity analysis is the study of changes in the results due to small variations in the inputs. A small perturbation in the x variable gives rise to a change in the y variable. The ratio of the change in y to the change in x is the sensitivity factor.

The cost function is more sensitive to certain cost drivers than to others; also, in the case of a nonlinear relationship, such as that between cost and size, the sensitivity varies over the range.

The sensitivity of cost and time to platform volatility (PVOL) is illustrated in Exhibit 10. In this case, the relationship is almost linear and has a nearly constant sensitivity factor.

Understanding sensitivity in cost behavior gives special insight into the economic system and is highly informative input for the project staff.


Exhibit 10. Effect of platform volatility.
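As an illustration of how such a sensitivity factor can be computed, the sketch below perturbs one input of a cost function and reports the resulting change per unit of input. The cost function shown is a simple hypothetical stand-in, not the COCOMO cost function.

    # Numerical sensitivity factor: perturb one input, observe the change in cost.
    def sensitivity(cost_fn, x0: float, dx: float = 0.01) -> float:
        return (cost_fn(x0 + dx) - cost_fn(x0)) / dx   # change in y per unit change in x

    # Hypothetical cost function of a single driver value (e.g., a PVOL-like multiplier).
    def cost(driver_value: float) -> float:
        base_effort_pm = 200.0
        return base_effort_pm * driver_value

    print(round(sensitivity(cost, x0=1.15), 1))        # cost change per unit driver change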


Exhibit 11. Decision making for process maturity planning.


Strategic Decision Making with COCOMO

In planning for the long term, we may face questions such as, “What will be the cost benefit if we move from CMM level 2 to level 3 in process maturity?” or, “What are the cost benefits of having prior experience?” Beyond a feeling that all will go well when capability improves, we do not have dependable numbers on which to base an investment decision. COCOMO helps with such strategic decisions. An example is shown in Exhibit 11, which indicates the cost benefits from process maturity.


Exhibit 12. Applying COCOMO for HR decisions.


Optimization of Support Processes

A judicious use of estimation models is to combine them with other quantitative results and take an integrated view. For example, COCOMO can be used to estimate the cost advantage of skills, which can be shown as a project cost curve. This can be seen together with the cost of hiring skilled people, as shown in Exhibit 12. The two cost trends move in opposite directions, presenting a conflict. If we plot the total of both costs, we get an optimization curve. Depending on project constraints, we can choose the appropriate operating point.
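A minimal sketch of this total-cost view, assuming two hypothetical cost curves: a project cost that falls as skill level rises and a hiring cost that rises with skill level. The curves and numbers are illustrative only.

    # Find the skill level that minimizes total cost = project cost + hiring cost.
    skill_levels = [1, 2, 3, 4, 5]                             # hypothetical skill index
    project_cost = {1: 500, 2: 420, 3: 370, 4: 340, 5: 330}    # falls with skill (k$)
    hiring_cost = {1: 40, 2: 70, 3: 110, 4: 170, 5: 260}       # rises with skill (k$)

    total = {s: project_cost[s] + hiring_cost[s] for s in skill_levels}
    best = min(total, key=total.get)
    print("total cost by skill level:", total)
    print("optimal skill level:", best)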


Tailoring COCOMO

COCOMO can be tailored to a particular organization in various ways. It can be calibrated to existing project data by adjusting the constants A and B; even the impact levels can be readjusted if the data support it. Parameters that are redundant for a given project situation and its life cycle phase can be identified and eliminated. The strength of COCOMO lies in the fact that it can absorb additional cost drivers into its framework without affecting the mathematical formulation. Hence, the user can expand the model to map greater complexities.

COCOMO has all the benefits of a good estimation model. It further demonstrates how an estimation model can be applied to decision making. It allows us to see entire processes in a unified cost perspective. The model is flexible and adaptive. The model is so transparent that it has inspired the development of several estimation tools.


Estimation System

The estimation model has emerged as a management think tank. The scope of the models has increased from simple cost estimation to higher-level project concerns. Some models combine the basic prediction equations with constraint equations that represent project goals and attempt to solve the total project management system. This breed of estimation models predicts not only costs but defects as well, and is useful for tracking and controlling projects. One such example is SLIM.


SLIM (Software Life Cycle Management)

The estimation system SLIM was developed by Larry Putnam, based on the Rayleigh-Norden model given in Equation 9.11. It draws on a database of over 5000 projects and has been developed as a suite consisting of three modules:

  1. SLIM-Estimate: This tool can be used for estimation of effort required for software and for deciding on the strategy for the design and implementation in terms of suitable trade-off factors such as cycle time, team size, cost, quality, and risk.
  2. SLIM-Control: This tool is meant for project tracking and control using statistical process control to assess project status and highlight areas that need attention.
  3. SLIM-Metrics: This tool builds a repository of projects and performs benchmarking, for use in future estimation and better management of future projects.

SLIM-Estimate

SLIM requires three primary inputs. The first input is the proposed size of the application. SLIM is flexible enough that any of the popular sizing metrics can be used:

  • Source lines of code
  • Function points
  • Objects
  • Windows
  • Screens
  • Diagrams

The second input is productivity and complexity, captured at three levels of detail. SLIM also determines an appropriate productivity level based on answers to detailed questions.

The final input is the project constraints, including:

  • The desired schedule
  • The desired budget limit
  • The desired reliability (acceptable mean time to defect) at delivery
  • The minimum staffing required to have the skill mix to get the job done
  • The maximum practical staffing possible

SLIM uses this input information to determine an “optimum” estimate. The optimum estimate is the solution that gives you the highest probability of developing the system within the management constraints you have specified. If the constraints are too tight, the optimum estimate will exceed one or more of your goals. If this is the case, you must evaluate other practical alternatives. These might include scenarios for reduced-function products, increased staffing, or improved efficiency. Variations of the basic estimate can be logged so that you can compare the merits of each alternative and decide which estimate is the best.

SLIM presents the results of the estimate in an effective and persuasive way. There are 181 different reports and graphs that SLIM can generate. We can select the right ones for presentation from the following major categories:

  • Project description
  • Estimation analysis views
  • Schedule section
  • Risk analysis section
  • Staffing and skill breakout section
  • Effort and cost section
  • Reliability estimate section
  • Documentation section

SLIM can be calibrated with a minimum of metrics: project size, development time, and effort. SLIM covers the complete life cycle, up to maintenance and enhancement releases. SLIM estimates are extremely sensitive to the technology factor.


Software Sizing Tools

A key input to estimation models is size. If size estimation tools are available, they save a lot of time and effort in preparing size data for the estimation models. Some size estimation tools are:

  • ASSET-R
  • CA-FPXpert
  • CEIS
  • SIZEEXPERT
  • SEER-M

Estimation Tools

Many of the estimation models have been computerized and brought out as tools with many useful features. Tools make it easier to run these models and generate reports. Some tools provide special support for calibration as well. The following is a representative list of tools for estimation models:

  • PRICE-S
  • REVIC
  • SASET
  • SEER-SEM
  • SOFTCOST-R and SOFTCOST-ADA
  • The ANGEL Project
  • CoCoPro
  • Construx Estimate 2.0
  • COOLSoft
  • COSMOS
  • Costar
  • SLIM