Chapter 8
Process Models

From Analysis to Systems Thinking

Systems thinking means moving from metrics analysis toward synthesis, from a single-variable perspective to a multi-variable perspective, from single-dimensional considerations to multi-dimensional ones. Systems thinking involves creating process maps and process models for interactive decision making, keeping system goals in mind throughout.

A higher-level map should be created before designing process models. For this map, the following four categories of processes in software development may be considered:

  1. Project management processes
  2. Process management processes
  3. Engineering processes
  4. Support processes

All these processes work as a system to make deliveries to the customer; inputs may be taken from suppliers. The idea of a process map therefore expands to include both customer and supplier processes. A hierarchy of process models can now be envisaged to represent the selected processes or sub-processes. In the systems approach, process metrics are applied to build an integrated whole that relates to business goals and results.


Model Building: Knowledge Consolidation

A model is a representation of the real world. We break a complex problem into manageable parts and use a model to describe how they are linked together. The power of a model lies not in the complexity of its mathematics but in the way the problem is broken down and organized into a structure.

Each model addresses, and hence remains relevant to, definite goals such as decision making or problem solving; each model is based on several assumptions. For practical reasons, a model is often built from a local perspective on a global situation. The scope of model application is correspondingly approximate, limited in reach, and local in perspective.

Models are of great practical use, although they are imitations of the real world. Process models are known to be supportive of the following activities:

  • Process management
    • Process capability study
    • Process control
    • Process improvement
    • Process optimization
  • Project management
    • Strategic management
    • Technology management
    • Knowledge management
    • Uncertainty management
  • Forecasting
    • Prediction
    • Risk analysis
    • Estimation
    • Planning
  • Learning
    • Process characterization
    • Process simulation
    • Decision analysis
    • Problem solving
    • Training and learning

Models are knowledge structures created in convenient forms that allow use and reuse by their authors and by others. Without model building, the vast array of knowledge elements — scattered across the organization, embedded in the memories of individuals, present as fragments in records — runs the risk of being lost to posterity. Models are legacies from history, extracted from experience, created by process innovators. They are also process learning centers.


Theoretical Models: The Soul

The soul of a model lies in the conceptual framework that originally inspired model building, helped in the selection of parameters, influenced the choice of functions, and, in the case of discrete modeling, helped in deciding on the discrete levels. Theoretical models represent an expected behavior and can be expressed in several forms, including:

  • Verbal description
  • Table
  • Flow chart
  • Diagrams
  • Graphical presentation
  • Simulation methods
  • Linear programming
  • Equation
  • Computer algorithm

Process metrics are used to denote variables when constructing mathematical models; semantic measurements are used in other models. Both bring out the inner order of processes.


Basic Empirical Models

Empirical models are built from data. They do not aim to explain process behavior but to predict it. A combination of exploratory data analysis methods and statistical techniques can be used to build empirical models from metrics data. The value (and complexity) of a model increases with the number of metrics it uses.


Models Using Single Metric (Analytical Models)

Single-metric models result from data analysis in the time and frequency domains. The most common application of these models lies in establishing baselines and probability curves, such as the normal and Rayleigh distributions. These models are particularly helpful in process control and process capability analysis. The design of analytical models, their interpretation, and the pattern recognition possibilities they offer have been outlined in Chapters 5 and 6.


Models Using Two Metrics (Regression Models)

Two-metric models — regression models — result from analysis in the relationship domain. In their simplest form, these are scatter plots. We can fit regression curves to these data and generate rigorous models, linear and nonlinear, as needed.
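As a minimal illustration of fitting a regression line to two-metric data, the following Python sketch fits and assesses a straight line; the metric names and values here are hypothetical, and the numpy library is assumed to be available.

    import numpy as np

    # Hypothetical paired observations of two process metrics
    size = np.array([24, 56, 124, 230, 321, 435, 645, 945])      # e.g., size in DSLOC
    effort = np.array([0.2, 0.6, 1.0, 0.8, 1.0, 1.2, 2.0, 3.8])  # e.g., person-months

    # Fit a straight line (degree 1); a higher degree gives a nonlinear fit
    slope, intercept = np.polyfit(size, effort, 1)
    predicted = slope * size + intercept

    # Coefficient of determination (R^2) as a rough goodness-of-fit check
    ss_res = np.sum((effort - predicted) ** 2)
    ss_tot = np.sum((effort - effort.mean()) ** 2)
    print(f"effort = {slope:.4f} * size + {intercept:.4f}, R^2 = {1 - ss_res / ss_tot:.3f}")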


Visual Models

Visual models such as the radar chart or Pareto chart have the inherent power to present multiple metrics data in one window, more for understanding and less for prediction. The matrix structure is another form that allows analysis of complex relationships between two sets of variables; matrix cells can be filled with semantic expressions or numeric values, making a matrix either a merely visual model or a rigorous numerical model. Matrices can thus switch their level of rigor and mathematical power.


Decision Support

The basic empirical models fulfill their intended purpose of supporting decision making; examples are listed in Exhibit 1.


Higher-Level Empirical Models

Higher-level empirical models deal with real-life complexities more earnestly. The multivariate treatment of cost by the Constructive Cost Model (COCOMO), presented in Chapter 8, is a noteworthy example. Building such a sophisticated and powerful model requires a great deal of scientific effort, extensive data collection and analysis, and a rigorous approach. Building such a model is like building a tool: the project must be sponsored and the cost shared by users.


Exhibit 1. Examples of analytical models for decision making.

Model Name                         Decision Support                                 Number of Metrics Used
Effort profile                     Strategic budgeting                              Single
Defect profile                     Indicates process maturity                       Single
Defect signature                   Helps in expecting defects                       Many
Matrix                             Multivariate modeling                            Many
Radar                              Influence balancing                              Many
Pareto chart                       Prioritization                                   Many
Control chart                      Identifies outliers, creates baseline            Single
Trend chart                        Forecast                                         Single
Moving average                     Detects seasonal fluctuations                    Single
Histogram                          Process tendencies                               Single
Empirical frequency distribution   Risk prediction, goal setting                    Single
Regression line                    Creates estimation models, calibration curves    Two

We wish to consider those higher-level models that practitioners can build with ease; we need models that can be constructed within project life cycles by project team members. Pragmatism drives us to parsimonious approaches to model building. This is an area where continuous innovation is in progress; new forms are discovered every day, somewhere, in some project. We present a few examples of such an approach.

Approaches for building parsimonious empirical models at the higher level, which are illustrated in this chapter, are

  • Descriptive statistics on multiple metrics
  • Multiple analysis of single metrics
  • Three analytical dimensions
  • Process diagnostic panel
  • Analytical summary of single metrics
  • Global summary of metrics system
  • Correlation matrix
  • Multiple scatter plots
  • DOE

Pragmatism, economy, and simplicity have been achieved in these models by relying on visual synthesis of analysis results instead of attempting complicated mathematical treatments.


Descriptive Statistics on Multiple Metrics

Descriptive Statistics

A basic but comprehensive treatment of metrics data is achieved by descriptive statistics. The following estimates are presented in the descriptive statistics summary produced by Excel:

  • Mean
  • Standard error
  • Median
  • Mode
  • Standard deviation
  • Sample variance
  • Kurtosis
  • Skewness
  • Range
  • Minimum
  • Maximum
  • Sum
  • Count

This descriptive statistics model examines, by means of its components, several key aspects of process behavior including:

  • Central tendency
  • Variation
  • Bias

Taken together, these statistics characterize a process and can be called a rudimentary single-variable model. The nuances and anomalies in process behavior, seen through a single metric window, are brought out in this model.
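The same summary that Excel produces can be scripted; the following is a minimal sketch using the pandas library, with invented effort variance observations standing in for a real metric.

    import pandas as pd

    # Hypothetical observations of a single metric (effort variance %)
    ev = pd.Series([22.0, 18.5, 17.6, 10.0, 11.4, 12.7, 19.0, 13.0, 28.9, 70.5])

    summary = {
        "Mean": ev.mean(),
        "Standard error": ev.sem(),
        "Median": ev.median(),
        "Mode": ev.mode().iloc[0],      # first mode if several values tie
        "Standard deviation": ev.std(),
        "Sample variance": ev.var(),
        "Kurtosis": ev.kurtosis(),
        "Skewness": ev.skew(),
        "Range": ev.max() - ev.min(),
        "Minimum": ev.min(),
        "Maximum": ev.max(),
        "Sum": ev.sum(),
        "Count": ev.count(),
    }
    for name, value in summary.items():
        print(f"{name:20s} {value:10.3f}")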


Building a Multiple Metrics Model

To build a multiple metrics model, we need to choose a set of metrics that we think will characterize the process chosen for modeling. For example, let us see how we can build a process model of the descriptive statistics kind using available core metrics.

We begin with a goal of characterizing the business process using the following critical factors selected and presented here:

  • Growth
  • Customer satisfaction
  • Profit
  • Excellence in software engineering
  • Human resource
  • Productivity
  • Quality
  • Fixed assets performance

Business performance involves eight factors, the first three representing results and the remaining five denoting what causes the results.

Then we will proceed to select metrics that capture each of the critical factors. We may have to design new metrics if we do not already have the required metrics. A practical method is to choose from the existing metrics plan. In this example, the following metrics have been selected from a running metrics system:

  • Market share%
  • Customer satisfaction index (scale 0 to 10)
  • Rework
  • Effort variance%
  • Schedule variance%
  • Size variance%
  • Review effectiveness
  • Absentees%
  • Productivity
  • Defect density
  • Downtime of assets

One can map these metrics to the goal factors to ensure that the right metrics have been selected. Sometimes we cannot find a perfect mapping; some metrics may be only weakly coupled to the goal factors. Then we have to balance the cost of defining a new metric and collecting additional data against the benefits.

We can get descriptive statistics for each metric. Only seven statistics have been chosen for this modeling. If these seven statistical estimates are compiled for all 11 metrics and shown in the format given in Exhibit 2, we have built a higher-level model. This model gives an 11-metric snapshot of business performance and makes it easier for the user to make judgments. Seeing all the metrics statistics in a single framework promotes strategic views; snapshots of previous years can be compared with the present. Cognitive perception of the multi-metric data may yield clues to strengths and weaknesses.

Similar models can be created for different levels of the organization. There is a possibility of building an exclusive model for each of the support processes, supplier processes, engineering processes, etc. Hierarchy in such models follows hierarchy in the metrics plan.
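As a sketch of how such a snapshot might be compiled, the following pandas fragment computes the Exhibit 2 statistics for several metrics at once; the metric names follow the list above, and the observations are invented for illustration.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    # Hypothetical monthly observations for four of the selected metrics
    data = pd.DataFrame({
        "Market share %":    rng.normal(12, 2, 24),
        "Effort variance %": rng.normal(25, 10, 24),
        "Defect density":    rng.normal(0.8, 0.2, 24),
        "Productivity":      rng.normal(280, 60, 24),
    })

    # One row per metric, one column per statistic, as in Exhibit 2
    # (mode is omitted; for continuous metrics it is usually read from a histogram)
    snapshot = data.agg(["mean", "median", "std", "max", "min"]).T
    snapshot["range"] = snapshot["max"] - snapshot["min"]
    print(snapshot.round(2))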


Exhibit 2. Multiple metrics model using descriptive statistics.

Metric                         Mean   Median   Mode   Std. Dev.   Max.   Min.   Range
Market share
Customer satisfaction index
Effort variance
Schedule variance
Size variance
Defect density
Review effectiveness
Productivity
Rework
Downtime of assets
Absentees (percent)

Multiple Analysis of Single Metrics

In this model, we take a single metric and perform multiple analyses to understand and illustrate complex process behavior. Consider the measurement of effort in software components. From this measurement alone, we can develop the most commonly used metric, effort variance %, by calculating the normalized deviation from budgeted effort. If we choose to analyze this metric in the time domain, we can perform at least the following four analyses, instead of stopping with the control chart:

  1. Run chart
  2. Linear trend
  3. Moving average trend
  4. Control charts with UCL and LCL

Each analysis presents a certain view of the process in the time domain. Exhibit 3 illustrates a pack of four time series analysis graphs. The run chart reveals broad process behavior. The linear trend chart captures a heavily averaged process trend, useful in strategic forecasting. The moving average trend chart registers slow local variations and gives warning if systematic trends exist. The control chart serves dual purposes: on one hand, it serves as a baseline from which one can predict process performance; on the other, based on the defined LCL and UCL, it points to outliers for root cause analysis and corrective action.
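All four analyses can be scripted directly from the measurement data. The following is a minimal numpy sketch using the first ten rows of the table below, with 1-sigma control limits as assumed elsewhere in this chapter.

    import numpy as np

    estimated = np.array([250, 271, 85, 290, 210, 165, 200, 230, 90, 156])
    actual    = np.array([305, 321, 100, 319, 234, 186, 238, 260, 116, 266])

    ev = 100.0 * (actual - estimated) / estimated   # effort variance %
    t = np.arange(1, len(ev) + 1)                   # event order for the run chart

    # Linear trend: least-squares line over event order
    slope, intercept = np.polyfit(t, ev, 1)

    # Moving average with a window of five events
    window = 5
    moving_avg = np.convolve(ev, np.ones(window) / window, mode="valid")

    # Control limits and outliers
    mean, sigma = ev.mean(), ev.std(ddof=1)
    ucl, lcl = mean + sigma, mean - sigma
    outliers = t[(ev > ucl) | (ev < lcl)]
    print(f"trend slope {slope:.2f} %/event; UCL {ucl:.1f}; LCL {lcl:.1f}; outliers at events {outliers}")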

Project ID   Estimated Effort   Actual Effort   Effort Variance %
P1           250                305             22.0
P2           271                321             18.5
P3           85                 100             17.6
P4           290                319             10.0
P5           210                234             11.4
P6           165                186             12.7
P7           200                238             19.0
P8           230                260             13.0
P9           90                 116             28.9
P10          156                266             70.5
P11          153                258             68.6
P12          65                 105             61.5
P13          186                223             19.9
P14          268                344             28.4
P15          65                 100             53.8
P16          65                 92              41.5
P17          175                258             47.4
P18          350                438             25.1
P19          286                382             33.6
P20          65                 289             344.6
P21          253                316             24.9
P22          260                306             17.7
P23          360                472             31.1
P24          220                286             30.0
P25          350                452             29.1
P26          158                207             31.0
P27          225                285             26.7
P28          350                404             15.4
P29          400                499             24.8
P30          55                 118             114.5

μ = 43.1
σ = 61.2

Exhibit 3. Time series analysis graphs.


Seeing the Meaning

For good interpretation, effort values are assumed to be expressions of project cost, having a strong bearing on human resource utilization. The effort variance metric, seen through the four graphs, therefore emerges with the following meanings:

  • Resource utilization
  • Budget
  • Estimation accuracy
  • Implementation commitment
  • Organizational learning

In the first place, we expect a learning curve. Event after event passes by, and the most natural process in a human system is experiential learning by repeatedly doing things. Learning is expected in financial control and in estimation capability. Both expectations demand a trend in which effort variance steadily falls with time.

The possibility of seasonal variations also catches our eye in the moving average model; a point to be considered before it is dismissed.

The control charts bring to our attention the outliers that have crossed the border — the threshold limits. We question whether the organization has seen and responded to the extreme deviations. We also wonder whether proper goals have been set at the process level, and the disadvantages of not meeting the goals have been communicated with clarity.


Creating Additional Metrics from the Same Data

Additional metrics can be created from the same data, at no extra cost of data collection. In the current example, instead of normalizing effort variation by the estimated effort, we can compute the absolute value of effort escalation (or cost escalation).

Percentage variation is one thing; absolute escalation is another. Large percentages of small amounts may be less significant than small percentages of large amounts. Creating a second metric as a cost function, as in Exhibit 4, presents a completely different picture: the outliers are different, and the warning signals occur at different points in time. The cost function (the absolute value of effort escalation) shows a serious financial problem that the traditional metric does not highlight.

Whenever a second or third meaningful metric can be created from the same data, by all means we should extend the model to include the new perspective.

Exhibit 3 and Exhibit 4 illustrate two different approaches in creating multiple analysis graphs from the same data.
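A sketch of deriving both metrics from the same measurements, using the first ten projects of the table:

    import numpy as np

    estimated = np.array([250, 271, 85, 290, 210, 165, 200, 230, 90, 156])
    actual    = np.array([305, 321, 100, 319, 234, 186, 238, 260, 116, 266])

    variance_pct = 100.0 * (actual - estimated) / estimated  # Metric 1: normalized
    escalation   = actual - estimated                        # Metric 2: absolute escalation

    # The two metrics can flag different projects as the worst offenders
    worst_by_variance   = np.argsort(variance_pct)[::-1][:3] + 1
    worst_by_escalation = np.argsort(escalation)[::-1][:3] + 1
    print("worst projects by variance %:", worst_by_variance)
    print("worst projects by escalation:", worst_by_escalation)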


Three Analytical Dimensions

A process behaves in three dimensions: frequency, time, and relationship. A process is to be felt and sensed in these three orthogonal dimensions, shown in Exhibit 5. This is a fundamental concept in process analysis: seeing in a single dimension lacks depth and misses many precious details. In the experience of the author, there is no process problem that lies outside this three-dimensional analytical universe; the dimensions of even the most complex problem can be reduced to these three meaningful sets. For example, software productivity, defined in the simplest possible style as the ratio of size to effort, is analyzed in three dimensions, and the results are shown as a composite picture in Exhibit 6, which consists of the following graphs:

  • The time series baseline
  • Frequency distribution showing two modes
  • A scatter plot between size and productivity
             Measurement                        Metric 1            Metric 2
Project ID   Estimated Effort   Actual Effort   Effort Variance     Effort Escalation
P1           250                305             22.0                55
P2           271                321             18.5                50
P3           85                 100             17.6                15
P4           290                319             10.0                29
P5           210                234             11.4                24
P6           165                186             12.7                21
P7           200                238             19.0                38
P8           230                260             13.0                30
P9           90                 116             28.9                26
P10          156                266             70.5                110
P11          153                258             68.6                105
P12          65                 105             61.5                40
P13          186                223             19.9                37
P14          268                344             28.4                76
P15          65                 100             53.8                35
P16          65                 92              41.5                27
P17          175                258             47.4                83
P18          350                438             25.1                88
P19          286                382             33.6                96
P20          65                 289             344.6               224
P21          253                316             24.9                63
P22          260                306             17.7                46
P23          360                472             31.1                112
P24          220                286             30.0                66
P25          350                452             29.1                102
P26          158                207             31.0                49
P27          225                285             26.7                60
P28          350                404             15.4                54
P29          400                499             24.8                99
P30          55                 118             114.5               63

μ = 43.1
σ = 61.2

Exhibit 4. Control charts: two metrics derived from same data (different messages).


Exhibit 5. Three dimensions of metrics.

ID   Effort (PM)   Size (DSLOC)   Productivity (DSLOC/PM)
1    1.0           300            300.00
2    2.3           450            195.65
3    0.9           200            222.22
4    0.2           24             120.00
5    1.2           232            193.33
6    1.2           435            362.50
7    0.2           22             110.00
8    0.6           56             93.33
9    0.5           43             86.00
10   1.0           124            124.00
11   1.1           321            291.82
12   1.0           345            345.00
13   1.3           455            350.00
14   2.0           645            322.50
15   4.0           1002           250.50
16   3.8           945            248.68
17   2.7           730            270.37
18   0.8           230            287.50
19   1.1           435            395.45
20   1.1           354            321.82
21   1.0           322            322.00
22   0.8           244            305.00
23   1.0           343            343.00
24   1.0           321            320.00
25   3.9           945            242.31
26   0.9           349            387.78
27   1.2           284            236.67
28   1.4           463            330.71

Exhibit 6. Composite view from three analytical dimensions.

The time series baseline gives a first-order picture of the process, centered on the mean and sigma. The probability curve projects the most common behavior. The scatter plot finds a governing relationship showing how size influences productivity. These three graphs present the process capability, the goal, and a process constraint, respectively. Seeing the three graphs together and comparing their messages, we detect a hidden process problem.

The frequency diagram shows two tendencies, and the dominant mode looks worthy of becoming an aggressive goal. If one sets a productivity goal without seeing the frequency diagram, there could be a critical error of setting an inferior goal, because the mean of the baseline is much lower than the dominant mode.

The productivity metric is decomposed, and the relationship patterns between the elemental factors are exposed. Scatter plots can be drawn between chosen pairs. Here the goal (productivity) and a driver (size) are studied.

The scatter diagram reveals a more basic constraint: it shows a nonlinear dependency of productivity on size. Productivity initially increases with size, reaches a saturation point, and then drops sharply. Perhaps the productivity goal should depend on size.

Viewed independently, the graphs would not have highlighted the problem or educated the viewer. Visual synthesis helps in discovering unseen problems.

In this example, problem discovery can lead to the following benefits:

  • Perfection in goal setting
  • Precision in metrics data collection
  • More effective resource allocation

Process decisions are best taken from a composite view of all three dimensions. Visual synthesis of the graphical analyses is thus an easy and effective way to achieve a balanced judgment of the process under study.
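A minimal matplotlib sketch of the composite view, using the effort and size data of Exhibit 6; the panel titles follow the three dimensions named above.

    import numpy as np
    import matplotlib.pyplot as plt

    effort = np.array([1.0, 2.3, 0.9, 0.2, 1.2, 1.2, 0.2, 0.6, 0.5, 1.0,
                       1.1, 1.0, 1.3, 2.0, 4.0, 3.8, 2.7, 0.8, 1.1, 1.1,
                       1.0, 0.8, 1.0, 1.0, 3.9, 0.9, 1.2, 1.4])   # person-months
    size = np.array([300, 450, 200, 24, 232, 435, 22, 56, 43, 124,
                     321, 345, 455, 645, 1002, 945, 730, 230, 435, 354,
                     322, 244, 343, 321, 945, 349, 284, 463])     # DSLOC
    productivity = size / effort

    fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 3.5))

    ax1.plot(productivity, marker="o")                  # time domain: baseline
    ax1.axhline(productivity.mean(), linestyle="--")    # centered on the mean
    ax1.set_title("Time series baseline")

    ax2.hist(productivity, bins=8)                      # frequency domain: look for modes
    ax2.set_title("Frequency distribution")

    ax3.scatter(size, productivity)                     # relationship domain
    ax3.set_title("Productivity vs. size")

    plt.tight_layout()
    plt.show()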


Process Diagnostic Panel

Analytical views, lower-level models, and graphs can all be combined to form a diagnostic panel: a super model representing complex processes, such as the support process illustrated in Exhibit 7.


Where Mathematical Solutions Are Messy

Creating a mathematical model for the entire collection of support processes is very complicated. The support processes constitute the environment in which core processes function, and a systems model for this environment has to cope with two difficult tasks:

  • Developing an objective function
  • Integrating exclusive elements

Exhibit 7. Diagnostic panel for support processes.

One task is formulating an osmosis model for the influence of the environment on the core function; such a model depends on very abstract factors that also shape the organizational climate. Erecting constraint equations for this model poses additional difficulties: the boundary value functions are blurred by statistical uncertainty terms. It turns out that the support process is, in scientific terms, an ill-defined problem that defies classical solutions. One approach is to model the continuous process discretely and use finite element methods to solve the simultaneous equations. Instead of a rigorous effort first to formulate a scientific problem and then to solve it by finite element methods or neural networks, we propose the construction of a diagnostic panel, as illustrated in Exhibit 7.


Heuristic Run

The diagnostic panel can be designed to provide dynamically changing views as different metrics are chosen. Given that each of the support processes may run a dozen metrics or more, dynamic choice makes the panel interactive. Using the five-element panel shown in Exhibit 7 and making heuristic displays by dynamically changing the choice of metrics, we can analyze the whole, frame by frame. Each run results in a frame, and one can go through several runs until one succeeds in generating a mental model.


Exhibit 8. Analytical summary of single metrics.

Analysis Method    Analysis Statistics       Baseline Value   Inferences
Control chart      Mean
                   UCL (1 sigma)                              Outliers…
                   LCL (1 sigma)                              Outliers…
Linear trend       Equation                                   Forecast (next event)
                   R2 (confidence level)     80 percent
Moving average     Window: 5                                  Forecast (next event)
Frequency          Mode                                       Assigning the goal, capability, and risk percentage
                   Capability (percent)
                   Risk (percent)


Exploratory Data Analysis (EDA)

A diagnostic panel, as a super model, requires an intelligent design of the metrics database and the use of exploratory data analysis (EDA) methods to present dynamic views.


Analytical Summary of Single Metric

Analytical summary tables are models that reduce the large volume of numbers produced by statistical analysis to a brief table. These summaries reduce the amount of detail one has to go through and avoid the interpretation problems caused by information overload. The table uses agreed-upon, familiar symbols and statistical terms to summarize findings.

The summary table, as a process model, allows us to focus on crisp messages that have been distilled from data. Results without messages have already been excluded from the table in the preparatory rounds of message filtering.

In Exhibit 8, we present this tabular model with two dimensions of process perception. If the metric happens to be complex and needs to be decomposed, the third dimension, relationship, can be added and the table suitably expanded.

This process model template is meant to establish baselines in the pertinent dimensions, and it keeps interpretation as a model element (possibly a subjective one). This way of representing processes is ideally suited to the following types of processes:

  • Core processes
  • Critical processes
  • Cost management

Sometimes the messages drawn may show signs of inconsistency, and a unique final message may not emerge from the table. The summary table is not a final model and certainly not a conclusive summary; it is a convenient compilation of scattered messages.


Global Summary

We now attempt to reduce the entire project management scenario to a single tabular model. A global summary table of core metrics, as shown in Exhibit 9, achieves just that. Each row in the table is dedicated to a local model seen through one metric. All the core metrics are covered, row by row; when we are through with all the rows, we will have scanned the entire situation. Note that each row of the global summary in Exhibit 9 represents the entire analytical summary of a single metric, as shown in Exhibit 8.

The global model also includes, in the last column, the goals that drive the process. The model is now very rich and comprehensive, juxtaposing statistical behavior with management intent. The global summary table includes SPC models, risk models, capability models, forecasts, trends, and process tendencies, all seen in the light of goals.

This tabular model is a worthy addition to the process baseline reports published every quarter by the Software Engineering Process Group (SEPG) in organizations. The global summary helps in seeing the following at a glance:

  • Process performance summary in three dimensions
  • Goal tracking
  • Internal benchmarking
  • Performance trends and forecasting

Above all, the global summary table performs as a fact finder for the CEO, and can also provide objective vision.

Exhibit 9. Global summary.

Sl. No.  Metric   Control Chart        Trend Chart                                     Frequency Analysis                      Goal
                  Mean   UCL   LCL     R2   Forecast (Linear)   Forecast (Mov. Avg.)   Mode   Capability (percent)   Risk (percent)
1        M1
2        M2
3        M3
4        M4
5        M5
6        M6
7        M7
8        M8
9        M9
10       M10

Exhibit 10. Correlation matrix.

      M1    M2    M3    M4    M5    M6    M7    M8    M9
M1    r11   r12   r13   r14   r15   r16   r17   r18   r19
M2          r22   r23   r24   r25   r26   r27   r28   r29
M3                r33   r34   r35   r36   r37   r38   r39
M4                      r44   r45   r46   r47   r48   r49
M5                            r55   r56   r57   r58   r59
M6                                  r66   r67   r68   r69
M7                                        r77   r78   r79
M8                                              r88   r89
M9                                                    r99

Process Correlations

Correlation studies, as seen in Chapter 7, help us study the relationship between metrics, taken two at a time. But process results are influenced simultaneously by several process variables and, hence, by several metrics. Pending full multivariate analysis, we can get a bird's-eye view of process relationships by analyzing the correlation between pairs of metrics and arranging the results in the form of a correlation matrix.

The correlation matrix is a relationship model; the value in each cell represents the strength (on a scale of 0 to 1) and type (positive or negative) of a relationship. The format of the correlation matrix is illustrated in Exhibit 10 for a set of nine chosen metrics. The cells are filled with correlation coefficients rmn, which define the relationship between the associated metrics. Process correlation models of this kind have also been used as higher-level process diagnostic tools, as in the quality function deployment (QFD) system. Basically, a correlation matrix reveals the conflicts and connectedness that exist in the process. It also serves as a gap analysis, exposing unexpected and unhealthy relationships.

The correlation matrix structure is quite elastic and can accommodate as many metrics as we choose, virtually allowing large degrees of freedom in model building.
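A correlation matrix of the Exhibit 10 kind is a single call in pandas. The following sketch uses synthetic data for four metrics drawn from the earlier list, with a size-to-effort-variance coupling built in so that at least one strong coefficient appears.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    n = 40
    size = rng.uniform(50, 1000, n)
    df = pd.DataFrame({
        "Size": size,
        "Effort variance": 0.05 * size + rng.normal(0, 10, n),  # loosely size-driven
        "Schedule variance": rng.normal(15, 5, n),
        "Defect density": rng.normal(0.8, 0.2, n),
    })

    # Pearson correlation coefficients r_mn between every pair of metrics
    print(df.corr().round(2))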


Multiple Scatter Plots

Beyond capturing the strengths of relationships in the process, we may have to establish mathematical relationships involving all the metrics. Multivariate model building requires special effort and budget not generally available in a project environment. However, a nearly equivalent model can be composed from scatter plots, using the structure illustrated in Exhibit 11. Four metrics are considered in the multiple scatter plot model:

  1. Defect density
  2. Effort variance
  3. Schedule variance
  4. Size

Exhibit 11. Multiple scatter plots.

The purpose of this model is to establish the defect drivers in the process and to see defect correlations with the core metrics. Standard search algorithms seek to establish dependencies and associations between the ordered pairs of metrics shown in Exhibit 12. Six regression models have been constructed and combined into a composite model in Exhibit 11.

A multiple scatter plot facility is available in many data analysis tools and can easily be built into a spreadsheet. Using the drag-and-drop facility, the dataset can be changed to view different scatter plots. Through iterative interactions, we can understand relationship patterns in the process. These plots provide insight into the process chemistry.
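A composite window of pairwise scatter plots, similar in spirit to Exhibit 11, can be sketched with the pandas scatter_matrix helper; the data here is synthetic, as in the correlation sketch above.

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    from pandas.plotting import scatter_matrix

    rng = np.random.default_rng(2)
    n = 40
    size = rng.uniform(50, 1000, n)
    df = pd.DataFrame({
        "Defect density": rng.normal(0.8, 0.2, n),
        "Effort variance": 0.05 * size + rng.normal(0, 10, n),
        "Schedule variance": rng.normal(15, 5, n),
        "Size": size,
    })

    # Every pairwise scatter plot in one composite window;
    # histograms on the diagonal show each metric's distribution
    scatter_matrix(df, figsize=(8, 8), diagonal="hist")
    plt.tight_layout()
    plt.show()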

Exhibit 12. Dependencies and associations between ordered pairs of metrics.

Dependent Variable    Independent Variable
Defect density        Effort variance
Defect density        Schedule variance
Defect density        Size
Effort variance       Size
Effort variance       Schedule variance
Schedule variance     Size

Design of Experiment (DOE)

Building Models from Experimental Data

The models we have seen thus far are based on data collected by an ongoing metrics system from natural observation posts. The data comes from an economically designed system, built to suit the broader needs of the organization.

When we take up process studies that are special tasks — temporary tasks, from the project view — announced to solve a special problem, we may find that the available data is not enough. Instead of changing the metrics system to acquire this data, which would impose a permanent cost burden, we resort to experiments to collect the data needed for the purpose at hand.

Experiments are also done under controlled conditions, assuring better quality and consistency in the data. Scientifically designed, these experiments help build capable models.

The models built from DOE are expressions of relationship, which can be plotted and visually interpreted.


Design of Experiments

An experimental design is the set of plans and instructions by which the data in an experiment is collected. Design of experiments is a standard statistical technique for the simultaneous evaluation of two or more parameters that influence system performance and its variability.

The design of experiments technique is especially useful when there is a need to optimize a process that involves the interactions and effects of several variables at several levels, and when concrete information is absent.

Design of experiments gives a fast and pragmatic approach to the optimization of processes.


Approach to Experiments

  • State the problem or area of concern.
  • State the objective of the experiment.
  • Select the quality characteristic and measurement system.
  • Identify control and noise factors.
  • Select levels for the factors.
  • Select interactions that may influence the selected quality characteristics.
  • Design the experimental matrix and conduct the trials.
  • Analyze and interpret results of the experimental trials.
  • Conduct confirmation experiment.
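As an illustration of the kind of analysis such an experiment yields, the following sketch runs a two-level full factorial design for three hypothetical process factors and estimates their main effects; both the factor names and the response values are invented.

    import itertools
    import numpy as np

    # Two-level full factorial for three hypothetical factors:
    # A = review rate, B = team experience, C = tool usage (-1 = low, +1 = high)
    runs = np.array(list(itertools.product([-1, 1], repeat=3)))

    # Observed response for each of the eight runs, e.g., defect density (invented)
    response = np.array([0.9, 0.8, 0.7, 0.5, 0.85, 0.75, 0.6, 0.4])

    # Main effect = mean response at the high level minus mean at the low level
    for name, column in zip("ABC", runs.T):
        effect = response[column == 1].mean() - response[column == -1].mean()
        print(f"main effect of {name}: {effect:+.3f}")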

Models from DOE

DOE yields process models that take into account the simultaneous influence of several factors. These models capture process behavior more exactly than the models we have seen so far. What is more, DOE offers economical ways of doing the experiment without losing quality of results.

Such models can easily be created for products. For modeling processes in a project, experiments are not always feasible: a project is a one-shot process, and repetition is rare. Perhaps DOE can be used when new processes go through pilot runs, which can be thought of as experimental runs. During these runs, a temporary metrics plan must be drafted to support DOE.

While in DOE we take pains to make sure that factors are changed simultaneously, not one at a time, we must recognize that in practical situations factors naturally change in just this way. Hence, natural observations can yield truer pictures than simulated environments.

The wealth of naturally collected metrics data awaits model building; we should exploit it before we jump to experiments.
