Appendix V: Traditional IT Metrics Reference

Product and Process

Sample product metrics include

  1. Size: Lines of code, pages of documentation, number and size of tests, token count, function count
  2. Complexity: Decision count, variable count, number of modules, size/volume, depth of nesting
  3. Reliability: Count of changes required by phase, count of discovered defects, defect density = number of defects/size, count of changed lines of code

Sample process metrics include

  1. Complexity: Time to design, code, and test; defect discovery rate by phase; cost to develop; number of external interfaces; defect fix rate
  2. Methods and tool use: Number of tools used and why, project infrastructure tools, tools not used and why
  3. Resource metrics: Years of experience with the team, years of experience with the language, years of experience with the type of software, MIPS per person, ratio of support personnel to engineering personnel, ratio of nonproject time to project time
  4. Productivity: Percentage of time to redesign, percentage of time to redo, variance of schedule, variance of effort

Once the organization determines the slate of metrics to be implemented, it must develop a methodology for reviewing the results of the metrics program. Metrics are useless if they do not result in improved quality and/or productivity. At a minimum, the organization should

  1. Determine the metric and measuring technique
  2. Measure to understand where you are
  3. Establish worst, best, planned cases
  4. Modify the process or product depending on the results of measurement
  5. Remeasure to see what has changed
  6. Reiterate
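
The sketch below illustrates steps 2 through 5 in code. It is a minimal Python example, not a prescribed implementation; the metric name, the worst/best/planned thresholds, and the measured value are hypothetical, and a lower-is-better metric (such as defect density) is assumed.

    # Minimal sketch of steps 2-5: measure, compare against worst/best/planned
    # cases, and decide whether the process needs modification. All values are
    # hypothetical, and lower values are assumed to be better.
    def assess_metric(name, measured, worst, best, planned):
        """Return a short recommendation based on where the measurement falls."""
        if measured >= worst:
            return f"{name}: {measured} is at or beyond the worst case ({worst}); modify the process."
        if measured <= best:
            return f"{name}: {measured} meets the best case ({best}); remeasure next cycle."
        if measured > planned:
            return f"{name}: {measured} misses the planned case ({planned}); adjust and remeasure."
        return f"{name}: {measured} is within plan; remeasure next cycle."

    # Hypothetical defect-density reading in defects per KSLOC.
    print(assess_metric("defect density", measured=4.2, worst=6.0, best=1.0, planned=3.0))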

Traditional Configuration Management Metrics

The following metrics are typically used by those measuring the configuration management (CM) process:

  1. Average rate of variance from scheduled time.
  2. Rate of first pass approvals.
  3. Volume of deviation requests by cause.
  4. The number of scheduled, performed, and completed configuration management audits by each phase of the life cycle.
  5. The rate of new changes being released and the rate that changes are being verified as completed. History compiled from successive deliveries is used to refine the scope of the expected rate.
  6. The number of completed versus scheduled (stratified by type and priority) actions.
  7. Man-hours per project.
  8. Schedule variances.
  9. Tests per requirement.
  10. Change category count.
  11. Changes by source.
  12. Cost variances.
  13. Errors per thousand lines of code (KSLOC).
  14. Requirements volatility.

Process Maturity Framework Metrics

The set of metrics in this section is based on a process maturity framework developed at the Software Engineering Institute (SEI) at Carnegie Mellon University. The SEI framework divides organizations into five levels based on how mature (i.e., organized, professional, aligned to software tenets) the organization is. The five levels range from initial, or ad hoc, to an optimizing environment. Using this framework, metrics should be divided into five levels as well. Each level is based on the amount of information made available to the development process. As the development process matures and improves, additional metrics can be collected and analyzed, as shown in Table A5.1.

Table A5.1 Relationship of Software Measures to Process Maturity

MATURITY LEVEL | MEASUREMENT FOCUS | APPLICABLE CORE MEASURES
1 | Establish baselines for planning and estimating project resources and tasks | Effort, schedule progress (pilot or selected projects)
2 | Track and control project resources and tasks | Effort, schedule progress (project-by-project basis)
3 | Define and quantify products and processes within and across projects | Products: size, defects. Processes: effort, schedule (compare the above across projects)
4 | Define, quantify, and control subprocesses and elements | Set upper and lower statistical control boundaries for the core measures; use estimated vs. actual comparisons for projects and compare across projects
5 | Dynamically optimize at the project level and improve across projects | Use statistical control results dynamically within the project to adjust processes and products for improved success

Level 1: Initial process. This level is characterized by an ad hoc approach to software development. Inputs to the process are not well defined but the outputs are as expected. Preliminary baseline project metrics should be gathered at this level to form a basis for comparison as improvements are made and maturity increases. This can be accomplished by comparing new project measurements with the baseline ones.

Level 2: Repeatable process. At this level, the process is repeatable in much the same way that a subroutine is repeatable. The requirements act as input, the code as output, and constraints are such things as budget and schedule. Even though proper inputs produce proper outputs, there is no means to easily discern how the outputs are actually produced. Only project-related metrics make sense at this level since the activities within the actual transitions from input to output are not available to be measured. Measures at this level can include

  1. Amount of effort needed to develop the system
  2. Overall project cost
  3. Software size: noncommented lines of code, function points, object and method count
  4. Personnel effort: actual person-months of effort, reported person-months of effort
  5. Requirements volatility: number of requirements changes

Level 3: Defined process. At this level, the activities of the process are clearly defined. This additional structure means that the input to and output from each well-defined functional activity can be examined, which permits a measurement of the intermediate products. Measures include

  1. Requirements complexity: number of distinct objects and actions addressed in requirements
  2. Design complexity: number of design modules, cyclomatic complexity
  3. Code complexity: number of code modules, cyclomatic complexity
  4. Test complexity: number of paths to test; if object-oriented development, then number of object interfaces to test
  5. Quality metrics: defects discovered, defects discovered per unit size (defect density), requirements faults discovered, design faults discovered, fault density for each product
  6. Pages of documentation

Level 4: Managed process. At this level, feedback from early project activities is used to set priorities for later project activities. Activities are readily compared and contrasted, and the effects of changes in one activity can be tracked in the others. At this level, measurements can be made across activities and are used to control and stabilize the process so that productivity and quality can match expectation. The following types of data are recommended to be collected. Metrics at this stage, although derived from the following data, are tailored to the individual organization.

  1. Process type: What process model is used and how is it correlating to positive or negative consequences?
  2. Amount of producer reuse: How much of the system is designed for reuse? This includes reuse of requirements, design modules, test plans, and code.
  3. Amount of consumer reuse: How much does the project reuse components from other projects? This includes reuse of requirements, design modules, test plans, and code. (By reusing tested, proven components, effort can be minimized and quality can be improved.)
  4. Defect identification: How and when are defects discovered? Knowing this will indicate whether those process activities are effective.
  5. Use of defect density model for testing: To what extent does the number of defects determine when testing is complete? This controls and focuses testing as well as increases the quality of the final product.
  6. Use of configuration management: Is a configuration management scheme imposed on the development process? This permits traceability, which can be used to assess the impact of alterations.
  7. Module completion over time: At what rates are modules being completed? This reflects the degree to which the process and development environment facilitate implementation and testing.

Level 5: Optimizing process. At this level, measures from activities are used to change and improve the process. This process change can affect the organization and the project as well.

IEEE-Defined Metrics

The Institute of Electrical and Electronics Engineers (IEEE) standards were written with the objective of providing the software community with defined measures currently used as indicators of reliability. By emphasizing early reliability assessment, this standard supports methods through measurement to improve product reliability.

This section presents a subset of the IEEE standard easily adaptable by the general IT community.

  1. Fault density. This measure can be used to predict remaining faults by comparison with expected fault density, determine if sufficient testing has been completed, and establish standard fault densities for comparison and prediction.

    Fd = F/KSLOC

    where:

       F = total number of unique faults found in a given interval resulting in failures of a specified severity level

    KSLOC = number of source lines of executable code and nonexecutable data declarations in thousands
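
    A minimal computational sketch of this measure (Python; the fault count and line count are hypothetical):

      # Fault density Fd = F / KSLOC, using hypothetical counts.
      faults_found = 27        # F: unique faults of the specified severity found in the interval
      source_lines = 12_500    # executable source lines plus nonexecutable data declarations
      ksloc = source_lines / 1000
      print(f"Fd = {faults_found / ksloc:.2f} faults per KSLOC")   # 2.16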

  2. Defect density. This measure can be used after design and code inspections of new development or large block modifications. If the defect density is outside the norm after several inspections, it is an indication of a problem.
    DD = (D1 + D2 + … + DI) / KSLOD
    where:

       Di = total number of unique defects detected during the ith design or code inspection process

       I = total number of inspections

    KSLOD = in the design phase, this is the number of source lines of executable code and nonexecutable data declarations in thousands
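
    A minimal computational sketch (Python; the per-inspection defect counts and KSLOD value are hypothetical):

      # Defect density DD = (D1 + ... + DI) / KSLOD, using hypothetical inspection data.
      defects_per_inspection = [14, 9, 6]   # Di: unique defects found in each design/code inspection
      kslod = 3.2                           # thousands of source lines inspected
      print(f"DD = {sum(defects_per_inspection) / kslod:.2f} defects per KSLOD")   # 9.06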

  3. Cumulative failure profile. This is a graphical method used to predict reliability, estimate additional testing time to reach an acceptable reliable system, and identify modules and subsystems that require additional testing. A plot is drawn of cumulative failures versus a suitable time base.
  4. Fault-days number. This measure represents the number of days that faults spend in the system from their creation to their removal. For each fault detected and removed during any phase, the number of days from its creation to its removal is determined (fault-days). The fault-days are then summed for all faults detected and removed to get the fault-days number at the system level, including all faults detected and removed up to the delivery date. In those cases where the creation date of the fault is not known, the fault is assumed to have been created at the middle of the phase in which it was introduced.
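
    A minimal sketch of the fault-days calculation (Python; the phase boundaries, fault records, and dates are hypothetical), including the midpoint rule for unknown creation dates:

      # Fault-days number: sum over removed faults of (removal date - creation date).
      # When the creation date is unknown, the middle of the introducing phase is used.
      from datetime import date

      phases = {"design": (date(2024, 1, 1), date(2024, 2, 29)),
                "code":   (date(2024, 3, 1), date(2024, 4, 30))}

      faults = [  # (phase introduced, creation date or None, removal date)
          ("design", date(2024, 1, 20), date(2024, 3, 15)),
          ("code",   None,              date(2024, 5, 10)),   # creation date unknown
      ]

      def creation_date(phase, created):
          if created is not None:
              return created
          start, end = phases[phase]
          return start + (end - start) / 2   # assume the middle of the phase

      fault_days = sum((removed - creation_date(phase, created)).days
                       for phase, created, removed in faults)
      print(f"Fault-days number = {fault_days}")   # 95
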
  5. Functional or modular test coverage. This measure is used to quantify a software test coverage index for a software delivery. From the system’s functional requirements, a cross-reference listing of associated modules must first be created.

    functional (modular) test coverage index = FE / FT

    where:

    FE = number of software functional (modular) requirements for which all test cases have been satisfactorily completed

    FT = total number of software functional (modular) requirements
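
    A minimal computational sketch (Python; the FE and FT counts are hypothetical):

      # Functional (modular) test coverage index = FE / FT, using hypothetical counts.
      fe = 42   # requirements for which all test cases have been satisfactorily completed
      ft = 50   # total functional (modular) requirements
      print(f"Test coverage index = {fe / ft:.2f}")   # 0.84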

  6. Requirements traceability. This measure aids in identifying requirements that are either missing from, or in addition to, the original requirements.
    TM = (R1 / R2) × 100%
    where:

    R1 = number of requirements met by the architecture

    R2 = number of original requirements
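
    A minimal computational sketch (Python; the requirement identifiers are hypothetical):

      # Requirements traceability TM = (R1 / R2) x 100%, using hypothetical requirement sets.
      original_requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4", "REQ-5"}   # R2: original requirements
      met_by_architecture   = {"REQ-1", "REQ-2", "REQ-4", "REQ-5"}            # R1: requirements met
      tm = len(met_by_architecture & original_requirements) / len(original_requirements) * 100
      print(f"TM = {tm:.0f}%")   # 80%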

  7. Software maturity index. This measure is used to quantify the readiness of a software product. Changes from a previous baseline to the current baseline are an indication of the current product's stability.

    SMI = [MT − (Fa + Fc + Fdel)] / MT

    where:

    SMI = software maturity index

      MT = number of software functions (modules) in the current delivery

       Fa = number of software functions (modules) in the current delivery that are additions to the previous delivery

       Fc = number of software functions (modules) in the current delivery that include internal changes from a previous delivery

      Fdel = number of software functions (modules) in the previous delivery that are deleted in the current delivery

    The software maturity index may be estimated as:

    SMI = (MT − Fc) / MT
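
    A minimal computational sketch of both forms (Python; the module counts are hypothetical):

      # Software maturity index, full form and estimate, using hypothetical counts.
      mt, fa, fc, fdel = 120, 6, 10, 2
      smi = (mt - (fa + fc + fdel)) / mt       # SMI = [MT - (Fa + Fc + Fdel)] / MT
      smi_estimate = (mt - fc) / mt            # estimated SMI = (MT - Fc) / MT
      print(f"SMI = {smi:.2f}, estimated SMI = {smi_estimate:.2f}")   # 0.85, 0.92
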
  8. Number of conflicting requirements. This measure is used to determine the reliability of a software system resulting from the software architecture under consideration, as represented by a specification based on the entity-relationship-attribute model. What is required is a list of the system's inputs, its outputs, and the functions performed by each program. The mappings from the software architecture to the requirements are identified. Mappings from the same specification item to more than one differing requirement are examined for requirements inconsistency. Additionally, mappings from more than one specification item to a single requirement are examined for specification inconsistency.
  9. Cyclomatic complexity. This measure is used to determine the structured complexity of a coded module. The use of this measure is designed to limit the complexity of the module, thereby promoting understandability of the module.

    C = E − N + 1

    where:

    C = complexity

    N = number of nodes (sequential groups of program statements)

    E = number of edges (program flows between nodes)
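
    A minimal computational sketch (Python; the control-flow graph below, describing a hypothetical module with a single if/else branch, includes a return edge from exit to entry so that C = E − N + 1 yields the usual value of 2 for one branch):

      # Cyclomatic complexity C = E - N + 1 for a small control-flow graph.
      nodes = ["entry", "test", "then", "else", "exit"]
      edges = [("entry", "test"), ("test", "then"), ("test", "else"),
               ("then", "exit"), ("else", "exit"), ("exit", "entry")]  # return edge closes the graph
      print(f"C = {len(edges) - len(nodes) + 1}")   # 2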

  10. Design structure. This measure is used to determine the simplicity of the detailed design of a software program. The values determined can be used to identify problem areas within the software design.

    DSM = (W1 × D1) + (W2 × D2) + … + (W6 × D6)

    where:

    DSM = design structure measure

     P1 = total number of modules in program

     P2 = number of modules dependent on input or output

     P3 = number of modules dependent on prior processing (state)

     P4 = number of database elements

     P5 = number of nonunique database elements

     P6 = number of database segments

     P7 = number of modules not single entrance/single exit

    The design structure is the weighted sum of six derivatives determined by using the aforementioned primitives.

    • D1 = design organized top-down
    • D2 = module dependence (P2/P1)
    • D3 = module dependent on prior processing (P3/P1)
    • D4 = database size (P5/P4)
    • D5 = database compartmentalization (P6/P4)
    • D6 = module single entrance/exit (P7/P1)

    The weights (Wi) are assigned by the user based on the priority of each associated derivative. Each Wi has a value between 0 and 1.
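
    A minimal computational sketch (Python; all primitive counts and weights are hypothetical, and D1 is treated as a simple 0/1 indicator):

      # Design structure measure DSM = sum of Wi * Di, built from primitives P1..P7.
      p1, p2, p3, p4, p5, p6, p7 = 40, 12, 8, 100, 15, 5, 4
      d = [1.0,        # D1: design organized top-down (1 = yes, 0 = no; assumed binary here)
           p2 / p1,    # D2: module dependence
           p3 / p1,    # D3: modules dependent on prior processing
           p5 / p4,    # D4: database size
           p6 / p4,    # D5: database compartmentalization
           p7 / p1]    # D6: module single entrance/exit
      w = [0.2, 0.2, 0.2, 0.1, 0.1, 0.2]   # user-assigned weights, each between 0 and 1
      print(f"DSM = {sum(wi * di for wi, di in zip(w, d)):.2f}")   # 0.34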

  11. Test coverage. This is a measure of the completeness of the testing process from both a developer and user perspective. The measure relates directly to the development, integration, and operational test stages of product development.
    TC (%) = [(implemented capabilities) / (required capabilities)] × [(program primitives tested) / (total program primitives)] × 100%
    where program functional primitives are either modules, segments, statements, branches, or paths; data functional primitives are classes of data; and requirement primitives are test cases or functional capabilities.
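
    A minimal computational sketch (Python; the capability and primitive counts are hypothetical):

      # Test coverage TC(%) = (implemented/required capabilities) x (primitives tested/total primitives) x 100%.
      implemented_capabilities, required_capabilities = 45, 50
      primitives_tested, total_primitives = 800, 1000
      tc = (implemented_capabilities / required_capabilities) * (primitives_tested / total_primitives) * 100
      print(f"TC = {tc:.0f}%")   # 72%
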
  12. Data or information flow complexity. This is a structural or procedural complexity measure that can be used to evaluate the information flow structure of large-scale systems, the procedure and module information flow structure, the complexity of the interconnections between modules, and the degree of simplicity of relationships between subsystems, and to correlate total observed failures and software reliability with data complexity.

    weighted IFC = length × (fanin × fanout)²

    where:

    • IFC = information flow complexity
    • fanin = local flows into a procedure + number of data structures from which the procedure retrieves data
    • fanout = local flows from a procedure + number of data structures that the procedure updates
    • length = number of source statements in a procedure (excluding comments)

    The flow of information between modules and/or subsystems needs to be determined either through the use of automated techniques or charting mechanisms. A local flow from module A to B exists if one of the following occurs:

    1. A calls B
    2. B calls A, and A returns a value to B that B subsequently uses
    3. Both A and B are called by another module that passes a value from A to B.
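
    A minimal computational sketch of the weighted information flow complexity (Python; the procedure names, lengths, and fan-in/fan-out counts are hypothetical):

      # Weighted IFC = length x (fanin x fanout)^2, computed per procedure.
      procedures = {
          "parse_order":  {"length": 120, "fanin": 3, "fanout": 2},
          "update_stock": {"length": 80,  "fanin": 4, "fanout": 1},
      }
      for name, p in procedures.items():
          ifc = p["length"] * (p["fanin"] * p["fanout"]) ** 2
          print(f"{name}: weighted IFC = {ifc}")   # 4320 and 1280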

  13. Mean time to failure. This measure is the basic parameter required by most software reliability models. Detailed record keeping of failure occurrences that accurately tracks time (calendar or execution) at which the faults manifest themselves is essential.
  14. Software documentation and source listings. The objective of this measure is to collect information to identify the parts of the software maintenance products that may be inadequate for use in a software maintenance environment. Questionnaires are used to examine the format and content of the documentation and source code attributes from a maintainability perspective.

    The questionnaires examine the following product characteristics:

    1. Modularity
    2. Descriptiveness
    3. Consistency
    4. Simplicity
    5. Expandability
    6. Testability

    Two questionnaires, the software documentation questionnaire and the software source listing questionnaire, are used to evaluate the software products in a desk audit.

    For the software documentation evaluation, the resource documents should include those that contain the program design specifications, program testing information and procedures, program maintenance information, and guidelines used in the preparation of the documentation. Typical questions from the questionnaire include

    1. The documentation indicates that data storage locations are not used for more than one type of data structure.
    2. Parameter inputs and outputs for each module are explained in the documentation.
    3. Programming conventions for I/O processing have been established and followed.
    4. The documentation indicates the resource (storage, timing, tape drives, disks, etc.) allocation is fixed throughout program execution.
    5. The documentation indicates that there is a reasonable time margin for each major time-critical program function.
    6. The documentation indicates that the program has been designed to accommodate software test probes to aid in identifying processing performance.

    The software source listings evaluation reviews either high-order language or assembler source code. Multiple evaluations using the questionnaire are conducted for the unit level of the program (module). The modules selected should represent a sample size of at least 10% of the total source code. Typical questions include

    1. Each function of this module is an easily recognizable block of code.
    2. The quantity of comments does not detract from the legibility of the source listings.
    3. Mathematical models as described/derived in the documentation correspond to the mathematical equations used in the source listing.
    4. Esoteric (clever) programming is avoided in this module.
    5. The size of any data structure that affects the processing logic of this module is parameterized.
    6. Intermediate results within this module can be selectively collected for display without code modification.

Selected Performance Metrics

There are a wide variety of performance metrics that companies use. In this section, we will list some actual metrics from selected organizations surveyed and/or researched for this book. The reader is urged to review Appendix III, which lists a wealth of standard IT metrics, and Appendix XII, which discusses how to establish a software measure program within your organization.

Distribution Center

  1. Average number of orders per day
  2. Average number of lines (SKUs) per order
  3. Picking rate by employee (order lines/hour) by storage zone (picking from automated equipment differs from picking from shelves)
  4. Average freight cost
  5. Number of errors by employee

On a monthly basis:

  1. Volume of inbound freight (SKUs and $ cost) by week
  2. Volume of outbound freight (SKUs and $ cost) by week
  3. Volume of repackaged goods (work orders) by week
  4. Comparison of in-house repackaged goods versus outsourced to compare efficiencies
  5. Cycle count $ cost variance (to check if things are disappearing at a higher rate than normal)
  6. Average shipping time to customer (these reports are provided by trucking carriers)
  7. Number of returns versus shipments
  8. Transcontinental shipments (we have two warehouses; the California warehouse should ship to Western customers and East Coast to eastern customers—this tells us when inventory is not balanced)

For bonuses, employees track (monthly):

  1. Expense control
  2. Revenue
  3. Accounts receivable turns
  4. Inventory turns

Software Testing

  1. Number of projects completed.
  2. Number of projects cancelled during testing.
  3. Number of defects found. This is further broken down into categories of defects, such as major defects (software will not install or causes a blue screen) and minor/cosmetic defects (text in a message box is missing). These numbers are put into a calculation that shows how much money we saved the company by catching defects before they were found in production (a sketch of this calculation follows the list).
  4. Number of new projects started. (Shows expected workload for next month.)
  5. Number of projects not completed/carried over to next month. (This shows if we are staying current with work. For example, if we started 50 new projects this month, and completed 20, we are carrying 30 projects to next month. Typically, this number is constant each month, but will increase if we encounter a number of difficult projects. The value of this metric is only meaningful compared with the number of new requests, number of projects completed, and number of requests carried forward in previous months.)
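
The sketch below illustrates the kind of savings calculation mentioned in item 3. It is a minimal Python example; the per-defect cost figures and defect counts are illustrative assumptions, not values from the surveyed organization.

    # Estimated savings from catching defects in test rather than in production.
    # Cost figures and defect counts are hypothetical.
    cost_to_fix_in_test = 500          # average cost per defect fixed during testing ($)
    cost_to_fix_in_production = 5000   # average cost per defect that escapes to production ($)
    defects_found = {"major": 12, "minor": 40}
    savings = sum(defects_found.values()) * (cost_to_fix_in_production - cost_to_fix_in_test)
    print(f"Estimated savings: ${savings:,}")   # $234,000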

Product Marketing

  1. New customers over multiple periods
  2. Lost customers over multiple periods
  3. Customer retention percentage
  4. Product quality: total defects
  5. Technical support: number of calls per product
  6. Product life cycle: time from requirements to finished product, percentage of original requirements implemented, number of out-of-scope requirements
  7. Sales support: number of nonsales resources supporting the channel, resource time in hours per week
  8. Product revenue: actual versus planned revenue by channel, region, market segment
  9. Product profit: revenue and expense by product, net profit, or contribution
  10. Market share: graph trends over multiple years, market share by key players in your segment
  11. Marketing programs: lead quality (leads to close ratio), ROI for marketing programs, cost per lead closed

Enterprise Resource Planning

Reduction of operational problems:

  1. Number of problems with customer order processing
  2. Percentage of problems with customer order processing
  3. Number of problems with warehouse processes
  4. Number of problems with standard reports
  5. Number of problems with reports on demand

Availability of the ERP system:

  1. Average system availability
  2. Average downtime
  3. Maximum downtime

Avoidance of operational bottlenecks:

  1. Average response time in order processing
  2. Average response time in order processing during peak time
  3. Average number of OLTP transactions
  4. Maximum number of OLTP transactions

Actuality of the system:

  1. Average time to upgrade the system
  2. Number of release levels behind the current (actual) level

Improvement in system development:

  1. Punctuality index of system delivery
  2. Quality index

Avoidance of developer bottlenecks:

  1. Average workload per developer
  2. Rate of sick leave per developer
  3. Percentage of modules covered by more than two developers

Project Management

CATEGORY | MEASUREMENT (HOW) | METRIC (WHAT)
Costs | Actual vs. budget | Labor (costs); materials (hardware/software); other (office space, telecom)
Schedule | Actual vs. planned | Key deliverables completed; key deliverables not completed; milestones met; milestones not met
Risks | Anticipated vs. actual | Event (actual occurrence); impact (effect on project)
Quality | Actual vs. planned activities | Number of reviews (peer, structured walkthrough); number of defects (code, documentation); type of defect (major/minor); origin of defect (coding, testing, documentation)

Software Maintenance

(Table of software maintenance metrics not reproduced.)

General IT Measures

CATEGORY | FOCUS | PURPOSE | MEASURE OF SUCCESS
Schedule performance | Tasks completed vs. tasks planned at a point in time. | Assess project progress. Apply project resources. | 100% completion of tasks on critical path; 90% all others.
 | Major milestones met vs. planned. | Measure time efficiency. | 90% of major milestones met.
 | Revisions to approved plan. | Understand and control project "churn." | All revisions reviewed and approved.
 | Changes to customer requirements. | Understand and manage scope and schedule. | All changes managed through approved change process.
 | Project completion date. | Award/penalize (depending on contract type). | Project completed on schedule (per approved plan).
Budget performance | Revisions to cost estimates. | Assess and manage project cost. | 100% of revisions are reviewed and approved.
 | Dollars spent vs. dollars budgeted. | Measure cost efficiency. | Project completed within approved cost parameters.
 | Return on investment (ROI). | Track and assess performance of project investment portfolio. | ROI (positive cash flow) begins according to plan.
 | Acquisition cost control. | Assess and manage acquisition dollars. | All applicable acquisition guidelines followed.
Product quality | Defects identified through quality activities. | Track progress in, and effectiveness of, defect removal. | 90% of expected defects identified (e.g., via peer reviews, inspections).
 | Test case failures vs. number of cases planned. | Assess product functionality and absence of defects. | 100% of planned test cases execute successfully.
 | Number of service calls. | Track customer problems. | 75% reduction after three months of operation.
 | Customer satisfaction index. | Identify trends. | 95% positive rating.
 | Customer satisfaction trend. | Improve customer satisfaction. | 5% improvement each quarter.
 | Number of repeat customers. | Determine if customers are using the product multiple times (could indicate satisfaction with the product). | "X"% of customers use the product "X" times during a specified time period.
 | Number of problems reported by customers. | Assess quality of project deliverables. | 100% of reported problems addressed within 72 h.
Compliance | Compliance with enterprise architecture model requirements. | Track progress toward department-wide architecture model. | Zero deviations without proper approvals.
 | Compliance with interoperability requirements. | Track progress toward system interoperability. | Product works effectively within system portfolio.
 | Compliance with standards. | Alignment, interoperability, consistency. | No significant negative findings during architecture assessments.
 | For website projects, compliance with style guide. | Ensure standardization of the website. | All websites have the same "look and feel."
 | Compliance with Section 508. | Meet regulatory requirements. | Persons with disabilities can access and use the functionality of the system.
Redundancy | Elimination of duplicate or overlapping systems. | Ensure return on investment. | Retirement of 100% of identified systems.
 | Decreased number of duplicate data elements. | Reduce input redundancy and increase data integrity. | Data elements are entered once and stored in one database.
 | Consolidation of help desk functions. | Reduce dollars spent on help desk support. | Consolidation plan approved by fill-in date.
Cost avoidance | System is easily upgraded. | Take advantage of, e.g., COTS upgrades. | Subsequent releases do not require a major "glue code" project to upgrade.
 | Avoidance of costs of maintaining duplicate systems. | Reduce IT costs. | 100% of duplicate systems have been identified and eliminated.
 | System is maintainable. | Reduce maintenance costs. | New version (of COTS) does not require "glue code."
Customer satisfaction | System availability (uptime). | Measure system availability. | 100% of requirement is met (e.g., 99% M-F, 8 a.m. to 6 p.m., and 90% S&S, 8 a.m. to 5 p.m.).
 | System functionality (meets customer's/user's needs). | Measure how well customer needs are being met. | Positive trend in customer satisfaction survey(s).
 | Absence of defects (that impact customer). | Number of defects removed during project life cycle. | 90% of expected defects were removed.
 | Ease of learning and use. | Measure time to becoming productive. | Positive trend in training survey(s).
 | Time it takes to answer calls for help. | Manage/reduce response times. | 95% of severity-one calls answered within 3 h.
 | Rating of training course. | Assess effectiveness and quality of training. | 90% of responses "good" or better.
Business goals/mission | Functionality tracks reportable inventory. | Validate system supports program mission. | All reportable inventory is tracked in system.
 | Turnaround time in responding to congressional queries. | Improve customer satisfaction and national interests. | Improve turnaround time from 2 days to 4 h.
 | Maintenance costs. | Track reduction of costs to maintain system. | Reduce maintenance costs by 2/3 over a 3-year period.
 | Standard desktop platform. | Reduce costs associated with upgrading users' systems. | Reduce upgrade costs by 40%.
Productivity | Time taken to complete tasks. | Evaluate estimates. | Completions are within 90% of estimates.
 | Number of deliverables produced. | Assess capability to deliver products. | Improve product delivery 10% in each of the next 3 years.

Business Performance

  1. Percentage of processes where completion falls within ±5% of the estimated completion
  2. Average process overdue time
  3. Percentage of overdue processes
  4. Average process age
  5. Percentage of processes where the actual number of assigned resources is less than the planned number of assigned resources
  6. Sum of costs of “killed”/stopped active processes
  7. Average time to complete task
  8. Sum of deviation of time (e.g., in days) against planned schedule of all active projects
  9. Service-level agreement (SLA) key performance indicators (KPIs), covered in the next section

SLA Performance

  1. Percentage of service requests resolved within an agreed-on/acceptable period of time
  2. Cost of service delivery as defined in SLA based on a set period such as month or quarter
  3. Percentage of outage (unavailability) due to implementation of planned changes, relative to the service hours
  4. Average time (e.g., in hours) between the occurrence of an incident and its resolution
  5. Downtime: the percentage of time that the service is unavailable
  6. Availability: the proportion of total service time that the service is operational, commonly computed from the mean time between failures (MTBF) and the mean time to repair (MTTR) as MTBF/(MTBF + MTTR); see the sketch after this list
  7. Number of outstanding actions against last SLA review
  8. Deviation from the planned budget (cost): the difference between the planned baseline cost and the actual cost of delivering the SLA
  9. Percentage of correspondence replied to on time
  10. Percentage of incoming service requests of customers that have to be completely answered within x amount of time
  11. Number of complaints received within the measurement period
  12. Percentage of customer issues that were solved by the first phone call
  13. Number of operator activities per call—maximum possible, minimum possible, and average. (e.g., take call, log call, attempt dispatch, retry dispatch, escalate dispatch, reassign dispatch)
  14. The number of answered phone calls per hour
  15. Total calling time per day or week.
  16. Average queue time of incoming phone calls
  17. Cost per minute of handle time
  18. Number of unanswered e-mails
  19. Average after call work time (work done after call has been concluded)
  20. Costs of operating a call center/service desk, usually for a specific period such as month or quarter
  21. Average number of calls/service requests per employee of call center/service desk within measurement period
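
The sketch below shows the availability calculation referenced in item 6. It is a minimal Python example; the MTBF and MTTR figures are hypothetical.

    # Availability derived from MTBF and MTTR (hypothetical figures, in hours).
    mtbf = 700.0   # mean time between failures
    mttr = 4.0     # mean time to repair
    availability = mtbf / (mtbf + mttr) * 100
    print(f"Availability = {availability:.2f}%")   # 99.43%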

Service Quality Performance

  1. Cycle time from request to delivery
  2. Call length (the time to answer a call)
  3. Volume of calls handled (per call center staff member)
  4. Number of escalations (how many tasks have gone bad)
  5. Number of reminders (how many tasks are at risk)
  6. Number of alerts (overall summary)
  7. Customer ratings of service (customer satisfaction)
  8. Number of customer complaints (problems)
  9. Number of late tasks (lateness)
  10. Efficiency KPIs. The following are KPI examples indicating efficiency performance:
      • Cycle time from request to delivery
      • Average cycle time from request to delivery
      • Call length
      • Volume of tasks per staff
      • Number of staff involved
      • Number of reminders
      • Number of alerts
      • Customer ratings of service
      • Number of customer complaints
      • Number of process errors
      • Number of human errors
      • Time allocated for administration, management, and training
  11. Compliance KPIs (see the next section)

Compliance Performance

  1. Average time lag between identification of external compliance issues and their resolution
  2. Frequency (in days) of compliance reviews
  3. Budget KPIs, for example, the sum of deviations (in monetary terms) from the planned budgets of all projects