8
MEASURE

“Not everything that can be counted counts, and not everything that counts can be counted.”

Albert Einstein

As a software development project manager, Karen was focused on burndown charts and tracking the completion of user stories in each sprint. Her manager kept asking her how many stories they had completed. There were high hopes that things would be better after they adopted agile. This time, they seemed to be on track, but at the last minute, they had to pull the plug and could not go live with a crucial feature the customer was expecting.

Kumar is under pressure to reduce the number of bugs. His manager has made it clear that all eyes are on the bugs and they need to get the count down. Kumar has been working hard and putting in a lot of extra hours, but the bug list doesn't seem to shrink. He is wondering if he is destined to fix bugs for life and what he can do about it.

Ken led the PMO for over two years in a healthcare organization. He felt a sense of pride in all the PMO team's accomplishments, and he thought he had the metrics to prove it. When I talked to him, he described how they had implemented a standard project management methodology with 90 percent compliance. They had provided standardized training to 100 percent of project managers, of whom 88 percent had gotten certified. The project success rate had increased, and 85 percent of their projects were on time and on budget. And yet he wondered why the executives continued to challenge and question the PMO's value and did not care about what they had accomplished.

WHY DOES MEASUREMENT CONTINUE TO BE ELUSIVE AND CHALLENGING?

The above scenarios highlight the prevalent enigma of measurement. Even though we live in the age of big data, machine learning, analytics, and algorithms, with sophisticated measurement systems built on key performance indicators, scorecards, and dashboards, measurement remains an elusive and challenging element of the DNA of strategy execution. Today, we have the capability to measure everything. The increasing type, volume, and frequency of the data streams we create and review are drowning and dumbing us into busywork, and we barely have a moment to look up from our devices and ask, what's the point? How could Karen, Kumar, and Ken, in the above scenarios, develop the measurement intelligence to come up with better measures that drive the right behavior and desired outcomes?

“That which is measured improves, but if you are measuring the wrong thing, making it better will do little or no good.”

Michael Hammer

Management educator and MIT professor Michael Hammer summarized the challenges in an MIT Sloan Management Review article, “7 Deadly Sins of Performance Measurement,” in 2007:

  • Vanity. Using measures that will inevitably make the organization, its people, and especially its managers look good. Just because it is being measured doesn't mean it is important. There could be a lot of other things that are important but are not being measured.
  • Provincialism. Siloed approach to measurement. Letting organizational boundaries dictate performance measures leads to sub‐optimization and conflict. Creating competition among departments seems compelling, but strong incentives tied to strong metrics force people to concentrate on just one part of the work, neglecting other contributing factors needed to achieve a goal.
  • Narcissism. Measuring from one's own point of view rather than from the customer or stakeholder's perspective.
  • Laziness. Measuring what has always been measured, rather than going through the effort of ascertaining what is truly important to measure. Project management is particularly guilty of this, as we continue to focus on triple-constraint metrics because projects have always been measured that way.
  • Pettiness. Measuring only a small component of what matters, for example, just tracking the number of bugs instead of the overall quality system.
  • Inanity. Implementing metrics without considering the consequences on human behavior and performance. People learn to game and manipulate metrics. A common example is using call duration to measure the performance of customer service reps, which drives the unintended behavior to rush through calls, instead of taking time to solve the customer's issue.
  • Frivolity. Arguing about metrics and looking for ways to pass the blame to others rather than shouldering the responsibility for improving performance.

These measurement sins are still pervasive, even with today's sophisticated measurement collection and reporting systems. In some instances, measurement malaise is worse today, in a world of page views, clicks, and likes, where measures are manipulated and misinterpreted and drive the wrong behavior. The pursuit of meaningful measurement remains elusive because measurement is not easy: deciding what to measure, how to measure, what behaviors our measures will drive, how to analyze and interpret the data, what the unintended consequences will be, and how the measurements will be manipulated are all tough to decipher.

Measuring projects, programs, and PMOs is fraught with its own challenges. Project measurements have long been bound within the triple constraints of cost, time, and scope. The question is, do these metrics promote the right behaviors and outcomes? Or are they harmful, as when a project manager rushes the completion of a milestone to meet the deadline, only to find out the customer cannot use it?

Typically, measurement approaches are rooted in the machine-oriented, industrial, cause-and-effect, linear paradigm we have been discussing in this book. The problem is that today's project environments are more like complex ecologies, in which it is hard to pin down a direct cause and effect. As a result, some benefits are nonlinear and unpredictable and may be hard to trace and attribute to specific initiatives.

To capture intangible value, measurement estimates are often based on assumptions and calculations that can be questioned. Let's assume your annual project budget is $100 million, planned revenue is $500 million, and 50 projects are planned for delivery. A PMO is implemented with the aim of achieving project efficiencies and increased project productivity. At the end of the year, you deliver 60 projects. The actual cost is still $100 million, but actual revenue is now $550 million, an additional $50 million in revenue! The challenge in this example is to isolate PMO value and attribute the benefit of the additional revenue to the PMO. The example rests on several assumptions that could be questioned: that we could do more projects because of efficiencies brought by the PMO alone, that the basis for calculating the additional revenue brought by 10 additional projects is legitimate, that the organization has a good handle on its portfolio, that the project and revenue information is reliable and accurate, and so on.
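A minimal sketch of this attribution arithmetic, using the example's figures; the calculation is only as credible as the assumptions just listed:

```python
# PMO value-attribution arithmetic from the example above.
# Loud assumption, as questioned in the text: the entire revenue
# delta is attributable to PMO-driven efficiencies alone.
planned_revenue = 500_000_000   # $500M planned
actual_revenue = 550_000_000    # $550M delivered
budget = 100_000_000            # $100M planned cost
actual_cost = 100_000_000       # $100M actual cost

incremental_revenue = actual_revenue - planned_revenue  # $50M
incremental_cost = actual_cost - budget                 # $0
pmo_attributed_benefit = incremental_revenue - incremental_cost

print(f"Benefit attributed to the PMO: ${pmo_attributed_benefit:,.0f}")
# -> Benefit attributed to the PMO: $50,000,000
```

The arithmetic itself is trivial; the hard part, as the text argues, is defending the attribution, since the same $50 million could equally be credited to market conditions, sales, or the project teams themselves.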

What gets measured is what's easy to measure, not necessarily important. As Dan Ariely points out in his book Payoff: The Hidden Logic That Shapes Our Motivations, organizations overemphasize the countable dimension. Following the principle of looking for your keys under the street lamp, managers are drawn to the subset of tasks that are easily measurable. Consequently, they overemphasize those parts of the job and divert attention and effort away from the uncountable dimension. The other mistake is to treat the uncountable dimension as if it were easily countable. In fact, reducing the intangible to something simplistic and countable often misses the point, for example, measuring the number of reports produced and accessed, rather than the quality or the benefits of the report.

The difficulty of capturing intangible value and distrust of soft measures often forces organizations to measure what can be measured, rather than what should be measured. They rely more on traditional metrics that are based purely on financial or hard measures. Some organizations with a heavy financial focus ban intangible metrics from consideration. This leads to a short‐sighted approach to measurement and does not capture the enabling and indirect value. Unfortunately, these measures often drive the wrong behaviors and promote gaming and adjustments to meet required targets.

A common challenge associated with selecting, collecting, and communicating measures is the ability to document, record, and capture good data, along with the reliability of systems and the accuracy of the information they generate. Benefits measurement data is often not easily available, is imprecisely defined, and is collected and communicated haphazardly. Some organizations with a heavy measurement culture collect all kinds of metrics. The problem is that the metrics may not connect or convey meaningful information in a coherent way.

Stakeholders and beneficiaries of the PMO can also be challenging, as they may not be sure about the real purpose of the PMO or what they want from it. In this sense, they are like typical customers, who are guilty of changing their minds about what constitutes value to them. A chief information officer we worked with complained that the portfolio management process the PMO had been implementing over the last year, of which he was a big proponent, was not adding value. He was considering disbanding the process because it did not solve his problem of getting additional resources from the steering committee. He was no longer sure of the purpose of the PMO. He was not willing to acknowledge the broader impact and the intangible value of better relationships with the business customers, among the many things this PMO process was enabling.

PMO value is invisible, which makes the stakeholders forgetful of how things have changed and unlikely to attribute it to the PMO. Like water to the fish or air to humans, the PMO's value is transparent and often taken for granted.

The above factors highlight the challenges with measurement and why effective measurement remains elusive. Everyone tries to measure; some measure too much, some too little. Often, what gets measured doesn't matter or, worse, drives the wrong behavior. To measure effectively, you must decode and work on the DNA strands of measurement.

THE DNA STRANDS OF MEASURE

Figure 8.1 lists the DNA strands of measure.


Figure 8.1: DNA Strands of Measure

Objectives

To start with, you have to know how to define success, and objectives help to define success. Effective objectives depend on two aspects: clarity of purpose and the perspective of measurement.

Purpose

Remember Karen from the beginning of the chapter? She was excited about her agile metrics and burndown charts, and her manager kept asking, how many stories have we completed? There was nothing wrong with the measures themselves, but they were not measuring intelligently. The measures did not help them go live on time. The customer was not happy, and they could not bill and realize the revenue that quarter. The question of how many stories were completed limited Karen's view. Questions limit the frame and focus of what we notice and measure. If Karen's manager had been clear on the definition of success and purpose, he would have asked, “How many stories can we release?” which assumes that testing and quality assurance have been completed and the production environment is ready. A better question might be, “How many stories are we happy to release?” Or, even better, “How happy and satisfied will the customer be with this release?” or “How much value will the customer experience with this release?” The quality of your question will help clarify the purpose and define the effectiveness of your measure.

In project management, there is a limited view of measurement: primarily to monitor, track, and report status, and sometimes to forecast and predict the status of deliverables and outputs. The purpose of measurement is not just to measure but to provide feedback for improvement. If the purpose is clear, it will spark the crucial questions. Effective measurements should help:

  • Influence behavior and drive performance toward desired outcomes and results
  • Provide feedback to adjust and adapt
  • Enhance customer satisfaction and experience
  • Set program/project priorities
  • Allocate resources effectively
  • Gain insights
  • Improve decision making and prioritization
  • Provide transparency
  • Communicate and inspire action
  • Demonstrate benefits and value
  • Illuminate impact

Perspective—Whose Perspective Are We Measuring From?

If the customer is the center of the universe and the purpose of business is to create customers, we must question whether our measures are customer- or stakeholder-centric (outside-in) or self-centric (inside-out). We tend to measure things that are important to us, like internal process efficiencies, rather than customer success and what matters to them. While you grapple with measures like customer satisfaction, customer success, and customer loyalty, which might all lead to a higher net promoter score, it all depends on what the customer perceives as value. That's why, before selecting measures, we must define and understand value.

Merriam-Webster's dictionary defines value as a “fair return or equivalent in goods, services, or money for something exchanged” and as “relative worth, utility, or importance.” We need to emphasize relative worth: perceived value from the receiver's standpoint. The ultimate measure of value is the sum of the benefits that accrue to each of the internal stakeholders, from the stakeholder's perspective.

Even though we may know this definition and understand it intellectually, in practice the tendency is to focus on the first part and forget the stakeholder's perspective. Efforts to show project and PMO value fall into this trap and present value inside-out, from the PMO's perspective, instead of outside-in, from the stakeholders' point of view. So when PMO leaders and team members are asked to show how they are providing value, they enthusiastically rush to show and tell all their accomplishments from their own perspective, such as how many processes they have rolled out, how extensive their PM methodology is, or how many project reviews they have conducted, without considering whether their stakeholders care about any of these things.

Ken was proud of his PMO accomplishments and thought he had the measures to prove it. He was surprised to find that the executives didn't care and continued to question the PMO's value. After we worked with him and helped the PMO implement some of the points made in this chapter, the light bulbs went on, and the team could see why they had been struggling. The measures they were reporting were PMO-centric, not customer- or stakeholder-centric. Nobody outside the PMO cared how many people complied with the methodology or how many people were trained, even though they knew there could be a correlation to project success. The PMO had not discussed and established a definition of success or of PMO value, and it had not attempted to measure and illuminate business value.

Value, like beauty, is in the eye of the beholder. The PMO may think it is providing great utility, but do the stakeholders or receivers think so? Usefulness must mean something to the receiver, not the provider. That is why, in the pursuit to demonstrate value, we must always start with the satisfaction of the stakeholders, beneficiaries, and end users of PMO activities and services. Understanding satisfaction is tricky because it is based on expectations and perceptions, which can be fuzzy, shifting, and elusive. Ultimately, satisfaction is stakeholders' perceptions minus their expectations: perceptions have to exceed expectations for a greater degree of stakeholder satisfaction. If expectations are high and perceptions are low, that is obviously a problem. For example, if the PMO is sold as the panacea for all project-related issues and six months into the implementation there is some progress but projects are still having problems, PMO satisfaction is going to be low despite improvements in some areas. Manage expectations and perceptions by identifying what is important to the stakeholders. Any effort to show value has to start by stepping into the stakeholders' shoes, feeling their pain, and focusing on how it can be addressed to their satisfaction. The stakeholders, beneficiaries, or customers of the PMO may include project managers, functional managers, end customers, senior management, business partners, vendors, and contractors, among others.
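The satisfaction relation described above can be expressed as a toy formula; the 1-to-10 scales here are hypothetical, chosen only to illustrate the point:

```python
def satisfaction(perception: float, expectation: float) -> float:
    """Stakeholder satisfaction as perception minus expectation.

    Both inputs are on a hypothetical 1-10 scale: a positive result
    means perceptions exceed expectations; a negative one means
    disappointment, even if real progress was made.
    """
    return perception - expectation

# An oversold PMO: expectation 9, perceived progress 6.
# Satisfaction is negative despite genuine improvement.
oversold = satisfaction(perception=6, expectation=9)   # -3
# A modestly positioned PMO that overdelivers scores positive.
modest = satisfaction(perception=8, expectation=6)     # 2
```

The design lesson is the second case: keeping expectations realistic is as much a lever on satisfaction as improving performance.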

One of the themes emphasized in this book is customer experience, and one of the measures for customer experience management is the Net Promoter Score® (Figure 8.2). For more information, refer to https://www.netpromoter.com/know/.


Figure 8.2: Net Promoter Score
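As a reminder of how the score in Figure 8.2 is computed, the standard NPS formula classifies 0-10 “likelihood to recommend” ratings into promoters (9-10), passives (7-8), and detractors (0-6), with NPS equal to the percentage of promoters minus the percentage of detractors. A small sketch:

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 'likelihood to recommend' ratings.

    Promoters score 9-10 and detractors 0-6; passives (7-8) count
    only in the denominator. NPS = % promoters - % detractors,
    yielding a score between -100 and +100.
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# 8 survey responses: 4 promoters, 2 passives, 2 detractors
# -> 100 * (4 - 2) / 8 = 25
score = net_promoter_score([10, 9, 8, 6, 10, 3, 9, 7])
```

Note that NPS is itself an outcome (BoB-style) measure: it summarizes what stakeholders feel, not what the PMO produced.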

The combination of clear purpose and right perspective leads to a better definition of success with specific objectives.

Key Results

Objectives help to define success, and key results help to measure it. Key results focus on measurement. This strand addresses three questions: What do we measure? What are the consequences of what we measure? How do we measure?

The biggest challenge is deciphering what to measure. Finding the right measures and metrics and the related algorithm is the key to effective strategy execution. What you measure puts the spotlight on that factor, but how do you know if that is the right thing? What about the things you are not measuring? Measurements drive behavior, and every measurement has a consequence. To select effective measures and metrics, start by questioning: Do current measures describe relevant current and past status and performance data? Do current measures help predict and forecast future outcomes and results? What leading metrics will help drive future performance and results? How can we link and balance metrics from all elements of the DNA of strategy execution? Do current measures from other areas of the DNA cascade and align to strategy? Do we balance between output and outcome measures? What is the key measure and related algorithm that can have a core impact on performance and results? What is the overall impact of our measures?

“We tend to overvalue the things we can measure and undervalue the things we cannot.”

John Hayes

Are You Ben or BoB?

What you measure depends on your measurement mindset, or persona: whether you are like Ben or BoB. Let's meet Ben first. He is a by-the-book project manager, and when you ask him about the status of his project, he is ready with his project measures and will rattle off, “The cost performance index (CPI) is 0.96, and the schedule performance index (SPI) is 1.06.” BoB is another project manager, and his response is different: “Just met with the customer, and the project is going to be delayed, but we uncovered a key customer functionality, and we can get it done, and this will have a positive impact on our revenue, about 15 percent better than we had projected.” Ben is focused on output measures; BoB is focused on outcomes. Ben is caught up in measuring process and activities; BoB is targeting results. Ben stands for the Benefit, and BoB stands for the Benefit of the Benefit.

Ben is limited to the benefit of the output, whereas BoB shows the real value related to strategic outcomes. For example, project measures confined to time, cost, scope, or quality are Ben (output) measures, whereas return on investment, time to market, and customer satisfaction are BoB (outcome) measures. Similarly, in the PMO realm, emphasizing the benefit of increased compliance, or an increased number of trained, certified project managers, without showing how that translates into greater project success or strategic benefit for the organization, is limiting and does not highlight business value. Other examples include Ben-oriented measures such as the number of visitors or mobile app downloads, versus real-value, BoB-oriented metrics like retention percentage, monetization, and net promoter score (NPS).

You can channel Ben and BoB by focusing on the distinction between outputs and outcomes. Outputs are deliverables, products, or services; a PM methodology, for example, is a product or service of the PMO. Outcomes, on the other hand, are the success criteria, or the measurable results of successful completion of the output; for example, 100 percent consistency in projects due to standard use of the PM methodology. All too often, there is a great deal of emphasis on collecting outputs, and not much emphasis on outcomes. In fact, outputs by themselves may have no intrinsic value unless they can be converted to outcomes. Outcomes can be further classified into benefits, which are the measurable desired results of the outcomes. One good test is for each measure to complete the sentence, “This ____ (output) will result in _____ (outcome).”

Ben and BoB mindsets apply to many things in life. Many people are limited by the Ben mindset: they can't think beyond outputs and seem satisfied with them. People and organizations with the BoB mindset go farther. Ben is satisfied with the near-term, in-view, within-reach, easily measurable benefit, which is often limited.

Ben is worried about the form; BoB is after the essence. While Ben is busy measuring and analyzing the finger pointing to the moon, BoB is calculating what needs to be done to reach the moon. BoB's approach is simple and strategic: how to move the needle and widen the wedge (discussed in Chapter 4). He does that by focusing on three things: keeping or creating more customers (increased revenue), saving money (reduced costs), and staying out of trouble (cost avoidance). BoB frames or translates everything Ben executes into one of these three aspects.

The question is, who do you want to be like, Ben or BoB? A quick response might be, “more like BoB,” but as you reflect on the nuance, you realize you can't get to BoB without Ben. Ben is execution oriented, and BoB is strategy focused. Neither is better than the other; we need both. If you are more Ben oriented, you need to start to zoom out and challenge yourself to see the world through BoB's eyes. If you are BoB, you have to understand you cannot realize your strategy without Ben and develop some Ben capabilities or make sure you have some Bens on your team. The ideal is BobbyBen, who is bimodal and balanced and can leverage both aspects to achieve desired results and make an impact on strategic execution.

Table 8.1 lists dimensions of measurement that can help you strike a balance between different types of measures.

Table 8.1: Outputs versus Outcomes

Outputs (Ben) Outcomes (BoB)
Tangible (hard) Intangible (soft)
Direct Indirect
Short‐term Long‐term
Quantitative Qualitative
Objective Subjective
Content‐specific Context‐specific
Lagging Leading
Operational Strategic
Procedural Behavioral

In developing a holistic approach to measuring and showing value, the challenge is to be able to include the opposing dynamics while striving to strike a balance between different types of measures. Often, only one side of these factors is taken into consideration. Many perspectives should be considered in developing a balanced approach to measurement.

The matrix in Figure 8.3 illustrates classification and examples of different types of measures and metrics and the need to balance.


Figure 8.3: Different Types of Measures

Another aspect that should be considered in selecting metrics is the importance of the end‐to‐end result instead of a small part of the process. In their book, Implementing Lean Software Development, Mary and Tom Poppendieck describe this as optimizing the whole, which means ensuring the metrics in use do not drive suboptimal behavior toward the real goal of delivering useful software.

Decoding the Algorithm that Leads to Desired Results: Focusing on What Influences and Drives Ben and BoB

It is important to balance Ben and BoB measures, but once they are set, you can only hope or pray that you achieve them. Ben measures like on-time or under-budget delivery, and BoB measures like revenue, margin, and profitability, are the goals or objectives to achieve. You can set the target, but you don't know whether you will achieve it, and by the time you measure and find out, it is already too late, as these are past, or lagging, measures. This is the difference between lag and lead measures. You should also focus on how what you measure can influence Ben and BoB, and on which measures will drive behavior toward achieving the objective. The key is to identify the leading measures that influence the achievement of Ben and BoB. You must ensure not just that you measure the right thing but that your measures are influencing the right behaviors.

Lag indicators are easy to measure but hard to improve or influence, while leading indicators are typically hard to measure and easy to influence. Lead indicators are more difficult to determine than lag indicators. A good test is to ask, does the measure allow you to influence the result? This is also the secret to decoding the measure and related algorithm that can lead to desired results.

The challenge is to identify the key measure that will influence the right behavior. In running, for example, instead of focusing on the goal of a seven-minute mile, measure cadence, or steps per minute. In basketball, instead of focusing on making 10 shots in a row, focus on keeping the elbows in. If you are trying to lose weight, instead of focusing on the number of pounds, focus on the number of calories consumed per day or the number of workouts per week. The key is to make sure it is a specific measure that is predictive. For example, instead of “improve concentration,” the measure should be “number of passes tossed correctly.”

Finding the right measures and metrics, and decoding the algorithm that leads to success, is not easy, but it should be a top priority of executives and managers at every level.

Once the right measure is identified, create a key measure profile for each measure as illustrated in Figure 8.4 to bring more rigor and credibility to the process.

Measure: Required PM Framework Project Completion Criteria (Effective Date Updated 7/1/0XX)
Measure Owner:
Business Objective:
Metric (Supported Key Success Factor): Consistent utilization of PM Framework (PMF) Methodology
Measure Definition: Percentage of projects with completed PMF criteria
Frequency of Measurement: Monthly
Unit of Measure: %
Calculation: (Total number of projects with completed criteria ÷ Total number of projects sampled) × 100
Polarity: Higher is better
Data Sources:
  • PMF logs
  • Project reports
  • PMO reports
  • Project documentation
Data Collection Process: Randomly select specified documentation from 50% of active projects and perform the measure calculation
Data Collector:
Performance Baseline: TBD; collect baseline data using results from July, August, and September samples
Performance Target: 100%
Performance Enablers: Utilizing the most recent version of deliverables/templates, continuously working the documents on a monthly basis, education, and a documented process

Figure 8.4: Key Measure Profile

Measurement rigor adds overhead, which needs to be balanced with the cost of precision, accuracy, robustness, and the value of the metric.

Once you know what to measure, the next challenge is how to measure. Do you have the right systems, tools, apps, or sensors to collect the data? We can get enamored with the colorful dials, dashboards, and scorecards of measurement systems, including project portfolio management (PPM) systems. The challenge is that they still depend on humans to enter the data that makes the information meaningful. These systems can't capture what is truly going on when information is missing, incorrect, or nuanced. In a world of wearables and sensors, the tools and apps are getting better at collecting and tracking data, but the PMO still has a lot of work to do to ensure the timely input and quality of data.

Reporting: Presentation and Communication

How Do We Present and Communicate the Measures and Metrics to Influence Desired Action?

To make sense and mean something, measures are presented and communicated in reports, dashboards, and scorecards. They tend to show current values of a few metrics taken out of context with little or no history. While well‐meaning, they are often misinterpreted and confusing. It is hard to decipher the signal from the noise. The renowned guru of information design, Edward Tufte, explains, “Clutter and confusion are failures of design, not attributes of information. There is no such thing as information overload.” He advises, “The purpose of analytical displays of evidence is to assist thinking. Consequently, in constructing displays of evidence, the first question is, what are the thinking tasks that these displays are supposed to serve?”

Traffic Lights

A common staple of communicating status in the project world is traffic light dashboards. Do you use a traffic light approach to communicate project status? Do you get frustrated at times that the color does not necessarily represent the true status of your project? Do you spend a lot of time defending the color, and find that even after detailed explanation the status is misunderstood by your project stakeholders? You are not alone.

Traffic light project status reports are popular with senior managers and executives, but they are also a source of misinterpretation, confusion, and frustration. Red, amber/yellow, and green—also known as RAG—reports are widely used because they are a simple, visual way to communicate project status. Just like traffic lights on the street are wired to a timing mechanism that causes the light to change color, the status reports should be wired to objective criteria. For example, if a project is within a 5 percent variance, it may be green; between 5 percent and 10 percent, yellow; and greater than 10 percent, red.
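The wiring of status color to objective criteria described above can be sketched as a simple mapping. The 5 and 10 percent thresholds are the illustrative ones from the text, and, as the tips later in this section suggest, they should be recalibrated per organization:

```python
def rag_status(variance_pct: float) -> str:
    """Map a schedule or cost variance (percent, signed) to a RAG
    color using the chapter's illustrative thresholds:
    within 5% -> green, between 5% and 10% -> yellow, beyond 10% -> red.
    """
    v = abs(variance_pct)   # overruns and underruns both count as variance
    if v <= 5:
        return "green"
    if v <= 10:
        return "yellow"
    return "red"

# A 3% overrun is green, a 7% underrun is yellow, a 12% overrun is red.
```

Note what the function cannot express: the context behind the color. That gap is exactly why, as the following paragraphs argue, a red light is not automatically bad and a green one is not automatically good.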

Unlike street lights, though, project status lights are often misinterpreted. Typically, nobody likes red status on a project, taken out of context; it is interpreted as bad, so everybody wants to see green. But contrary to conventional thinking, red may not necessarily be bad and green may not necessarily be good for your project. Red may mean that there are some changes or issues caused by customers or stakeholders that have pushed the project past the variance threshold. Despite this, it may be good for the overall achievement of the project objectives. It may mean that the project manager is doing a good job of engaging and listening to the stakeholders, or that the project team is not complacent.

On the other hand, green may be a symptom that the project team is narrowly focused and is underachieving the project goals. It can also signal that the project is being re‐baselined frequently. A prolonged green status can also make the project team too comfortable, and not prepared for unforeseen risks and issues.

Because of these misconceptions, traffic light status reports are prone to being gamed by project managers, who learn to manipulate the colors to fit their management's expectations and organizational culture. There can also be mistaken expectations: while traffic lights change in a linear, predictable pattern, projects can change state at random and jump from green to red without warning.

It may be a good idea, then, to try a different approach, even if you find it difficult to abandon the traffic light reports because senior managers are used to them. Instead of traffic light colors, use rich data and numbers with sparklines (a very small line chart, typically drawn without axes or coordinates, made popular by Edward Tufte) and patterns to communicate status.

If you are compelled to use traffic lights, here are some tips to make your status reports more effective:

  • Calculate the overall health of the project with a balance between objective measures like schedule variance and cost variance, and subjective measures like number of issues and stakeholder engagement.
  • Periodically review and recalibrate the variance thresholds that trigger project status change.
  • Provide a balanced perspective by using two lights—one based on the objective calculation of defined thresholds, and the other a subjective light based on what the project manager feels is the true status of the project.

Context, Contrast, and Causality

Practice and clarify the 3 C's—context, contrast, and causality—in your presentation medium to raise the communication value of your measurement. As you saw in the traffic light example, a color without context can be misinterpreted. To provide greater context, use comparison and contrast. For example: this project is over budget, but compared to project Y it is 5 percent less, or compared to last year we are 7 percent ahead. Misattributing causality and linkage is a common flaw in communicating and interpreting metrics. Clarify, communicate, and underscore whether a linkage exists or not; for example, that recent project delays are not linked to the new PMO tool, or that cost savings are due to the new governance process.

Information Radiators

Agile uses the term information radiator (coined by Alistair Cockburn) as a generic term to promote transparency of information and reporting in an agile environment. It can include any number of handwritten, drawn, printed, or electronic displays in a highly visible location so that all team members and stakeholders can see or access the latest information at a glance: burndown charts, count of automated tests, velocity, incident reports, continuous integration status, and so on. Information radiators promote a sense of responsibility and ownership among the team members. Based on the premise that the team has nothing to hide from itself, or customers and stakeholders, information radiators tend to provoke conversation, to openly confront and solve problems.

Action—What Can We Do with the Measures? Are the Measures Actionable?

While there are myriad measures and metrics you can collect, track, and analyze, the only ones that are consequential are the ones that inspire action. Often, measures have the opposite effect, as we are numbed by the numbers and don't know what to do. Although the various dials and meters look interesting in a report, they are not actionable. Measures are useful only if they are used for actionable treatment, rather than just for autopsy.

Here's an experiment you can try to find out if your metrics are actionable or not. If you or your PMO is responsible for preparing and publishing a bunch of reports, pick a couple and stop sending them. If nobody complains, you know they were inconsequential and not needed.

To be actionable, a measure must be a leading metric that has a direct impact on performance and can be influenced. For example, for one software development team, the number of stories accepted became a key measure that had an overall impact. The team changed its behavior to have more interactions with the customer, uncover hidden needs, and focus on acceptance, not just completion of story points.

Learning—Are We Getting Effective Feedback? Are We Learning and Adjusting?

The measure that keeps Kumar (from one of the opening scenarios in this chapter) up at night is the number of bugs fixed. It seems like a never‐ending cycle: as he fixes a few, he finds more added to the list. How can Kumar get out of this downward spiral? The number of bugs fixed is trolling him in a repeated loop, without questioning what the goal is and why the bugs are occurring in the first place. He does the same thing over and over with bug fixes, with little hope of getting out of the loop. The measure is not implemented intelligently; it lacks a complementary measure of reduction in the number of bugs, rather than bug fixes alone. To sharpen measure intelligence, Kumar needs to analyze the measure and the underlying forces. He can apply double‐loop learning, which entails the modification of goals or decision‐making rules in the light of experience. The first loop uses the goals or decision‐making rules, and the second loop enables their modification—hence “double‐loop.” According to Chris Argyris, Professor Emeritus, Harvard Business School, double‐loop learning recognizes that the way a problem is defined and solved can be a source of the problem.

The purpose of measurement is not just to measure but to learn and improve. Improve both results and measures themselves to achieve the desired impact. That means understanding when measures are no longer relevant, replacing them, or dropping them as progress is made toward the goal, or the environment has changed. Chapter 10 delves deeper into different aspects of learning and feedback.

DEFINING AND MEASURING PROJECT SUCCESS

The traditional measure that project managers are expected to manage is the triple constraint. They are often compelled to live in this triangle of time, cost, scope, and quality. The initial idea of the triple constraint was a framework for project managers to evaluate and balance these competing demands. It became a way to track and monitor projects. Over time, it has also become a de facto method to define and measure project success. While the triple constraint is necessary, it is not enough. Projects that are delivered on time, within budget, and meet scope specifications may not necessarily be perceived as successful by key stakeholders.

Besides time, cost, scope, and quality, what are other criteria for project success in your organization? We have surveyed this question of various project stakeholders over the past 10 years and repeatedly heard that ultimately the factors contributing to project success include:

  • Stakeholder and customer satisfaction
  • Meeting business case objectives
  • Time to value
  • Customer/end‐user adoption
  • Customer experience
  • Quality
  • Meeting governance criteria
  • Benefits realization
  • Overall value and impact

As you already recognized, time, cost, scope, and quality are related to project outputs (Ben), whereas these other factors are related to business outcomes (BoB). While the triple constraint is important, it can also narrow the focus away from other crucial factors that lead to project success. Based on today's project environments, project managers need to broaden their perspective to include other criteria to satisfy stakeholders and deliver business results.

How do we rethink the triple constraint? We have posed this question and conducted a number of exercises on this topic. Following are three overlapping perspectives that should be reviewed and discussed as you come up with ways to define and measure success:

  1. Mirror project outputs with business outcomes (Figure 8.5). While focusing on each of the triple constraints, the project manager has to reflect and make project decisions based on the achievement of the corresponding business outcome. Cost and time focus must optimize business benefits like return on investment (ROI), net present value (NPV), and so on, and benefits of faster delivery or time‐to‐market. The scope should mirror end‐user adoption, and overall quality must be balanced with stakeholder/customer satisfaction.
  2. Parallel balance (Figure 8.6). This is a parallelogram view of balancing between scope and schedule in parallel with budget and benefits, or budget and scope in parallel with schedule and benefits. Benefits may include a combination of business objectives, end‐user adoption, customer satisfaction, and other criteria.
  3. From a triangle of constraints to a diamond of opportunity (Figure 8.7). The diamond combines the delivery focus of project outputs on one side, with strategic outlook of business outcomes on the other. The idea of the multiple sides of the diamond also helps the project manager to include multiple perspectives of focus that might be relevant to their business. You can select a combination of factors to measure that may be relevant to your business and industry. For example, in many organizations, a new criterion that is becoming a critical balancing factor similar to quality is health, safety, security, and environment (HSSE).
Schematic illustration showing the mirroring project outputs and outcomes.

Figure 8.5: Mirroring Project Outputs and Outcomes

Schematic illustration of a parallelogram view of parallel balance.

Figure 8.6: Parallel Balance

Schematic illustration of a diamond that combines the delivery focus of project outputs, with strategic outlook of business outcomes.

Figure 8.7: Diamond

If you are expected to focus on the triple constraint alone, expanding your focus to include additional criteria will give you an edge. You will be able to deliver projects that provide business benefits, and your projects will be perceived to be successful by your stakeholders.

What about agile? It is a little better, but agile metrics like burndown charts and velocity are also primarily output oriented. Agile turns the triple constraint upside down. The scope is not fixed at the start. You work with fixed time (iterations) and cost (resources) and adjust the scope. The goal is to prioritize and work on the customer's most important requirements within the budgeted cost and time. The good thing is that the customer is involved, and prioritization is based on business value. But you don't know if the customer will be satisfied or use the product, and you don't know if you will achieve the business outcome, unless you broaden the perspective and include leading outcome measures like adoption and experience that lead to business outcomes.

If you are thinking, “Wait, that is not the job of a project manager; projects deliver outputs and programs deliver benefits,” think again. This is a misconception. As we discussed in the beginning of this book, outputs and outcomes are both products of the same DNA of projects, programs, and portfolio. If anything, they should be identified and linked together; the breakdown and separation are what cause the mirror to crack further, and it is hard to put it back together. Many organizations even have a hard time distinguishing between projects and programs. If the organization does not define programs, does that mean its projects cannot deliver any benefits?

Whether it is projects, programs, strategic initiatives, portfolio, or PMO, there should be an integrative approach to measurement, and you must start by defining success with your customers and stakeholders.

OBJECTIVES AND KEY RESULTS (OKRs)

The objectives and key results strands of measure are derived from the OKR technique, which is based on Peter Drucker's management by objectives (MBO) framework, repurposed and tweaked for twenty‐first‐century organizational dynamics. First implemented at Intel by Andy Grove in the 1970s and later adopted by Google, LinkedIn, Oracle, Twitter, and many other Silicon Valley companies, OKRs track individual, team, and company goals and outcomes in an open and transparent way.

Objectives define the purpose and where you want to go in a set time frame. They are qualitative and describe the desired outcome. They should inspire action, so they should be inspirational, with a stretch goal, and actionable, so that a person or team can execute them. Key results add metrics to objectives; they are quantitative. They let you know if you are getting there, or how far you are from your goal.

OKRs are about focus, frequent feedback and response, stretch goals, transparency, and simplicity. As a result, insights and improvements are easier to see and implement. OKRs promote a sense of ownership, as they are determined bottom‐up, as opposed to key performance indicators (KPIs), which are top‐down. OKRs should be transparent so that everyone sees the bigger picture and can hold each other accountable.

Implementing OKRs:

  1. List three to four objectives you want to strive for.
  2. For each objective, list three to five key results to be achieved.
  3. Communicate objectives and key results to everyone. They should be easily accessible and transparent.
  4. Regularly update each result on a 0 to 100 percent scale.
  5. When an objective's key results reach 70 to 80 percent, consider it done. OKRs are stretch goals and should be designed to challenge.
  6. Review OKRs regularly and set new ones. They should be set quarterly and reviewed monthly.

Example:

  • Objective: Improve PMO value and impact by the end of Q1
  • Key Result: Increased interaction with key customers and stakeholders—conduct survey with 85 percent or better response and 15 one‐on‐one interviews with key stakeholders
  • Key Result: Three enhancements and two new PMO processes implemented
  • Key Result: 90 percent adoption of PMO standards
  • Key Result: 7 percent project cost reduction due to greater standardization and consistency
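
The scoring mechanics in the steps above can be sketched in code. The following is a minimal, hypothetical bookkeeping example in Python; the class names are invented, and the 70 percent "done" threshold follows the stretch-goal rule listed earlier:

```python
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    description: str
    progress: float = 0.0  # updated regularly on a 0-100 percent scale

@dataclass
class Objective:
    description: str
    key_results: list = field(default_factory=list)

    def progress(self) -> float:
        """Average progress across key results, in percent."""
        if not self.key_results:
            return 0.0
        return sum(kr.progress for kr in self.key_results) / len(self.key_results)

    def is_done(self, threshold: float = 70.0) -> bool:
        """Stretch goals count as done at the 70-80 percent mark."""
        return self.progress() >= threshold

# The PMO example from the text, with illustrative progress values.
okr = Objective(
    "Improve PMO value and impact by the end of Q1",
    [
        KeyResult("Survey with 85% response and 15 stakeholder interviews", 80),
        KeyResult("Three enhancements and two new PMO processes implemented", 60),
        KeyResult("90% adoption of PMO standards", 75),
        KeyResult("7% project cost reduction from standardization", 70),
    ],
)
print(okr.progress(), okr.is_done())  # 71.25 True
```

Keeping the data this explicit makes the transparency requirement easy to meet: anyone can see every key result, its current progress, and whether the objective has crossed the stretch threshold.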

DEVELOPING AN INTEGRATIVE APPROACH: STRATEGY EXECUTION MEASUREMENT FRAMEWORK

An effective measurement framework should focus on multiple dimensions and optimize the whole. For example, how do you know if you are covering all the critical areas when developing OKRs? You may unintentionally isolate some or overdose on others. Measurement should cover and balance all aspects of the DNA—Strategy, Execution, Governance, Connect, Measure, Change, and Learn. Also, how do you know if you are heavy on output (Ben) measures versus outcome (BoB) measures, and how do you fine‐tune which lead measures influence Ben and which influence BoB?

The Strategy Execution Measurement Framework provides an integrative approach to develop metrics and enhances the practice and application of OKRs with a holistic perspective. It promotes a sense of ownership and linkage of results to outputs, outcomes, and impact. It is designed to ensure that you measure both outputs and outcomes, and the leading measures to achieve them.

Start by collaborating with customers and stakeholders to define success and come up with OKRs. Whether it is a strategic initiative, program, project, or the PMO, define meaningful metrics by listing the organizational business objectives in the background, and follow these steps. Define:

  • 1.0 Business/Organizational objective
  • 1.1 Objective that needs to be achieved (what is being developed or worked on)
  • 2.1 Key result—outcome measure or metric for what you are trying to achieve (BoB) [Outcome Measure]
  • 2.2 Key result—lead measure or metric that will lead to desired outcome (BoB) [Lead Outcome Measure]
  • 2.3 Key result—output measure or metric for the deliverable or outputs you need to achieve the outcome (Ben) [Output Measure]
  • 2.4 Key result—lead measure or metric that will lead to desired output (Ben) [Lead Output Measure]
  • 3.0 Customer/Stakeholder impacted (who cares about this)

This approach is designed to make sure you break down key results into outcomes (BoB) and outputs (Ben), and what leads to both, without missing any one or overdosing on another. The order in which you get to them might seem confusing, but once you start to think it through, you will find an approach that works best for you.

Here's an example.

Let's say the PMO has identified a pain point: there is no standardized project management approach. There is an existing PM process, but it is not effective. Along with the stakeholders, the PMO comes up with an objective to implement an effective PM framework that has an impact within three months.

Typically, PMOs would mostly measure how many people are trained and certified in the new framework (Ben). They might be trained and certified, but that does not necessarily mean that they will use and apply the process. They might even apply the process but might not be aware of the purpose or outcome the PMO is trying to target (e.g., key result—outcome of 10 percent cost reduction; BoB).

The above approach helps to start with the question, why implement the PM framework? To achieve the key result of overall cost reduction by 10 percent due to increased standardization, consistency, and less wastage (outcome measure). To realize this, you have to measure the adoption and application (80 percent or above) of the new PM framework (lead outcome measure). This will not be possible unless you measure how many project managers are certified (85 percent) in the new framework (output measure). What will lead to certification is measuring the number of project managers (95 percent) trained in the framework (lead output measure).

Here are the steps:

  • 1.0 Business/Organizational objective—10 percent cost reduction
  • 1.1 Objective (PMO)—effective PM framework in three months
  • 2.1 Key result—outcome measure (BoB)—10 percent project cost reduction due to increased standardization, consistency, and less wastage
  • 2.2 Key result—lead outcome measure—80 percent adoption and application of PM framework templates
  • 2.3 Key result—output measure (Ben)—85 percent compliance with PM framework
  • 2.4 Key result—lead output measure—95 percent of project managers and team members trained and certified in PM framework
  • 3.0 Customer/Stakeholder impacted—project managers, team members, sponsors, project customers, PMO

Different stakeholders may care about different measures, and will focus on the measures that they can target and improve.
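
To make the numbered slots concrete, the PM framework example above can be written down as a simple record. This is only an illustrative sketch in Python; the dictionary keys mirror the framework's numbered steps, and the variable and key names are hypothetical:

```python
# Illustrative only: one row of the Strategy Execution Measurement
# Framework, using the PM framework example from the text. Keys mirror
# the numbered steps (1.0, 1.1, 2.1-2.4, 3.0).
pm_framework_okr = {
    "1.0 business_objective": "10 percent cost reduction",
    "1.1 objective": "Effective PM framework in three months",
    "2.1 outcome_measure_BoB": "10% project cost reduction via standardization",
    "2.2 lead_outcome_measure": "80% adoption of PM framework templates",
    "2.3 output_measure_Ben": "85% compliance with PM framework",
    "2.4 lead_output_measure": "95% of PMs trained and certified",
    "3.0 stakeholders": ["project managers", "sponsors", "customers", "PMO"],
}

# Completeness check: all four key-result slots (2.1-2.4) are filled, so
# the row balances outputs (Ben) and outcomes (BoB) with lead measures.
key_result_slots = [k for k in pm_framework_okr if k.startswith("2.")]
assert len(key_result_slots) == 4
assert all(pm_framework_okr[k] for k in key_result_slots)
print("row is balanced")
```

The value of this kind of structure is the completeness check: an empty 2.2 or 2.4 slot immediately shows that you have a lagging measure with nothing that leads to it.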

To better understand the steps, Figure 8.8 provides a simple example. Imagine you just had a physical and were diagnosed with hypertension, or high blood pressure (BP), and you have to get in shape. Instead of arbitrarily thinking about exercising or diet, how could you use better measures and OKRs to get in shape? You should discuss with your doctor the objective of achieving better health and a normalized blood pressure in three months. The key result you want to target is normal BP (BoB—outcome measure). What is going to lead to it? You have to make sure you measure and track your BP regularly (lead outcome measure). That is fine, but what will contribute to a lower BP? Losing 20 pounds will help you get in shape and also impact your BP (Ben—output measure). And what will lead to losing weight? Tracking calorie intake and/or walking and tracking the number of steps per day (lead output measure).

1. Objective: Better health (normal blood pressure) by focusing on weight loss in 3 months
2.1 Key Result, Outcome Measure (BoB): Normalized BP
2.2 Key Result, Lead Outcome Measure (Leads to BoB): Daily monitoring & tracking of BP
2.3 Key Result, Output Measure (Ben): Lose 20 lbs.
2.4 Key Result, Lead Output Measure (Leads to Ben): Walk 10,000 steps per day

Figure 8.8: Better Health Measurement Example

As you measure and track, you will know if your walking is impacting your BP. It will also help you to regulate how many steps you need to walk to achieve the desired weight. In some cases, weight loss might not be effective; managing diet and sodium intake might be more effective. Some measures can act at cross‐purposes and drive undesirable outcomes. Once you identify the Ben and BoB measures, the challenge is not to stop there but also to determine the right lead measures that will lead to the desired outputs and outcomes.

Figure 8.9 is a template that can be adapted for projects, programs, portfolio, or PMO. Additional columns like Progress, Actual Results, or Assumptions can be added to the template to create a scorecard.

DNA Element Objective Key Results Customer / Stakeholder
1.0 Business / Organizational Objective 1.1 PMO or Program or Project Objective 2.1 Key Result Outcome Measure (BoB) 2.2 Key Result Lead Outcome Measure (Leads to BoB) 2.3 Key Result Output/Deliverable Measure (Ben) 2.4 Key Result Lead Output Measure (Leads to Ben) 3.0 Who Is Impacted (Who cares?)
Strategy (How will we know if we are in alignment & achievement of org. strategic objective?)
Execution (How will we know if we are executing effectively?)
Governance (How will we know if we are meeting gov./compliance criteria?)
Connect (How will we know if we are connecting & engaging key customers & stakeholders?)
Measure (How will we know if we have effective measurements & reporting?)
Change (How will we know if we have effective change management in place?)
Learn (How will we know if we are effectively learning?)

Figure 8.9: Strategy Execution Measurement Framework

Source: © J. Duggal. Projectize Group.

Figure 8.10 provides illustrative examples for PMO measures and how to use this template for PMOs. To start with, list the business/organizational objective (1.0), collaborate with stakeholders to define success with PMO objectives (1.1) in the DNA areas. It is important to start with the areas with the most pain. Next, discuss and list the key results you want to achieve in a given time frame. Next, classify the key results and make sure you are covering outputs or Ben (2.3), and outcomes or BoB (2.1). Then also make sure you discuss and list what will lead to BoB (2.2), and what will lead to Ben (2.4). In the beginning, it can seem confusing, but as you use it, you will find your own way that works for you. The idea is that the template simply provides a way to focus on outputs and outcomes and what leads to them. Also, it is a visual tool that highlights the areas that you are working on, and whether you have balanced output and outcome measures, and what leads to them.

DNA Element Objective Key Results Customer / Stakeholder
1.0 Business / Organizational Objective 1.1 PMO Objective 2.1 Key Result Outcome Measure (BoB) 2.2 Key Result Lead Outcome Measure (Leads to BoB) 2.3 Key Result Output/Deliverable Measure (Ben) 2.4 Key Result Lead Output Measure (Leads to Ben) 3.0 Who Is Impacted (Who cares?)
Strategy (How will we know if we are in alignment & achievement of strategic objective?) Cost-reduction by 10% Alignment of Projects to strategic objectives – Review and improve portfolio selection & prioritization process Cost savings of X% due to discontinuation of nonaligned projects Monthly portfolio review & prioritization – targeted decision‐making for X% reduction X% projects meet strategic alignment criteria X% of projects in portfolio pipeline process Executives PMO Finance Project/Program Managers Other functional areas impacted
Execution (How will we know if we are executing effectively?) Improve Project Success rate – Implement PM Framework by end of Q1 X% Impact on revenue 4% Cost savings (due to penalty cost avoidance) X% Less rework Zero penalty cost X % of projects meeting customer requirements X% of projects applying std. PM framework Customers Project / Program Managers Team Members Executives Finance PMO
Governance (How will we know if we are meeting gov. / compliance criteria?) Ensure benefit realization of projects – Implement stage‐gate process X% impact on revenue due to focused benefits realization X% Cost savings X% projects assessed for benefits realization stage‐gate criteria & decision making to discontinue projects that do not meet benefits realization criteria X% project compliance with stage‐gate process X% project & program managers trained in benefits realization & updated stage‐gate process Customers Project / Program Managers Team Members Executives Finance PMO Governance committee
Connect (How will we know if we are connecting & engaging key customers & stakeholders?) Improve PMO Stakeholder Engagement & Satisfaction X% cost savings due to greater adoption of PMO cost‐focused processes X% increase in PMO Net Promoter Score (NPS) leading to wider stakeholder engagement X% Stakeholder satisfaction ratings of higher than 4 on a scale of 1−5 Interviews with 90% of key stakeholders Stakeholder survey with 85% response rate PMO Stakeholders
Measure (How will we know if we have effective measurements & reporting?) Measure PMO Value – Conduct PMO Delight Index (PDI) Survey [PDI explained in Chapter 12] X% cost savings due to higher PDI & adoption of PMO PMO Delight Index (PDI) above 3.5 PDI survey with X% response rate X no. of PDI awareness sessions X no. of targeted social media X no. of incentives for survey response Executives Functional Areas PMO Stakeholders
Change (How will we know if we have effective change management in place?) Increase adoption rate of project deliverables / systems – Implement change management processes & training X% cost savings X% usage & adoption rate of deliverables 15% increase in customer satisfaction & NPS X% projects/ programs apply change readiness assessment Train X% of project/program managers and project reams in change mgt. Project/Program Managers Team Members End users Customers PMO
Learn (How will we know if we are effectively learning?) Continuous improvement by harnessing lessons learned – Implement collaboration tool X% Cost savings due to greater reuse / cross‐sharing of cost saving ideas X no. of application of lessons learned stories Tool NPS above 4.5 X% adoption of collaboration tool X no. of Collaboration tool awareness & training sessions attended by X% of project teams Project/Program Managers Team Members PMO

Figure 8.10: Strategy Execution PMO Scorecard

Source: © J. Duggal. Projectize Group.

Figure 8.11 provides an example of how to use this framework for a project. As we discussed earlier, typically project measures primarily focus on one element of the DNA—Execution—and only on output measures, like on‐time, within budget execution. This framework provides a visual template for a balanced approach covering outputs and outcomes, with lagging and leading measures, with a broader perspective including the other elements of the DNA, which are often not measured and thus not managed as well. To apply, collaborate with your project sponsor and key stakeholders to define project success objectives for the DNA areas. Start with execution and strategy, and then you can add other areas as appropriate. It is not necessary to define all of the areas and each of the key results columns in the beginning. The sample measures in the figure are only illustrative. Your specific measures will depend on your context and what your stakeholders care about. As you progress, the template provides a visual means of highlighting your focus areas and measurement.

DNA Element Objective Key Results
1.0 Business/Organizational Objective 1.1 Project Objective 2.1 Key Result Outcome Measure (BoB) 2.2 Key Result Lead Outcome Measure (Leads to BoB) 2.3 Key Result Output/Deliverable Measure (Ben) 2.4 Key Result Lead Output Measure (Leads to Ben)
Strategy (How will we know if we are in alignment & achievement of strategic objective?) XX Revenue target Ensure alignment – Project deliverable aligns to achievement of business objective (X% impact on revenue) Impact on revenue (XX Revenue target from project) Track & measure alignment (X% Deliverables targeted for revenue goals) Deliverables aligned with business case goals (X% Deliverables aligned with business case) Review business case alignment goals (X% Review of business rev. completed)
Execution (How will we know if we are executing effectively?) Successful project execution (within X% variance & X% impact on revenue) Impact on revenue (XX Revenue target from project) Customer & Product Owner Engagement (Meet X% customer requirements) On‐time within budget (within X% variance) % complete Schedule Performance Index (SPI) Cost Performance Index (CPI) Velocity Burndown (within X% variance)
Governance (How will we know if we are meeting gov. / compliance criteria?) Meeting required governance / compliance criteria Cost avoidance No additional penalty costs Meet X% risk assessment criteria Pass project review/audit (4 or greater rating on review) Compliance with governance reqs. (X% Completion of compliance checklist)
Connect (How will we know if we are connecting & engaging key customers & stakeholders?) Improved stakeholder interactions (NPS score; No. of new customer acquisitions) X no. of new customer acquisition X% Customers at NPS score over 4.5 Meet stakeholder needs (Implement X no. of stakeholder requested improvements) Meet weekly stakeholder meeting targets Meet stakeholder survey targets
Measure (How will we know if we have effective measurements & reporting?) Effective Project Reporting (timely availability of reports for effective decision making & course correction) X Impact on revenue or cost savings due to timely course correction No. of course correction actions taken based on timely data & reports X% timely project reporting X% timely data input
Change (How will we know if we have effective change management in place?) Greater adoption of project deliverable Impact on revenue (XX Revenue target from project) X% Customer sat NPS score over 4.5 Increased adoption target by X% Change readiness assessment & planning (Meet X% change readiness target)
Learn (How will we know if we are effectively learning?) Effective learning and application of lessons learned Cost saving due to application of lessons Apply & improve process (X no. of lessons applied stories) No. of lessons documented (Target no. of lessons input in collaboration tool) Retrospectives (Meet X% retrospective target)

Figure 8.11: Strategy Execution Project Scorecard

Source: © J. Duggal. Projectize Group.

In some instances, it may be hard to quantify outcomes and link them to impact on revenue or cost savings, but it is still effective to have a target. In these instances, focus on the outcome lead measure that will ultimately impact revenue or cost. No doubt, this is a tough exercise, but it is worthwhile because it forces the linkage and measurement of all elements of the DNA in a holistic way. To start with, you do not have to complete OKRs for all elements of the DNA. Focus on a couple of the pain‐point areas and add others as you evolve. Also, measures for some elements may not be relevant in every instance.

MEASURING AND SHOWING PMO VALUE

Showing PMO value continues to be a top PMO challenge. According to an ongoing PMO study by the Projectize Group, 39 percent of respondents questioned the existence and need for a PMO. The belief that PMOs improve overall project management effectiveness and contribute to project success may be true, but it is challenging to show it. PMO value is elusive at best, as it is often hidden in improvements and efficiencies that are intangible and hard to measure. In the same study, only 15 percent of PMOs even measure themselves. Typically, reactive approaches are patched together to demonstrate the elusive benefits when the PMO is under the gun. There is a lack of a consistent approach to better substantiate the value of a PMO.

In my initial experience leading a PMO, I remember the frustrating part about measuring the PMO was that we knew we were providing a lot of good services—we could feel the value, but it was hard to show it. Like a typical PMO, we were improving project management processes, increasing efficiencies in project delivery, and providing project support and guidance, but it was always a struggle to identify and measure these benefits. A lot of the PMO's value is buried. The question is, how do we find the hidden value? Often, it is indirect and hidden in improvements and efficiencies that are hard to capture. There is a gap between perceived value and measurable value: in our survey, 80 percent of respondents confirmed that their PMO had some value, but only 35 percent were able to measure it.

When we ask PMO teams, “What is the value you are providing?” they proudly list:

  • Consistent methodology and standardization
  • Repeatable processes
  • Talent management: trained and certified PMs
  • Standardized tools and templates
  • Projects on time and within budget
  • Improved execution and delivery
  • Lessons learned repositories
  • Organizational change management
  • Facilitating strategic initiatives

You can see how these are Ben‐oriented. The challenge is to translate these into BoB‐oriented results that executives, customers, and other stakeholders care about. How do you show the linkage of PMO activities and outputs to value provided? How do you show how the benefit of these activities will lead to increased revenue, cost savings, or cost avoidance and overall desired results and impact?

This chapter has provided a comprehensive approach to measurement that can be used to identify and show PMO value. Following are some additional considerations:

  • Clarify the purpose of the PMO. The purpose of the PMO should be based on the overall context, business objectives, and strategy of the organization. If everybody is not on the same page regarding the purpose of the PMO, it may lead to measuring the wrong things and focusing on initiatives that do not align to the purpose of the PMO. Collaborate with your stakeholders to define success and value.
  • Identify value and select appropriate metrics to measure it. What is important to stakeholders, from their perspective, needs to be identified and defined. The PMO needs to determine how it can address stakeholder needs and provide value to them. The metrics by which that value could be measured need to be discussed and selected collaboratively with the stakeholders. Different sets of metrics may be required for different stakeholders. The current state needs to be baselined to track future changes and improvements.
  • Link the measures. It is crucial to show how the measures and metrics link to business objectives and related strategies. Additionally, any linkages to other stakeholder groups need to be investigated. Establishing links to the purpose of the PMO, business objectives, organizational context, business fit, and stakeholder needs is the key to individualizing and personalizing value from their perspective. The above measurement template links these elements in a simple way.

Illuminating PMO Value

PMO value is not obvious and is not going to surface automatically; it has to be illuminated and highlighted. As in other endeavors, to prove that we have accomplished a feat or have been somewhere, we take pictures. Similarly, the PMO has to develop and show snapshots of its accomplishments and the value it is providing from different perspectives. To articulate value, it must be packaged, marketed, sold, and communicated with a PMO marcom (marketing and communications) plan. Managing value should be a proactive process that can form the basis for managing stakeholders and their expectations. The PMO should not shy away from the value question; rather, it should actively create a dialogue with stakeholders by finding out what is important to them, explaining why some metrics are being collected and what the PMO is going to do with them, and seeking ongoing feedback. Metrics should be used as a medium, not just a technique, for articulating and communicating value.

Chapter 12 provides a template for the PMO delight index (PDI) to measure PMO value.

Sustaining PMO Value: Managing Value Dissipation and Benefits Erosion

One of our client organizations had been implementing a PMO for over two years and had successfully delivered several PMO initiatives, including portfolio management and a standardized project management methodology framework, with a high degree of executive support and sponsorship. However, when a couple of projects went off track, the PMO was under the gun, and the overall value of the PMO was questioned. PMO value can quickly dissipate if the project success rate does not improve and projects start to slip. Management tends to believe that because the organization has a PMO, project execution should be successful and there should no longer be any issues with projects. Management attributes any project failure to the PMO, regardless of whether the PMO is directly responsible. Therefore, it is crucial for the PMO to set the right expectations regarding its role in successful project delivery; otherwise, failures might erode the benefits and undermine perceived value in other areas as well.

Value depends on the credibility of the measurements. It can dissipate if there is suspicion around how that value is being calculated and reported. To establish tangible value, the benefits must be based on a sound premise and accepted assumptions. The cause and effect must be established based on a credible formula validated with proof and evidence. Refer to the key measure profile template, which helps define these elements and brings rigor and credibility to the process. If a governance policy to deal with intangible measures is in place, it can help in removing doubts about using intangibles. This governance policy can spell out whether intangibles should be considered, which kinds of intangibles, and how they can be used.

The perception of complexity or bureaucracy in PMO processes, templates, and tools can lead to suboptimal usage, which erodes the benefits and diminishes value. Another reason why value tends to dissipate over time is the law of diminishing returns. As the benefits become transparent and integrated, they are taken for granted, and people change their perception and forget how bad things were without the PMO.

NEXT‐GENERATION MEASURES

The move from efficiency and effectiveness toward experience is driving a focus on outcomes. We live in an outcome economy, and next‐generation measures must be outcome‐oriented and able to influence performance in real time. In his book The Outcome Economy, Joe Barkai explains that in an outcome‐based economy, companies create value by delivering meaningful and quantifiable business outcomes, not just promises of outcomes. For example, Rolls‐Royce no longer sells aircraft engines but rather assumes responsibility for “time‐on‐wing.” Airlines do not buy engines; they pay for engine availability or, you might even say, for lift power.

In the age of the Internet of Things (IoT), sensors are ubiquitous and provide enhanced capability to measure things that were not easy to measure before. What if you could use next‐generation measures to improve the performance of something you have been doing consistently at the same level, with no changes, for over 20 years? This is exactly how I improved my running distance and pace with a sensor (Lumo Run) that could measure cadence (steps per minute), braking (change in forward velocity), bounce (vertical oscillation), pelvic rotation, and pelvic drop. Notice these are all lead measures that influenced my performance, as the device provides real‐time feedback in my ears while I am running. For over 20 years, I had tracked only the lag measure of how many times a week I ran, with no impact on performance. From running sensors to smart yoga pants, to sensors in cars that promote safe driving, the next generation of measures is impacting performance in real time.

Project Management Artificial Intelligence (PI)

Project bots and digital assistants perform estimating, budgeting, and sprint management activities. As bots and sensors expand their understanding, new metrics that were previously impossible to capture will be used in the areas of quality, effort, performance, experience, learning, and change. New layers of metadata will help drive metrics and insights that were not possible before. Remember the example of decreasing the number of bugs above: AI‐based systems are linked to the code base, so they know who made each change and can connect every reported bug to the lines of code and the tasks related to it. Now you have real‐time, actionable measures to get to the root cause of the problem. Over time, PI will move from descriptive (what happened) and diagnostic (why it happened) measures to predictive (what will happen), prescriptive (what action is required), and intuitive (learning and resolving issues before they happen) measures, impacting performance in real time.
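The bug-to-change linkage described above can be sketched in a few lines. The in-memory records below stand in for a real issue tracker and version-control history; the commit data, field names, and line-range matching rule are all illustrative assumptions, not a description of any particular tool.

```python
# Hypothetical records; a real system would pull these from the issue
# tracker and from version-control history (e.g., blame/annotate data).
commits = [
    {"sha": "a1b2", "author": "kumar", "file": "billing.py",
     "lines": range(10, 30), "task": "PROJ-101"},
    {"sha": "c3d4", "author": "karen", "file": "billing.py",
     "lines": range(30, 55), "task": "PROJ-107"},
]
bugs = [
    {"id": "BUG-9", "file": "billing.py", "line": 42},
]

def link_bug_to_commit(bug, commits):
    """Return the most recent commit that touched the bug's file and line,
    or None if no commit matches. Commits are assumed oldest-first."""
    for commit in reversed(commits):
        if commit["file"] == bug["file"] and bug["line"] in commit["lines"]:
            return commit
    return None

for bug in bugs:
    c = link_bug_to_commit(bug, commits)
    if c:
        # prints: BUG-9 -> commit c3d4 by karen (task PROJ-107)
        print(f"{bug['id']} -> commit {c['sha']} by {c['author']} (task {c['task']})")
```

Even this toy join turns a lag measure (a count of open bugs) into an actionable one: each defect now points at a specific change and the task that introduced it, which is the kind of root-cause signal Kumar's bug list lacked.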

DEVELOPING MEASURE INTELLIGENCE

Use the following checklist of questions to reflect on and develop measure intelligence. Pick and prioritize what to focus on over a period of time.

Objectives

  • How can we identify the right essential question that helps focus on the right thing in the sea of data?
  • What is the crucial question that needs to be asked?
  • How can we define success? What are we trying to achieve?
  • Do we know the business impact we are aiming for?
  • How can we use OKRs to clarify the purpose of measurement? Are our measures stakeholder‐ and customer‐centric or self‐centric?
  • What is the crucial question that customers care about?
  • How can we define and measure customer value?
  • How can we use metrics to increase customer satisfaction and experience?
  • Do our stakeholders care about the things we measure?
  • Are we using holistic and interconnected customer metrics such as net promoter score (NPS) and measures of customer satisfaction, customer success, and customer loyalty?
  • Are we using metrics effectively to improve our NPS and create customers?
  • How can we better measure customer and stakeholder experience and perception?

Key Results

  • Do current measures describe relevant current and past status and performance data?
  • Do current measures help predict and forecast future outcomes and results?
  • How can we utilize leading metrics that drive future performance and results?
  • How can we link and balance metrics from all elements of the DNA of strategy execution?
  • Do current measures from other areas of the DNA cascade and align to strategy?
  • How can we balance both output and outcome measures?
  • Do current measures provide insight and new ways of looking at and dealing with existing challenges?
  • Do current metrics reveal risks, barriers, and opportunities?
  • How can we measure hidden value?
  • How can we identify the key measure and related algorithm that can have a core impact on performance and results?
  • How can we focus on behavior measures that impact performance?
  • What counter‐measures do we need to balance the behavioral impact of measures?
  • How can we utilize metrics to assess overall impact?

Reporting

  • How can we do a better job of communicating and illuminating value and results?
  • How can we provide and emphasize context along with metrics to make sure metrics are interpreted correctly?
  • How can we illustrate comparison and contrast to provide greater context?
  • How can we evaluate and communicate causality and linkage of measures?
  • How can we combine and communicate measures across silos to reduce provincialism and a fragmented approach to measurement?
  • How can we improve the timing and timeliness of our reports and dashboards?
  • How can we do better with separating the noise from the signal?
  • Are our metrics credible to our customers and stakeholders—in how we capture, calculate, track, and report metrics?
  • Are we using the right sensors, tools, and apps to collect, analyze, and communicate metrics?
  • How can we improve the effectiveness of our reports, dashboards, and scorecards?
  • How can we simplify and communicate more with less?

Action (Actionable)

  • Are we using metrics just for autopsy, or also for treatment?
  • What can we do with the measures we track?
  • Do current measures inspire appropriate action?
  • Are our measures actionable?
  • What actions can we take to counter the negative consequences of measures?
  • Do current metrics help us make better decisions and prioritization?
  • What would happen if we stopped measuring some things?
  • What reports can we discontinue?

Learning (Feedback)

  • Do current measures provide appropriate feedback loops?
  • Do we get feedback on how to improve and adjust our measures?
  • Are we tracking, measuring, and providing real‐time feedback that drives the desired behaviors?
  • How can we minimize the loopholes to game and manipulate metrics?
  • How can we improve our data collection?
  • Are we set up for rapid and frequent feedback?
  • Are we learning and adjusting our measures?
  • How can we make our measures more balanced and holistic?
  • What are we not measuring that we should be measuring?
  • Are we building a metrics library over a period of time?

KEY TAKEAWAYS

  • Identify and be clear about the purpose of measurement. Do you know the goal or objective you are trying to achieve? Do you know how your customers and stakeholders define success? Identify the crucial question that will help you focus on the right things.
  • Collaborate with customers/stakeholders to define success and value and select appropriate metrics. It is important to engage them to understand their challenges and what value means to them.
  • Develop and sharpen measurement intelligence in the five DNA strands of measure: Objectives, Key Results, Reporting, Action, and Learning.
  • The Strategy Execution Measurement Framework provides an integrative approach to measurement. It helps to focus on the multiple dimensions and optimize the whole, rather than isolate some or overdose on others. It promotes a sense of ownership and linkage to outputs, outcomes, results, and impact.
  • Make Ben and BoB your new besties. You can channel Ben (Benefit) and BoB (Benefit of the Benefit) by focusing on the distinction between outputs and outcomes. Ben is limited to the benefit of the output, whereas BoB shows real value in terms of business outcomes. Ideally, you need to strive for BobbyBen, who is bimodal and balanced and can leverage both aspects to achieve desired results and make an impact on strategy execution.
  • Measurements drive behavior, and every measure has a consequence. The key is to identify the leading measures that drive the desired behavior.
  • A simple way to practice the ideas of this chapter is to define and measure success with the OKR framework, defining balanced measures/metrics for Ben and BoB and what leads to each of them.
  • Seek feedback and adjust measures. Check and validate the metrics, collection mechanism, calculation formula, and outputs with stakeholders to see if this makes sense to them. The idea is to create a dialogue to keep inquiring what is valuable to them and the organization.
  • Not everything of value can necessarily be measured. To harness the elusive nature of project management and PMO value, it is critical to manage perceptions and perceived value.