CHAPTER 4, Part II: Process Management Advances Data Creation

As used here, a “process” is “any sequence of work activities, characterized by common inputs and outputs and directed to a common goal.”26 Simple enough. But note that the term, in and of itself, applies equally well to the simple task of making yourself a cup of coffee as it does to multi-year efforts to integrate departments after a multi-billion-dollar merger. Here we are primarily interested in the data-creating aspects of a process. Thus the process of “making yourself a cup of coffee” does produce a cup of coffee. It may also produce data, as in a note to “buy more coffee.”

I call simple processes “little-p processes.” Taking customer orders and invoicing customers are good examples. Each produces a small amount of basic (and important) data. The term “little-p” is relative—little-p processes usually involve only a single team, a unit of work can be completed in a few hours or days, and the actual work is often conducted within a single department, with few organizational interfaces or handoffs.

Little-p processes are almost always parts of larger ones. For a hospital, the most important process may start when a patient checks in, follow her to her room and through a sequence of clinics, operating theaters, and labs, and conclude when all of the bills are paid. The associated data creation and usage are also part and parcel of a “Big-P Process.” Organizations, even massive ones, have no more than a dozen or so Big-P Processes. Middle-p processes occupy the ground in between.

Instructions:

  1. Manage data creation as a process.
  2. Vest overall responsibilities for high-quality data creation in a process owner or process management team.
  3. Extend the voice of the customer to align work in the direction of the customer.
  4. Look for opportunities to improve on the interfaces between steps. In doing so, complement functional management.
  5. Build organizational capabilities. Process managers must learn to lead through influence, not formal authority.
  6. Employ an “embedded data manager,” with solid expertise in the “how-tos” of doing this work effectively.

One especially important process involves the creation of data definitions, and the instructions of both parts of this Chapter apply to it. Still, those responsible for this work must understand an underlying dynamic and some subtleties if they are to be effective. The final section of this Chapter explores these and summarizes special instructions.

Manage Data Creation as a Process

See Figure 4.9, which presents the process management cycle.27 From a data perspective, process management “works” for four inter-related reasons. First, it aligns the work in the direction of the customer. Without process management, customers, other than the boss, receive short shrift.

Second, this alignment helps bridge silos between work teams and departments. This is a big deal—earlier I had noted that a poor connection between customers and data creators contributed to every data quality problem I have worked on. That problem also arises during the creation of more complex data products, especially those that require contributions from several creators working in silos that “don’t talk.”

Figure 4.9 Process management cycle.

Third, the process framework clarifies managerial responsibilities for data creation, the “who” this book promises.

Fourth, the process framework is unmatched at bringing the tasks described in Part I into a powerful, coherent whole. You have to do them anyway. You should do them in a powerful way!

I’ve already discussed the basics for some steps. In the remainder of this chapter I’ll expand on select topics.

Clarify Managerial Responsibilities

I can’t overemphasize the importance of clear managerial responsibilities for data creation. The process management cycle takes care of that right up front, in step 1. The basic idea is that a “process manager” (be it a single executive/owner/manager or, if the process is large and complex, a process management team) is held accountable for end-to-end performance. From a data perspective, this means delivering high-quality data to customers at a reasonable cost. In some cases, data creation is the raison d’être; in other cases, physical product may also be created or moved.

A reasonable starting point for working out accountabilities includes:

  • A mid-term quality target, such as “Within two years, have in place a quality program such that our most important customers receive far better data from us.”
  • Authorities to effect changes to meet the target (e.g. budget, staffing, process design).
  • Authority to import or build the skills needed to complete the work called for throughout the process management cycle.

Process management stands in contrast to functional management in that it points horizontally, across tasks, functions, and departments, in the direction of the customer. Functional management points vertically, up and down the management chain. Many people fall into the trap of viewing the two as competing management styles, but I find that shortsighted. Functional management promotes the effective and efficient completion of tasks, and it is incredibly important that tasks be completed well. Process management, on the other hand, aims to knit tasks together into a more powerful whole.

Effective process managers take steps that help them add enormous value while avoiding the more obvious conflicts with functional management, as described in the following sections.

Extend the Voice of the Customer

Effective process managers spend an enormous amount of time building communications channels, emphasizing the Voice of the Customer, and aligning the work in the direction of the customer. Importantly, they recognize that people’s bosses are customers.

Figure 4.10 presents a more complex process, featuring three suppliers, four customers, and six steps. As the figure depicts, an effective process owner not only makes sure that all hear the voice of the customer, but also that everyone understands the needs of the next step. This effort extends to data suppliers, both those outside the process and those outside the company.

Figure 4.10 Effective process managers seemingly go overboard in communications, fixing broken or non-existent requirements channels, and aligning work groups in the direction of the customer.

Look for Improvement Opportunities on the Interfaces Between Tasks/Steps

I’ve mentioned several times that a contributing factor to every data quality issue I’ve worked on for nearly 30 years has been the failure of data creators to understand customer needs. A second factor is broken interfaces between the steps needed to deliver on those needs. Work team and departmental silos often lie at the root of the issue. Each team bears responsibility for its own work, but no one feels responsible for the interface. Process owners, with their end-to-end view, do have such responsibilities.

I sometimes say that “silos are the enemy of data quality” because they make much of the basic communications that data quality requires more difficult.

Consider Figure 4.11. Here the focus is shortening the end-to-end cycle time of the process, currently seven days. Note that the actual work takes one day, while the remaining six days are spent in queues. Cutting the actual work time in half would take considerable effort and only shorten the total time to 6.5 days. So the effective process owner focuses instead on the queue time: the time between steps, on the interfaces between work teams, when no real work is accomplished. It is far easier to shorten queue time by improving these interfaces, and here cutting it in half saves three days. Further, it is much simpler to get those responsible for the steps to work together on such efforts.
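To make the arithmetic concrete, here is a minimal sketch using the illustrative figures above (one day of work, six days in queues); the variable names are mine and the numbers are for illustration only.

```python
# Illustrative cycle-time arithmetic for a process like the one in Figure 4.11:
# one day of actual work, six days spent in queues between steps.
work_days = 1.0
queue_days = 6.0
total_days = work_days + queue_days  # 7 days end to end

# Option A: halve the actual work time (hard to do, and it saves little).
after_halving_work = work_days / 2 + queue_days      # 6.5 days

# Option B: halve the queue time by improving interfaces (far easier).
after_halving_queue = work_days + queue_days / 2     # 4.0 days

print(f"current cycle time: {total_days:.1f} days")
print(f"halve work time:    {after_halving_work:.1f} days (saves {total_days - after_halving_work:.1f})")
print(f"halve queue time:   {after_halving_queue:.1f} days (saves {total_days - after_halving_queue:.1f})")
```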

Figure 4.11 To shorten cycle time, process owners focus first on “queue time” between steps, not on the way the steps are performed.

Build Organizational Capabilities

Figure 4.12 proposes the “getting started” organizational structure for data creators in the context of the process management cycle (modify it as needed if you don’t use process management). Key features include a process owner (or, depending on the size of the process, a process management team). Other teams complete the work associated with data creation, on behalf of the process’s data customers. The embedded data manager assists.

Figure 4.12 Recommended organizational structure for data creation.

Note that the figure explicitly assigns sub-teams to complete steps 2, 4, 5, and 7, respectively, of the process management cycle. The complete list of who does what follows in Table 4.4.

Table 4.4 Who does what in data creation.

No. | Step | Who | Rationale
1 | Managerial responsibilities | Process Owner/PMT | PMT must negotiate responsibility and authority.
2 | Understand customer needs | Customer Team | Helps if this work is performed in an ongoing fashion. Having a team that does this (it is part-time work) helps build skill and sort out commonalities among customers.
3 | Understand current process | PMT | This work is usually performed once (with incremental changes as the process changes). While the PMT may contract the work out, it does not justify a standing team.
4 | Measure against customer needs | Measurement Team | Some specialized expertise needed.
5 | Establish control | Control Team | Some specialized expertise needed.
6 | Set targets | PMT | Akin to setting responsibilities, step 1.
7 | Make improvements | PMT and Improvement Team | The PMT selects improvement projects and gets the right people on board. From there, the improvement team takes over. Note: improvement teams are not standing teams; they disband when their work is complete.
All | All steps | Embedded Data Manager | Assistance with all steps, leadership on many.

Learn to manage cross-functionally

Managing a process, even one that crosses team lines within your department, can feel like herding cats. That feeling grows the larger the process gets and the more work teams, departments, and people are involved. After all, these things mean more cats. Worse, conflicts between vertically oriented functional management and horizontally oriented process management are sure to arise. Effective process owners acknowledge these realities and work with them. Of course, the process management cycle is designed to help them do so effectively.

Still, being an effective process manager often requires more. Figure 4.13 presents a notion that I learned at AT&T’s management charm school. The basic idea is simple: the inner circle represents one’s “span of control,” the things he or she can make happen, while the outer circle represents one’s “span of influence,” the things one can impact, though not directly. As the figure depicts, process managers strive to increase their spans of influence. To be effective, process managers must accept that they have less control and learn how to build influence.

Figure 4.13 Process owners strive to increase their influence.

Employ Embedded Data Managers

While the actual work involved in doing each of these steps is not technically difficult, it may be new and unfamiliar. Embedding expertise in the process moves the work along. I call the person with this expertise the embedded data manager. He or she takes lead responsibility for the measurement and control work and facilitates the customer needs and quality improvement teams. He or she also coordinates the process’s contribution to company-wide initiatives, such as company-wide data quality targets and common definitions. As a practical matter, the embedded data manager knows the data, and helps people interpret and use the data in new, creative ways. In many respects, the embedded data manager is “the tip of the spear” in a data quality program. It is an exciting, multi-faceted role.

It takes both training and experience to grow into the embedded data manager role. Plan on two to five days of formal training, encourage embedded managers to read extensively, and get them to join relevant professional associations. They’ll grow more effective with experience as well. Expect about two years to become fully effective.

A rule of thumb is that a solid data quality program requires about one embedded data manager per 100 people, more when the data or process is highly complex and many more for commercial relationships.
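As a back-of-the-envelope illustration of that rule of thumb, consider this small sketch; the complexity multiplier and the example headcounts are my own assumptions, not figures from the text.

```python
import math

def embedded_data_managers(headcount, complexity_multiplier=1.0):
    # Rule of thumb: about one embedded data manager per 100 people.
    # The complexity_multiplier (> 1 for highly complex data or processes)
    # is an assumption of this sketch, not a figure from the text.
    return math.ceil(headcount / 100 * complexity_multiplier)

print(embedded_data_managers(450))       # 5 for a 450-person process
print(embedded_data_managers(450, 1.5))  # 7 when the data is more complex
```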

The Fundamental Organization Unit for Data Quality

When I introduced the concepts of data customer and data creator in Chapter 1, I also pointed out that all of us play both data roles—sometimes simultaneously. We use data from others to do our work and others depend on our data to do theirs. While the two roles are separate, they are also linked.

This observation, coupled with my preference to view both roles in the context of process, leads me to Figure 4.14. I call it the fundamental organization unit for data quality owing to its power in describing the base capabilities companies must build.

Figure 4.14 Fundamental Organizational Unit for Data Quality unites the roles of data customers and data creators.

Special Instructions for Creating Common Data Definitions

Few companies enjoy the benefits of clear, shared data definitions. While communications are just fine within silos, cross-departmental communications are strained, resulting in misunderstandings by data customers. Further, the computer systems that support those departments don’t talk well, and translation requires massive hidden data factories. As noted earlier, the instructions of this Chapter apply, though to be effective, a deeper understanding of an underlying dynamic and some subtleties is needed.

Underlying dynamic

I’ll illustrate using two of my grandsons, here called Jack and Charlie. The action unfolds one morning, after the two of them had spent the night with me and my wife. They sleep in the same room at our house and they woke up at about 6:00 AM. But rather than screaming out for me and Grandma to come get them, they just talked to one another—for a good, long while. Finally one of them decided he was hungry and called out. And when my wife Nancy went into their bedroom, Jack said, “Charlie’s hungry.”

Now here’s the interesting thing. Charlie, at 18 months, didn’t know more than a few words of English. But it was clear enough that both he and his 4-year-old brother knew lots of words—enough to hold a 20-minute conversation. Those words were just in their own private vocabulary.

I find this vignette especially enlightening. A new language, let’s call it the Jack–Charlie language, developed quickly in response to the specific needs of two brothers. It may die quickly as well. As Charlie learns new words in English, a special language won’t be critical.28

Of course this process is not confined to brothers. Specialized vocabularies develop to support new disciplines, new departments, new problems, and new opportunities. Words take on special, subtle, nuanced, and precise meanings, perfect for the circumstances and unsuitable for other settings. For example, in a team at AT&T I’ll describe in Chapter 9, the terms “risk” and “consequence” took on quite specialized meanings. Unless you’re stuck in a rut, this process is certainly occurring all over your company today, even as you read these words. Language constantly grows and divides.

Many companies treat metadata (descriptive information about the data) as an esoteric IT problem. After all, the issue often comes up when business language is translated into computer systems. But as this discussion clarifies, it is a business issue and must be addressed as such.

This process is normal and healthy, even essential to problem solving. You cannot, and should not, do anything to interfere. Quite the contrary, you should figure out how to take advantage.

In parallel, you must make sure there is enough common language so the company doesn’t deteriorate into a Tower of Babel. Note that this is exactly what Jack did—using English to clue Nancy and me in on what they needed us to know.

Thus, to get in front of the language dynamic, follow these two simple rules:

Rule 1: Do all you can to encourage the growth of specialized language to meet specific needs, when and where needed. Insist on solid definitions for all terms, once they are used by more than a single work team.

Rule 2: Provide the skinniest common language possible to facilitate company-wide communication.

In practice, this means creating a company-wide process for creating and documenting definitions of important business terms, communicating them through some sort of data dictionary, and building them into databases and applications. I’ll use Aera Energy as an example in Chapter 9.
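To make “some sort of data dictionary” concrete, here is a minimal sketch of what a single dictionary entry might record; the field names, the example term, and its definition are illustrative assumptions, not Aera’s actual dictionary.

```python
from dataclasses import dataclass, field

@dataclass
class DictionaryEntry:
    """One entry in a company-wide data dictionary (illustrative fields only)."""
    term: str                 # the business term being defined
    definition: str           # the agreed, plain-language definition
    scope: str                # "common" (company-wide) or the owning team
    steward: str              # person accountable for keeping it current
    approved_on: str          # date the definition was formally adopted
    related_terms: list[str] = field(default_factory=list)

# A hypothetical entry, purely for illustration.
entry = DictionaryEntry(
    term="Customer",
    definition="A party with at least one active service agreement.",
    scope="common",
    steward="Chief Data Architect",
    approved_on="2024-03-01",
    related_terms=["Account", "Service Agreement"],
)
print(f"{entry.term}: {entry.definition}")
```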

Common language is not an IT issue

The Jack–Charlie vignette and the two rules apply to all companies, and following them requires some skill. Before jumping into that, it bears mentioning that the need for common definitions usually comes up as the solution to an incompatible-systems problem, as companies try to interconnect their systems. The accounting system doesn’t talk to the sales system, which doesn’t talk to the engineering system, and so on. The essence of the problem, though possibly not the details, is the same at all levels, from department to company to industry.

Of course, “systems not talking” is an overstatement. They do, eventually, as people work hard to translate the terms used in one system to the terms used in another. More on this in Chapter 8.

Since the “systems not talking” problem presents itself as a “systems problem,” it is assigned to the IT department, which treats it as a data integration problem. And IT does the best it can. Assisted by a suite of tools, IT seeks to line up the systems by translating the terminology (e.g., the definitions of data items) used in one system into the terminology used in another. And vice versa. If there are n systems, that means roughly n² translations (though in practice there are fewer, because not all systems need to talk to one another). Progress is almost always slow and fraught.
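A small sketch of the arithmetic behind that claim; the system names are hypothetical, and the counts simply compare directed translation pairs with mappings to and from one skinny common language (Rule 2).

```python
from itertools import permutations

# Hypothetical systems, for illustration only.
systems = ["accounting", "sales", "engineering", "billing", "provisioning"]
n = len(systems)

# Point-to-point integration: a translation from each system's terminology to
# every other system's. With n systems that is up to n*(n-1) directed mappings,
# roughly n^2 as n grows (fewer in practice, since not every pair must talk).
pairwise = list(permutations(systems, 2))
print(f"{n} systems -> up to {len(pairwise)} point-to-point translations")

# Translating each system to and from a single skinny common language needs
# only about 2*n mappings.
print(f"via a common language -> about {2 * n} translations")
```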

Sometimes IT tries to get in front and drive a common language across the enterprise. The effort goes by many names, such as data integration, enterprise architecture, and master data management. In doing so, it violates the spirit of rule 1 and the letter of rule 2, almost guaranteeing failure. It is akin to asking Jack and Charlie not to talk to each other since Charlie doesn’t know enough English!

To say the same thing from a slightly different angle, the best computer systems faithfully represent the language employed by their users. Just as it is difficult for people to communicate when they don’t share a common language, the same is true for computers. It is actually far worse for computers. Humans adapt and they adapt quickly. Computers, so far anyway, have proven considerably less adaptable. And they’ve not yet shown the ability to develop a new language, as Jack and Charlie did.

The fact that computers don’t talk is not a computer problem. It is a language problem and it can only be solved as such.

Manage the creation of data definitions as you would any other process:

  1. Manage the creation of data definitions as an end-to-end process, with the goal of implementing the two rules noted above. Thus the process produces definitions for all key terms. A few should become common, shared definitions. I find it helpful to liken this portion of the process to forming international standards with a formal method for proposing topics, soliciting the widest variety of standards, ample discussion, and a formal “vote.”
  2. That process must have an owner, perhaps called the Chief Data Architect. Since common definitions must reach across departments, this person should report to the leader of the Corporate Data Quality Team (Chapter 5), not to IT, emphasizing the point that data definitions are not the province of IT. This person must have the considerable personal gravitas needed to carry out this role.
  3. Get the right people involved. Anyone whose work may be impacted by a common definition should have a say in its creation. Embedded data managers play an important role.
  4. Publish all definitions via a data dictionary and make it easy for people to access that dictionary.
  5. Proactively socialize the common definitions.
  6. Don’t overreach. While hundreds of definitions are needed, even the most complex organization probably requires no more than 100 common ones. For example, Aera (Chapter 9) gets along just fine with 53.
  7. Use all definitions, especially the common ones, in new systems development going forward.

In Summary

Process management is the preferred framework for managing data creation. It is also a terrific framework for pulling the roles of creator and customer into a powerful whole. Even if your company doesn’t use process management more broadly, use it extensively here. It is especially critical that you adopt a process framework for the creation of common data definitions.
