Chapter 6. Defect management

This chapter covers

  • Exploring types of defects
  • Reviewing your brownfield project for defects
  • Understanding the anatomy of a defect
  • Achieving Zero Defect Count

On the majority of software projects, most of the effort is put into the construction tasks required to create something tangible. We focus on the analysis, design, and coding that produces the software. Inevitably during that effort, you’ll make errors and incorrect assumptions that lead to defects. How you react to these defects is an important factor in the health of your project.

Projects have a wide range of techniques, both explicit and implicit, that they employ to disseminate information about defects. The techniques will vary, from word of mouth to tools that enforce formalized workflows. Regardless of the technique, every project needs to have a process for managing defect information.

There’s ample opportunity for friction and pain points to surface in the defect-tracking process. In brownfield projects, it’s common to see large backlogs of defects that haven’t been addressed in many months (or at all). Defects that have been lingering in a backlog for any amount of time will be of questionable quality and should be reviewed. The effort required to determine the quality of a defect creates friction in the resolution process.

On some projects, you’ll step into the fray only to find that nobody from the development, testing, or client teams knows what defects even exist. There’s a difference between having no defect-tracking tool and having no defect-tracking process. It’s possible to stay on top of defects without a tool, but you can’t do it without a process.

Similar to the contaminated code aspect of a brownfield project, it’s likely you’ll inherit a backlog of unresolved defects when you’re starting. Reducing the backlog effectively is a significant pain point when you’re first starting a project—especially when you take into account that many teams don’t consider defect fixes to be “real work.” Although this chapter is about managing defects, we’ll use it to frame an important discussion on how to counter the defects-are-beneath-me mentality.

Gathering defect-related information lets you identify trends, recurring scenarios, and specific pain points. Because pain points are among the things defect information can reveal, let’s start by discussing the pain points that can impede effective defect gathering itself.

6.1. Pain points

Joni walked into the meeting room full of hope. She was taking a look at the defect list for the first time and her co-worker, Mitchell, was going to lead her through it. Mitchell sauntered in with a stack of papers covered in Post-It notes.

“What’s all that?” asked Joni.

Mitchell dropped his pile on the table with a flourish. “That,” he said, “is my personal defect list.”

Joni looked at the stack of papers with growing dread. “I thought we had a tool in place to keep track of them.”

Mitchell scoffed. “Pfft, that thing is so out of date it’s almost retro. I stopped using it when I started seeing active defects logged against version 1.2. We’re on version 5. So I keep my own personal list.”

Just then Pearl poked her head into the meeting room. “Mitch, sorry to bug you. I’m going to take the defect where the user can’t log in on stat holidays.”

Mitchell started rummaging through his pile. After a few moments, he looked up. “I don’t have that one with me. I think Bailey was going to tackle it... actually, now that I think about it, that one was fixed a while back. Here, try this one.” And he handed Pearl a well-weathered pink Post-It.

Now, there are some obvious problems with this company’s defect tracking. First, although they have a system in place, it has been allowed to decay. So the team has had to work around this situation by coming up with their own inefficient system. Furthermore, there’s no real way to track the progress of defects in their system. One also wonders how the testing team or the client even reports them.

Without a good defect-tracking process, you lose confidence in the application—a situation that inevitably affects the overall quality (see figure 6.1). Your team rarely sees the results of fixed defects; they see only more and more of them being reported. After all the effort you’ve put into building confidence with a functional version control system (VCS), a continuous integration (CI) process, and automated tests, it would be a shame to have it fall apart during defect tracking.

Figure 6.1. Your policy toward defects will affect both your project team’s confidence and the overall quality.

So let’s take a look at how you can manage the inevitability of defects. We’ll begin with a breakdown of the various kinds of defects.

6.2. Types of defects

In section 6.1, you read about one of the classic pain points with defect tracking on brownfield projects: the current tracking tool simply isn’t fulfilling the project’s needs. An easy way to determine if this is the case is to answer this question: Is the team using the tool? If the answer is no (or one of its cousins: “not really,” “kind of,” “we used to,” or any answer that starts with “well...”), frustration and angst are bound to grow between testers and developers (in both directions) as well as between the client and the team.

Assuming there’s a system in place, it’s only as useful as the entries within it. With brownfield projects in particular, you’ll find different classes of defects that aren’t useful—“defective” defects, as it were. This definition includes defects that

  • Have been resolved but not closed (languishing entries)
  • Are incomplete
  • Are duplicates of existing defects or were tested against an incorrect version of the application (spurious entries)
  • Are feature requests in disguise

Figure 6.2 shows each of these types.

Figure 6.2. Defects come in many flavors: languishing, incomplete, spurious, and feature requests.

Each of these defect types has its own nuances, reasons for existing, and special attributes. To understand how to deal with their existence, we first need to look at each type in more detail.

6.2.1. Languishing entries

Languishing entries are defects that were entered at some point in time, probably justifiably, but have never been resolved in the tracking system. Languishing defects may have been correctly resolved by the technical team, closed as nondefects by the testers or client, or just never attended to.

A backlog of languishing defects can indicate a couple of things. It might mean that the team or client has lost interest in using the software and hasn’t updated the status of the defects. Perhaps the software is cumbersome to use, or the team doesn’t see any value in updating the status, or the culture of the team is such that they believe this mundane task is beneath them.

There are other reasons defects could languish, but regardless, their existence is a sign that there’s a problem with the defect-tracking system or the process for managing defects.

6.2.2. Incomplete entries

No matter the age or freshness of the defects in your system, they’re only as useful as the information that each one contains. Many project team members (and clients) are notoriously lax in providing fully fleshed-out details within a defect entry. Examples include entries that say nothing more than “cannot log into the system” or “printing error.”

Although it may seem like overkill to provide full detail at the time of defect creation, memories fade over time. Without that detail, testers, developers, and clients may not have enough information to accurately work on the defect or search for past occurrences. The pain from this problem usually isn’t felt in the initial phases of development, testing, and release of software. It sneaks up on a team over time and quietly wreaks havoc on the efficiency of the process. We discuss the makeup of a good defect entry in section 6.5.

6.2.3. Spurious entries

Of all the types of defects, none are more frustrating for a team member than those that add work without providing any tangible benefit. Spurious defect entries do just that. On brownfield projects that have a historical backlog of defects, there will undoubtedly be defects that aren’t really defects.

There are a few reasons why an entry could be spurious. Two people may have recorded the same defect (duplicate entries). The wrong version of the application may have been pushed to the test team and they logged defects against an incorrect version. The defect could simply be nonreproducible.

Regardless of the reasons behind their creation, this type of defect is sometimes referred to as a ghost, meaning one that you spend time chasing but can never find.

6.2.4. Feature requests

These defects are another favorite of development teams. They’re feature requests masquerading as defects. For example: “The Save button doesn’t send an email confirmation to the customer” or “There’s no way to print an invoice.” These are fundamental pieces of functionality that the application should provide but doesn’t.

At some point, all projects seem to acquire feature requests as defects. Depending on the software development methodology you follow (waterfall, agile, or something else), they often lead to arguments about scope and schedule. Being able to distinguish between defects and feature requests isn’t easy; sometimes it’s downright political. As a result, it’s common for this type of defect entry to turn into a languishing entry.

You can tell a lot about a team by its attitude toward defects. In our experience, defects, and the mechanism of tracking them, regularly cause pain on projects, brownfield or not. Although some of the issues can be institutional, and much harder to resolve, significant gains can be made by attending to the defect-tracking and -management processes. Let’s look at what to expect during the initial review of your brownfield project’s defects.

6.3. The initial defect triage

Many brownfield projects treat defects as a separate project stage. Often defects are recorded as the team concentrates on the ongoing development effort, but they aren’t addressed right away. Instead, defects are allowed to pile up unattended with the intent that “We’ll fix them before the next release.”

This mentality is often a precursor to becoming a brownfield application. It can lead to hundreds or even thousands of existing deficiencies that have never even been acknowledged, let alone reviewed. Under these pretenses, defects become stale quickly. You’ll probably find yourself in this situation when starting on a brownfield project.


Tales from the trenches: The worst-case scenario

On one project, our first two months of work were consumed by an enormous defect backlog and the effort required to work through it. This project had yet to go to production and at the time had over 700 open defects logged against it. As we worked through the defects, we had to be efficient in all the aspects of defect triage, information gathering, and resolution that you’ll read about in this chapter.

Luckily, we had a sizable development team that could take care of fixing the issues. Unluckily, we had defects that were over 2 years old, whose original reporter had left the project, that were logged against drastically changed modules, and/or that were unclearly documented. As you can imagine, a lot of time was spent on defects that were no longer valid. This effort was required so that the developers, testers, management, and client could regain confidence in the system’s functionality and correctness.

In the end we managed to survive the 2 months dedicated to this task (immediately prior to production release). Beyond the skills to work through a defect mess such as the one we encountered here, the biggest thing we learned is that although Zero Defect Count (discussed in section 6.6) seems impossible to achieve, it’s far better than the alternative that we’d just encountered.

After the mass defect resolution, we told our management (we didn’t ask) that the development team would be, from that point forward, working under a policy of Zero Defect Count. They laughed at our naive idealism. For the remaining 10 months on that project, we worked hard, every day, to fix defects as quickly as they were entered in the tracking system. During that time, we rarely had more than a couple dozen open defects at any one time. On top of that, we were able to maintain our new feature delivery velocity.

So believe us when we say that we’ve been there and done that. If you’re embarking on this journey, we feel for you—really, we do—but we also know that you can succeed. Heck, if we can, there’s no reason you can’t.


When you join a brownfield project, you’ll inherit all its defects. How should you manage such a large number of items? When analyzing them, you’ll have to determine their worth to the project. But unlike reviewing existing tests (section 4.4), the analysis of defects is going to involve more than just the development team. The testers and business representatives should be included in this process. They’re the people who best understand the expectations that weren’t met when the defect was created.


Come prepared for the long haul

There’s no sense in sugarcoating it. Unless your idea of a good time is sitting around with a group of people for possibly several hours discussing the minutiae of a few hundred bugs, analyzing defects with your team won’t be a fun task. Nothing causes the psychosomatic sniffles like a 3-hour defect bash.

Going through defects en masse is an arduous task and will take its toll—not just because of the length of the meeting required to go through them but because of the ever-increasing sense of dread you feel as you slowly become aware of the sheer numbers of defects.

Use whatever you can think of to keep the analysis meeting productive. Bring in pizza, take breaks, create a soundtrack that plays in the background (one obvious suggestion for an opening song: “Also Sprach Zarathustra”).

One pitfall to watch for: it’s easy to overanalyze the early defects, leaving you to give short shrift to later ones when fatigue sets in. If you find this pattern occurring, shake things up a bit. Switch meeting rooms or take the team out for coffee (or something stronger). Schedule another meeting if you have to, but in our experience, the team’s attention span gets shorter and shorter with each successive one.


When viewing historical defects the first time, the goal is to get through them quickly. Don’t spend a lot of time analyzing why a defect exists or how you’d solve it.

Depending on the number of defects you have in your backlog, you may not have time to get all the information for each one. The meeting is about deciding three things about each defect:

  • What can and can’t be done about it?
  • Who should do it?
  • How important is it to fix it?

With that in mind, you can do one of three things with a defect during triage: close it, assign it to the testing team, or assign it to the development team, as summarized in figure 6.3.

Figure 6.3. In the initial defect review, the goal is either to close the defect or assign it to the development or test team.

Chances are the only ones that can be closed are spurious defects. You may also encounter, and be able to close, defects that have been unconsciously resolved—ones that have been fixed without anyone realizing it. These defects are a relief in the triage process; most will require more work before being resolved.

Defects that can’t be closed need to be assigned. Again, you don’t want to use this time for drawn-out design discussions. There are two possible assignments: the development team or the testing team. If you can easily reproduce the defect and the testing team or client confirms that it’s still a valid defect, it goes to the development team. Otherwise, it goes to the testing team to investigate further. Their responsibility is to provide further information on the defect, such as steps to reproduce it or clarification about the business functionality that’s not being met.
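
To make the routing rule explicit, some teams find it useful to write it down as if it were code. Here’s a minimal sketch in Python; the Defect fields and the outcome strings are ours, not part of any particular tracking tool.

```python
from dataclasses import dataclass

@dataclass
class Defect:
    id: int
    reproducible: bool     # could the team reproduce it during triage?
    confirmed_valid: bool  # did the testers/client confirm it's still a real defect?
    spurious: bool         # duplicate, wrong version, or already fixed

def triage(defect: Defect) -> str:
    """Return the triage outcome: close it, or assign it to a team."""
    if defect.spurious:
        return "close"                  # spurious or already-resolved defects get closed
    if defect.reproducible and defect.confirmed_valid:
        return "assign to development"  # confirmed, reproducible defects go to developers
    return "assign to testing"          # everything else needs more investigation first

# Example: a confirmed, reproducible defect goes straight to the development team
print(triage(Defect(id=101, reproducible=True, confirmed_valid=True, spurious=False)))
```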


Bring the testers and client

It’s important to include the testing team and the client at the initial defect review. They will be invaluable in determining context around many of the defects. Without them, there’s a definite risk that some defects will be closed erroneously due to misinterpretation.

If the client isn’t available, use a client proxy. A client proxy is a person who has detailed knowledge about the business domain and has been empowered to act on behalf of the client. Although they may not be rooted in the business on a day-to-day basis, they either have historical context to work from or the ability to quickly gather contextual information from the client.

A client proxy won’t have all the answers. There will be times where she has to return to the client for clarification and possibly even decisions. Effective client proxies are trusted by both the business and the project team to accurately represent the needs of the client and thus they’ve been given the authority to act on behalf of the client.


During the initial review, you’ll attempt to reproduce several of the defects. Because you’re working with a large backlog of defects, efficiency is important. Dealing with difficult-to-reproduce defects significantly decreases your triage velocity.

Backlog defects will, by nature, be hard to reproduce. The longer a defect is left unattended, the more likely there will be too little information to reproduce it. It’s also possible that the application has been altered and the behavior seen in, or around, the defect has changed. Your defect resolution efforts will run into both of these situations.

One of the primary reasons that the defect can’t be reproduced is that the information contained in the defect report either isn’t clear or is incomplete. If the defect’s creator is at the meeting, it’s possible that you’ll be able to get quick clarification from them.

There will be times when you engage the original defect creator in a discussion and they aren’t able to provide you with an immediate answer. Like you, the originator of the defect probably hasn’t worked with it for some time. It’s entirely possible that they don’t remember the scenario, the intricacies of the business, or the goal of that portion of the application. Instead of working with speculation, assign the defect in question to them so they can take their research offline. The time spent doing better research on the situation will provide you with a far better solution and, ultimately, a better software product.

If you have to hand the defect off to another person for further research, take the time to note on that defect entry what you’ve done to that point. This note will help the person doing the research as well as the person who has to reproduce it next. That may be the next developer assigned to resolve the defect, or it could be the tester who’s working to prove the fix that the development team implemented.

In the end, if the defect truly isn’t reproducible by all efforts, don’t be afraid to close it. Be sure to document all your efforts in the event the defect arises again in the future.

After working your way through the defect triage, the real work begins. Your testers have their assignments and you, the development team, have yours. Next, we’ll see how to plan out your work.

6.4. Organizing the effort

Your first obstacle when working through a large defect list will be one of scheduling. Rarely is the project going to have the luxury of assigning the entire development team to the defect backlog at one time. Even if management makes a commitment to quality, they always seem to dictate that the project must continue moving forward with new features while the team figures out a way to work through the defect backlog simultaneously. As figure 6.4 shows, working on defect analysis in parallel with ongoing development is possible, but pieces of the analysis become more difficult.

Figure 6.4. New feature development and defect resolution will almost always occur in parallel.

As stated in previous sections, defect analysis regularly requires the input of the business, the testers, and the original creator of the defect for a full understanding. If the development team is paralleling the defect analysis with ongoing development, it’s a safe guess that the rest of the project team is also going to be working on the ongoing development while they’re fielding questions from the defect analysis. And although your team may be prepared to handle the parallel work, the other project members may not have had the same pep talk. As a result, information gathering from those groups may be slower than you’d hope.

Because this type of delay can occur, it’s important to involve the entire project team at the beginning of your defect effort to ensure you can get through the backlog with some efficiency. Be sure your testers and the client know that you’ll be calling on them about new features and existing defects. That way, they don’t get the impression that the development effort is fragmented and unfocused.

6.4.1. Tackling the defect list

If one thing could be designated as “nice” about a large defect backlog on a brownfield project, it’s that you’re not starved for choices. There are a lot of defects to choose from. If you have outstanding questions with other team members who are busy, your team can carry on with other defects and return when answers are provided. Although not an ideal workflow, this technique can be effective when attention is paid to the outstanding questions.

The reason this workflow isn’t ideal is that it’s better to start and complete each defect as a single contiguous piece of work. One of the reasons that a large defect backlog exists is precisely the mentality of “We’ll get back to those when we have time.” Don’t let this mentality infect your effort to eliminate the backlog. Be diligent in your pursuit of the needed clarifications. Be persistent to the point of annoyance when requesting information from other teams.


Warning

It can be a dangerously fine line to tread between being annoying and being persistent, but persistence can be effective in small doses. If you’re consistently determined near the start of a project, you’ll establish that this effort is important and that you aren’t going away until it’s done. The biggest risk you run is that other groups on the project team will shut down communications rather than deal with you—which is why we suggest parsimonious use of this tactic and that it’s always practiced with politeness and gratitude.


While working on the defect backlog, set milestones for the project to aim for. Depending on the size of the backlog, you may have a milestone of completing the analysis of the first 100 defects. Not only are these goals good for the morale of the backlog analysis group, but they also provide the backlog team with reminders to thank the other groups and people on whom they’ve been leaning during the process. Regardless of the size of the milestones, or even whether you use them at all, it’s important to include the other contributors in the process. They should receive their due credit and, ideally, see and buy into the group’s effort to increase product quality. It’s another small and subtle step that can be used to create a project-level culture of quality.

6.4.2. Maintaining momentum

No matter the level of commitment to quality the team has, working on defect analysis can be a tedious and draining experience. If you keep the same group on defect analysis for any amount of time, their morale will decline and, with it, the quality of defect resolutions. If you’re in a position where some developers are doing ongoing development in parallel with the defect resolution team, try swapping teams out at regular intervals.

One way we’ve handled this inevitable decay in morale was to swap different people into the defect analysis role at the end of new feature iterations (see figure 6.5). This technique helped maintain a steady analysis rhythm for the team. It also made it easy to include information in verbal or written reports for management, who will certainly want to know how the effort is proceeding. The most obvious result of this method of managing the work is that fewer people will be working on the ongoing development effort. In some situations, this approach may not be feasible.

Figure 6.5. Swapping team members regularly from defect resolution to new features helps keep morale from flagging.

Another method that has worked well for us is to tack a defect-resolution task onto the end of each ongoing development task. With this technique, once a developer has finished a task assigned to her, she can then grab three to five defects off the backlog.

In this scenario, the analysis effort tends to proceed in fits and starts so it’s much more difficult to know the speed at which the work is proceeding. In the end, the effort usually spreads out over a longer time frame when using this approach, but it also promotes the idea that defects are everyone’s responsibility.

Although both of those techniques have drawbacks, they do provide one significant benefit: collective code ownership. Everyone is responsible for fixing defects, which will help ingrain the belief that anyone can work on any part of the application. Collective ownership is an important aspect of your commitment to quality.

Regardless of the techniques you implement when resolving your existing defect backlog, make sure to pace yourself. Resolving defects correctly is the most important thing; reducing the size of the list comes second. Some defect backlogs are intimidating and need to be handled in smaller pieces. Don’t be afraid to take on only what the team can handle at one time. Any amount of progress is positive. In section 6.6, we’ll talk about the concept of Zero Defect Count, which is a goal to strive toward. This goal will provide a lasting positive effect with regard to the quality of your project.


Challenge your assumptions: Collective ownership

Collective ownership promotes the concept that all developers feel responsible for making all parts of the application better. More traditional thoughts on code assignments create silos, where a very few people are responsible for one small portion of the application. As a result, you end up with the data access guy, the web services guy, the UI guy, and so on. None of those people will feel comfortable working in someone else’s code.

Regularly you hear the phrase “hit by the bus” on projects—which is kind of morbid but appropriate. People are talking about the risk associated with having one person responsible for a critical portion of the application. Siloed code responsibilities promote a hit-by-the-bus mentality. Instead of dispersing the risk, the risk is concentrated and focused into very small areas.

Collective ownership doesn’t mean that all developers must have the same level of understanding across the codebase. Rather, it implies that the developers are all comfortable working in any place in the application at any time.

If you want to increase the ability of your team to react to changes or defects, collective ownership is one of the best methods to employ. You need time and team discipline to implement it, but the long-term benefits are remarkable.


Regardless of the method you’re using while working through your defect list, you’ll almost certainly encounter one particular type of defect regularly: spurious defects. This topic warrants a little special attention.

6.4.3. Dealing with spurious defects

While you’re working to reproduce a defect, it may become clear that the entry isn’t a problem with the system. In fact, a defect entry may not be a problem at all. It may be a misunderstanding of system or business functionality, or it may simply be wrong.

Spurious, or false, defects will absorb more analysis time than any other type of defect in your backlog. Not only do you spend time trying to verify that the defect exists, but you also need to validate that the defect is spurious in order to justify that conclusion to the testing team and/or client. It’s not enough to declare that the defect doesn’t exist; you must also be confident in stating that the functionality in question is fully correct. You don’t want to be on the receiving end of the oft-quoted cynicism, “It’s not a bug, it’s a feature.”

As a result, false defects are difficult to accurately identify. You need to have a clear set of criteria for falsity. There are three reasons for a defect to be considered false. Meeting any one of these criteria indicates that you have a spurious defect:

  • The behavior explained in the defect doesn’t exist in the application.
  • The assertion about the expected functionality in the defect explanation is incorrect.
  • The details of the defect are duplicated in another defect.

A couple of situations lead to the first criterion. The defect’s creator could’ve misdiagnosed the application’s behavior while they were using it. Or the application’s expected functionality could’ve changed since the time that the defect was created.

The second criterion will be more common than the first on most brownfield projects. Defects that have been sitting idle for months or years will be fixed without anyone realizing it. Sometimes they’re fixed by proactive developers. Other times they’ll have been fixed by developers too lazy to update the defect-tracking system. It’s also common that the application has organically evolved and the “defective” functionality has either been removed or changed in such a way that it no longer contains the same defect.

The only way that defects in this category will be found is through manual review of the defect backlog. The interesting thing with these defects is that when reviewing them, the client or client proxy interaction is usually quite quick. When presented with the scenario, people with knowledge of the business will quickly realize how incorrect the perceived functionality was. Consequently, the defect is marked as spurious and filed away with no further action needed.

The third criterion should be obvious. If defects aren’t often addressed in a timely manner, it’s inevitable that some duplication exists. But before dismissing a defect as a duplicate of another, be sure it is a duplicate. “Received an error while printing” and “Print output is incorrect” aren’t the same thing.

Regardless of the reasons for a spurious defect, take care to note the logic behind closing it. Like nonreproducible defects, spurious defects need to have clear and complete reasoning attached to them. Except in rare cases, you’re probably persisting defects to a system to provide a historical context to your team, the client, or the maintenance team. Spurious defects become valuable if they can clearly state why the application should not work in a certain way. By recording these facts, you’re creating a learning project and, in the bigger picture, a learning company,[1] meaning that lessons learned are retained for current and future members to build on.

1 Author Jeffrey Liker coined this term in his book The Toyota Way (McGraw-Hill, 2004).

A common type of defect you’ll encounter as you work through your backlog is one that’s actually a feature in defect clothing. Although these defects partially fall into the spurious category, they do deserve their own area of discussion. We’ll talk about these, the most emotionally charged of the defect types, next.

6.4.4. Defects versus features

Ah, the age-old debate. The question of defect versus feature has raged since the first quality review of a software application. In the days since, there have been two sides to the argument. On one side stands the software development team who says, “It’s not a bug because we built the functionality based on the knowledge provided to us at the time.” On the other side are the quality reviewers (testers, QA, clients) who simply state, “It’s not what we want or need and therefore it’s a bug.”

Being in conversations that take either of those tones can be frustrating because both sides are, essentially, right. It’s important to remember at all times that the application is being built to service the needs of the client. If something isn’t working the way they need or want it to, you need to address it. Whether you call it a defect or a feature is moot (see figure 6.6).

Figure 6.6. The question about whether something is a defect or a feature is moot. The work still ends up in the backlog either way.

That said, the difference between bug and feature is important. It’s common for clients to log feature requests as defects in the hope of forcing the project team to address them with immediacy. It’s possible to talk all day about the futility and underhandedness of this practice, but it will occur nonetheless.

Brownfield applications, in particular, are susceptible to features masquerading as defects. The state of the project over time has deteriorated to the point where budget and timeline constraints have forced management to deny requests for new features. In response, the client tries their hand at logging a feature request as a defect, sees a few of them get attention that way, and now has a new technique for getting work done.

The problem that the client doesn’t see is that their feature requests as well as their defects are probably being prioritized at the same level. So defects may or may not get fixed before new features get added, or vice versa. In either case, chances are that the client isn’t getting their most desired work done first.

Having a conversation to discuss the finer details of defect versus feature and the criteria that distinguish one from the other often isn’t going to be fruitful in a brownfield project. This is especially true when there’s a large backlog of defects as well as a client pushing new features in as defects. In our experience, convincing the client of the difference isn’t going to change their opinion about the work they want done. All you’ll do is further feed a belief that the project team isn’t interested in serving the needs of the business. You also run the risk of breeding a lot of developers with Hero Programmer Syndrome (see chapter 1, section 1.3.3).

Instead of discussing the nuances of defects and features, we’ve had success by involving the client in deciding the work priority. Sit down with her and discuss the work that’s in the backlog regardless of whether it’s a defect or feature. Once she has some understanding of the remaining work, get her to prioritize the work effort. Have the client rank each outstanding work item and defect from highest to lowest. Then, simply empower her to choose what the team works on next.


Note

Prioritization won’t be a onetime event on any project. Priorities change, so you should review them on a regular basis. If you haven’t yet reached a point where the team has a strong culture of quality, hold these prioritization meetings frequently. Once you’ve started a good rhythm whittling down the backlog of open defects, and the client has started to believe in the process of fixing defects as soon as they appear, you’ll find that work prioritization will only be required when the team is resetting for another concerted effort.


You’ll derive two significant benefits from this kind of collaborative prioritization. First, the client will realize she can’t have everything all the time. She’ll become aware of the choices that need to be made and that although some tasks will bubble to the top of the work list, others will fall to the bottom through her valuation of each one.


Challenge your assumptions: Numerically incrementing priorities

Traditionally defects have been assigned priorities from a list of values that may include High, Medium, Low, and any number of others. When prioritizing, it’s common for defect owners to want their defects to be assigned the highest level of priority possible. The result is a list of defects that are all listed as Urgent or High. Once priorities are assigned in this way, the prioritization values have become meaningless. What’s the most important defect from a list of 50 that have been categorized as High?

A useful technique we’ve used in the past is to assign defects a numeric priority. If you have 50 defects, assign each one a number between 1 and 50 and don’t allow any of the numbers to be reused. Then, each defect is appropriately ranked with respect to the others.

At times this type of prioritization will require hard decisions. When two defects appear to be of equal importance, how do you assign them to different levels? Don’t give in to well-intended deviations from the no-repeated-values rule—they lead straight back to a situation where all items are High priority. Instead, take the time to make a decision on the appropriate value. Ask yourself, “If the product absolutely had to be delivered with only one of these items being addressed, which would it be?” (Hint: the answer can’t be “both.”)
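
As a rough illustration of the idea (not tied to any particular tracking tool), a short check can enforce the no-repeated-values rule. The defect summaries and ranks below are made up.

```python
def validate_ranking(priorities: dict[str, int]) -> None:
    """Ensure every item has a unique rank from 1..N, with no ties and no gaps."""
    ranks = sorted(priorities.values())
    expected = list(range(1, len(priorities) + 1))
    if ranks != expected:
        raise ValueError(f"Ranks must be exactly 1..{len(priorities)} with no ties: got {ranks}")

backlog = {
    "Edit Customer tab order fails leaving Address Line 1": 1,  # most important
    "Save button doesn't email confirmation": 2,
    "Print output is incorrect": 3,                             # least important
}
validate_ranking(backlog)  # raises if two items share a rank or a rank is skipped
```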


The second benefit of this style of collaborative prioritization is that the client will become more closely integrated into the planning effort for the project. Over time, she’ll see and understand the ramifications of her requests and the project team will come to respect those requests much more.

As the client and the team work on prioritization, you can then start to involve the client in your estimating process. A client who understands the project’s prioritization process, plus the work effort estimation that happens, will become a great ally when the project has to make hard decisions about scheduling.

In the end, though, regardless of what tools or techniques you employ in the defect-versus-feature battle, remember the key point that project teams are there to create software that meets the needs of the client. If a request is presented as either a defect or a feature, and the business does need the request filled, you have an obligation to address that business need in some fashion.

We’ve talked a great deal about reviewing existing defect entries and how to address them. Now let’s outline the components of a good defect entry.

6.5. Anatomy of a defect entry

In section 6.2 we touched on the fact that a defect entry may not be the epitome of clarity or completeness. In our experience, incomplete and unclear entries are a common problem on many projects, brownfield or not. Well-meaning team members will create a defect that doesn’t contain enough context or detail for the next person reading it to understand the intended idea. As a result, defects make too many trips between people as clarification is added. Usually the need for frequent clarification isn’t a technical problem; it’s a communication issue. Team members aren’t communicating well with one another.


Challenge your assumptions: Fear not the face-to-face conversation

Many developers prefer disconnected discussion when communicating. Our comfort zone tends to be one where we can craft our thoughts carefully before responding. Email is a fantastic example of this disconnect. So is adding questions or requests for clarification into a defect-tracking system rather than taking a walk to someone else’s office to ask them directly.

Having conversations in person or over the phone will be rewarding both personally and professionally. Face-to-face conversations give all parties the opportunity to clarify their impressions of the subject matter and the chance to explore aspects of the discussion more deeply, which can lead to more meaningful information in the end. On top of that, face-to-face conversations usually take a fraction of the time that email’s write-reply-reply-reply... pattern does.


We’ve all seen defects where all that’s recorded is a summary similar to “Customer Edit screen doesn’t work.” For the consumers of this defect, the problem isn’t stated clearly enough for them to effectively begin diagnosing it. Perhaps the only thing they can do is open the Customer Edit screen to see if it displays an obvious error. For all they know, the end user could be claiming that the screen should have a label that reads “Customer #” instead of “Customer No.” Regardless, defects with too little information lead to inefficiencies and, ultimately, project risk.

Figure 6.7 outlines the characteristics of an ideal defect (as it were).

Figure 6.7. Concise, clear, complete, constructive, and, ultimately, closed are the traits of a good defect entry.

Although the traits outlined in figure 6.7 may seem simple, it’s amazing how rarely these values are found in a logged defect. Let’s look at each of these traits in detail in the hope that you can improve the quality of defects entered in your system.

6.5.1. Concise

Conciseness generally applies only to the defect summary. In most cases, this is the only part of the defect you’ll see in defect reports. For that reason alone, defect summaries should be concise.

Being concise will be vitally important on a brownfield project that has hundreds of defects. You’ll be referring to the defect list regularly both in reports and in conversation. A concise summary can become a good mnemonic for the defect when you discuss it with other developers.

That said, rewriting defect summaries en masse after the fact has limited value. As with many tasks you could take on when reviving a brownfield project, rewriting summaries can easily become a make-work project. If you plan to refer back to them often in the future, there may be enough long-term value in doing this work. You’ll need to gauge the needs, timelines, and resources available on the project before taking on a piece of work requiring this level of effort.

6.5.2. Clear

Clarity tends to conflict with being concise. In brownfield applications, it’s not uncommon to find defects that have summaries consisting of little more than a statement like “Tab order” or “Save broken.” Although such statements may be concise, they’re far from clear.

Instead of a summary that has limited informational value, try increasing the summary to be a simple statement outlining the scenario. A summary of “Tab order” would be clearer if it read “Edit Customer tab order fails leaving Address Line 1.” When you look at the summary, you understand the situation surrounding the defect. Better yet, it’s a more effective mnemonic for developers who are quickly scanning the defect list on a regular basis.

Furthermore, when you fill out the details for a defect, be sure to remove any ambiguity. Now is not the time to practice your shorthand. This rule applies when you’re entering details on how to reproduce it and when you’re expanding on how you fixed it as well. Entering text like “Clicking Save doesn’t work” isn’t useful if a Save button, menu item, and toolbar icon all appear on the same screen.

6.5.3. Complete

The phrase “The devil is in the details” seems tailor-made for defect reports. The details are the core of any defect. Nothing is more frustrating than opening a defect report and finding either incomplete information or no details at all.

Poor defect details don’t have to make working with defects futile. There’s also no point in assigning blame, because everyone has entered a poorly detailed defect at least once on a project. On a brownfield project, there’s the very real possibility that the person who entered the defect is no longer part of the project team, and the ones who are around possibly can’t remember the details because the defect is so stale. Regardless, defects lacking necessary detail need to be supplemented before they’re resolved.

Probably the single most important piece of information that needs to be included in all defect details is guidance on reproducing the defect. Depending on the complexity of the defect scenario, this guidance may range from something as simple as “Open the Customer Edit form” to a complicated set of steps that outlines the ancillary data that’s required, the steps to first put the system into a specific state, and then the steps required to generate the defect. Regardless of the complexity, we can’t stress enough the value this information provides. Without this information, the people responsible for fixing and retesting the defect will be guessing and not able to do their jobs adequately.

In combination with step-by-step detail for reproducing the defect, it’s also important to include a clear description of the defective behavior observed at the time it was generated. Once someone has gone through the process of reproducing the defect, there must be some way for them to distinguish what it was that triggered the defect’s creation. In some situations, this information may be as simple as “Error dialog is generated.” In others, it could be some incorrect interpretation of the business rule, such as “Selecting Canada as the country doesn’t change the list of states to a list of provinces.”

As a corollary to outlining the incorrect behavior, it’s also useful to include a description of what should have happened when you reproduce those steps. This level of detail isn’t always necessary. (“Application should not crash” doesn’t add much value to a defect.) But whenever there’s room for interpretation, try to remove as much ambiguity as you can. Pointing out how it failed is only half the defect. Describing how it should succeed is the other half.

Enhance the description of the false behavior with whatever additional information is available. Screen shots can be invaluable. If an error was generated, gather the error message details, stack trace, log entries, and anything else that was produced when it was thrown. If entering certain inputs causes incorrect data to be displayed, collect screen shots of both the inputs and the resulting incorrect data. Any information that can be provided to bolster the claim of false behavior makes resolution quicker and clearer.

Once the defect has been resolved, details of the decision or action taken should be entered. Don’t go crazy and start cutting and pasting code into the defect. A brief explanation that says, “Added support for saving to PDF” will let the testers know what to look for when verifying that it’s been fixed. If done correctly, the details can also provide feedback to testers on what other areas of the application may have been affected by the resolution.

Also, if the defect wasn’t acted on, be sure to include an explanation. Preferably you’d include something more substantial than “Didn’t feel like it,” but a resolution like “Discussed with the client and they have a workaround. Was decided that the cost of fixing would be prohibitively high at this point” makes it clear that you’ve taken steps to address the defect.

By the end of a defect’s life, it will have the following components in its detail section:

  • Steps to reproduce the scenario
  • Data and supplemental files needed to reproduce it
  • Description of the incorrect behavior that was observed
  • Description of the correct behavior that was expected
  • Summary of the actions taken when resolving the defect

Although there are other pieces of information that you may want or need to track in defect details for your project, these are the core components. Without them, the efficiency of the team will decline. Their absence will also create a future where older defects are meaningless when reviewed.
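
To make the checklist concrete, here’s one way you might capture those components as a structured record. This is a sketch only; the field names are ours, not those of any specific defect-tracking system, and the sample values echo the tab-order example from section 6.5.2.

```python
from dataclasses import dataclass, field

@dataclass
class DefectEntry:
    """Core components of a complete defect entry (illustrative field names)."""
    summary: str                      # concise, clear one-liner used in reports
    steps_to_reproduce: list[str]     # step-by-step guidance for reproducing the defect
    supporting_data: list[str] = field(default_factory=list)  # data files, screen shots, stack traces
    observed_behavior: str = ""       # what actually happened
    expected_behavior: str = ""       # what should have happened
    resolution: str = ""              # filled in when resolved, or why no action was taken

entry = DefectEntry(
    summary="Edit Customer tab order fails leaving Address Line 1",
    steps_to_reproduce=[
        "Open the Customer Edit form",
        "Place the cursor in Address Line 1",
        "Press Tab",
    ],
    observed_behavior="Focus jumps to the Save button",
    expected_behavior="Focus should move to Address Line 2",
)
```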

6.5.4. Constructive

Carrying on our analysis of a defect entry from figure 6.7, a defect report is useful as more than just a record of bugs and how you solved them. Defects can provide valuable insight long after they’re closed, but only if you can glean that insight from them in bulk. Typically, you’d provide the ability to review defects in bulk by individually classifying them.

Defect classification is nothing more than a way to group or categorize similar defects together. There are a number of ways that you can classify defects. It’s fairly common to classify based on severity (ranging from “minor UI issue” to “crashes the host computer”) and the area in the application that the defect was found.

If you’re looking to implement defect classification, first figure out what information your project needs or would benefit from. Just because other projects classify based on application area doesn’t mean that the same information would be helpful on your project.

Developers on a brownfield project should be interested in classification on past defects and certainly on all new ones. Summarized classification data can be used to gain insight into problem areas of the application, code fragility, code churn, and defect resolution success rates.
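
For example, once defects carry classifications, a few lines of scripting can summarize them. The application areas and severities below are hypothetical values you might export from your tracking system.

```python
from collections import Counter

# (application area, severity) pairs exported from the tracking system; values are made up
classified_defects = [
    ("Customer Edit", "minor UI issue"),
    ("Customer Edit", "data loss"),
    ("Invoicing", "minor UI issue"),
    ("Customer Edit", "minor UI issue"),
]

by_area = Counter(area for area, _ in classified_defects)
for area, count in by_area.most_common():
    print(f"{area}: {count} defects")  # highlights potential problem areas of the application
```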


Recognizing code churn

Code churn is a count of the number of times that lines of code have changed. It can be determined by looking at the number of lines that have changed in an area of code (a method, a class, and so on). It’s also possible to calculate code churn by counting the number of times that an area of code has changed.

The metric will indicate whether an area of code is highly volatile and thus more prone to defects being generated. As with any static analysis metric, numbers that indicate an acceptable threshold are gray and, in this case, depend on the size of the code area that is being observed.

We suggest that you take code churn as an indicator that further and deeper investigation is required rather than as a sign that action absolutely must occur. In some projects, high code churn in certain areas may be acceptable, although in other situations it may not be. Rather than giving you hard values to strive for, we suggest that you go by intuition and not use code churn automatically as a metric to trigger immediate action.
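
If your code lives in Git, one rough way to approximate churn is to sum the changed lines per file across the commit history. This sketch shells out to git log --numstat; treat its output as a conversation starter, not a verdict.

```python
import subprocess
from collections import Counter

def churn_by_file(repo_path: str) -> Counter:
    """Sum added + deleted lines per file across the repository's history."""
    output = subprocess.run(
        ["git", "-C", repo_path, "log", "--numstat", "--format="],
        capture_output=True, text=True, check=True,
    ).stdout
    churn: Counter = Counter()
    for line in output.splitlines():
        parts = line.split("\t")
        # numstat lines look like "added<TAB>deleted<TAB>path"; binary files show "-" and are skipped
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            added, deleted, path = parts
            churn[path] += int(added) + int(deleted)
    return churn

# Example: list the ten most-churned files as candidates for a closer look
for path, lines in churn_by_file(".").most_common(10):
    print(f"{lines:6d}  {path}")
```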


Unlike with defect summaries, back-filling defect classifications can be valuable. The categories and tags you assign to a defect can (and should) be reported against to help guide your effort and mark your progress. We’ll talk more about defect reporting in section 6.7.

6.5.5. Closed

This portion of a defect is an obvious one but worth including in our discussion. Always remember that your goal is to (eventually) close the defect in some manner. Most often, closing a defect is done by fixing it. We’ve talked about other reasons it could be closed as well: it’s a duplicate, it’s incorrect, it isn’t reproducible, and so forth.

Very often in brownfield applications, the goal for many developers isn’t to close the defect so much as it is to “get it off my plate.” A common tactic is to focus on one small point of clarification so that you can reassign it to the testing team. After all, if it’s not assigned to me, it’s not my problem, right?

This is small thinking. In the next section, we’ll talk about using the defect list to instill a culture of quality in your team. Without that culture, defects can constantly be juggled back and forth between the client, the development team, and the QA team without any real work being performed on them.


The deferred defect

Another type of defect that often lingers in a backlog is one that is deferred. A defect may be deferred for many reasons. The work to fix it may not be proportionate to the benefit. You may be waiting on the purchase of a new third-party product that addresses it. It may be a feature in defect’s clothing that’s on the radar for a future version of the application.

Whatever the reason, get these off your defect backlog somehow. With most tracking systems, you can set a defect’s status to deferred or something similar. This type of status assignment is acceptable only if you can easily filter deferred defects out whenever you generate a list of open defects. It’s cumbersome to scan through a list of backlog defects and always have to mentally skip over the deferred ones.

If your tracking system can’t filter easily, move the defect to another medium, even if it’s a spreadsheet called DeferredItems.xls.
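
However your tracking data is stored, the point is simply that deferred items shouldn’t clutter the working list. A trivial sketch, using made-up statuses and summaries:

```python
# Hypothetical export from a tracking system (or the DeferredItems spreadsheet)
defects = [
    {"id": 12, "summary": "Print output is incorrect", "status": "open"},
    {"id": 47, "summary": "Add PDF export", "status": "deferred"},
    {"id": 63, "summary": "Cannot log in on stat holidays", "status": "open"},
]

# The working list the team looks at every day excludes anything deferred
open_defects = [d for d in defects if d["status"] == "open"]
print(f"{len(open_defects)} open defects to address")
```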


The desire to simply rid yourself of assigned defects needs to stop. Any open defect in the system should be considered a point of failure by the entire team. You must make every effort to resolve a defect as quickly (and correctly) as possible. Instead of punting it over the wall to the QA team to find out what they mean by “The status bar is too small,” try a more drastic tactic: talk to them. Get the clarification you need by phone or in person. Don’t let the defect linger so you can beef up your progress report by saying “Addressed 14 defects” when 6 of them were merely reassigned for clarification.

On the surface, a defect entry seems a simple thing. But as we’ve shown here, there’s more to it than you might think at first glance. Defect entries that aren’t concise, clear, complete, and constructive will take longer to move to their ultimate desired state: closed.

Creating a mind-set that values the importance and existence of defects is one of the signs of a mature development team. It takes great humility to publicly accept that you erred. It takes even more maturity to readily act to rectify that error. Now is a great time to look deeper into this kind of team mind-set.

6.6. Instilling a culture of quality

You’re at a unique point in your brownfield development cycle. You’ve consciously recognized that improvement is necessary and (provided you’ve read chapters 1–5) you’ve made some effort to do so. You’ve also started a concerted effort to knock down your defect backlog, and in the coming chapters, you’ll also get deeper into pure development topics.

In some ways, you’re starting anew with your development effort. The pragmatist in you knows that although you’ve been working on your ecosystem, you haven’t stopped all development on the codebase. But nonetheless, you’re standing at a crossroads.

One path is well trod. It’s the one that will lead you down a road you’ve traveled before. You’ll take some detours, but for the most part, it’s the way you’ve always done things and it will inevitably lead you back to this same crossroads.

The other path may be unknown but is fraught with possibility. Unfortunately it’s a much less-traveled path and it’s full of obstacles that will try to get you to veer from it. It’s the path leading you to a culture of quality.

Hmmm...a little more evangelical than we had in mind but the sentiment remains. Rather than continuing on down the path that led you to brownfield territory, you should start thinking, as a team, about your commitment to quality.

No one is suggesting that you’re not committed to quality now. But are you? Can you honestly say you have never compromised in your code just to meet a deadline? Have you not once looked at some questionable piece of code that another developer (or even you) has written and decided, “I can’t deal with this now. I’m going to leave it as is and make a note of it for future reference”? Have you not, after trying several times to get clarification from the client for some feature, just given up and forged ahead based on assumptions, hoping no one would notice?

Of course you have. So have we. You die a little on the inside when you “have” to do these things, but it always seems like the only choice at the time, given the remaining work.

But because you’re embarking on a fairly major undertaking to improve your codebase, make some room in your schedule to ingrain and reinforce your commitment to the quality of your application.


Note

The entire project team, not just the developers, has to be part of the commitment to quality. If the development team members are the only ones unwilling to compromise, they’ll quickly become pariahs in your organization through their incessant nagging.


What we mean by committing to a base level of quality is this:

  • You’ll always do things to the best of your ability.
  • You won’t stand for workarounds or even defects in your code.
  • You’ll actively and continuously seek out better ways of improving your codebase.

Note that this isn’t a ceremonial commitment. It requires dedication from every member of the project team, from developers, to testers, to management, to clients (see figure 6.8). Each one of these people has to commit to quality at a level where any person on the team will gladly help to answer questions on an outstanding defect or feature at any point in time. If this communication breaks down at any level, plan for much slower defect resolution and lower quality overall.

Figure 6.8. Remember that the development team is only one part of the overall project team.

Let’s tie commitment to quality back to our discussion on defects. One key way to initiate your commitment to quality is to work toward a policy of Zero Defect Count, which we’ll talk about next.

6.6.1. Zero Defect Count

Most project teams submit to the idea that there will always be defects. They believe that defects are an ever-present cost of development.

But there’s a problem with this mind-set. If you believe there will always be at least one defect, then what’s the problem with there being two? Or five? Or twenty? Or five hundred? By accepting that defects are inevitable, you also accept that they’ll accumulate and, more importantly, that they can be ignored for periods of time. Every so often, such a project will switch into defect-fixing mode and the team is tasked with working down the open defect list as quickly as they can.

Some teams have reasonable control of this phenomenon. Other teams lack the discipline necessary to minimize the impact of the accepted open defect list. Those projects end up with open defect counts in the hundreds or thousands. Usually they have no solid plan to resolve the issues beyond “We’ll fix them before we release.” Although this type of posturing may placate some people in management, others will see the amount of debt that the project is accumulating on a daily basis. Like financial debt, defect debt is a liability and liabilities have risk associated with them. A high or increasing number of open defects adds to the overall risk that the project will fail.

If open defects are a risk, each one resolved is a reduction in risk. Ideally you’d like to have mitigated all the risk associated with defects. To do that, you must have zero open defects.

Now, when you read “zero open defects,” you’re probably thinking it’s an idealist’s dream. And on projects that have a traditional mentality, where it’s okay to carry open defects, you’d be correct. But if you change that mentality so that carrying open defects isn’t acceptable, it becomes a more realistic possibility.

Zero Defect Count may seem like an impossible goal to set, but it isn’t when the team strives for it on a daily basis. We’ve been in meetings with management where we’ve stated that our teams will work under the one overriding guideline of zero defects. And in those meetings we’ve been laughed at by every attending manager.

Their skepticism is rooted in their belief that this guideline means the team will turn out software with zero defects on the first try. We know releasing software without defects isn’t possible. Achieving zero defects isn’t about writing perfect code the first time. It’s about making quality the foremost concern for the team. If code is released and defects are reported, the team immediately begins to work on resolving those defects. There’s no concept of waiting to fix defects at some later point in time. Defects are the highest-priority work item on any developer’s task list.

Attending to defects immediately doesn’t necessarily mean that all work stops on the spot while the defect is investigated. The defect can wait until a developer has reached a comfortable point to switch their working context. Ideally, the developer can complete their current task entirely before working on a defect. Realistically, they’ll work on that current task to a point where they can confidently check in their code (see section 2.5.1 in chapter 2 about avoiding feature-based check-ins) and then quickly transition to defect resolution. As a result, defects may not be addressed for a few hours and possibly a day, but preferably no longer than that. If defects haven’t been attended to after half a week, the culture of quality on the team needs improvement and reinforcement.

On a brownfield project, a number of preexisting issues must be resolved before Zero Defect Count can be attained and sustained. The first, and probably the most obvious, is the current defect backlog. In section 6.4, you read about techniques and realities of working through a backlog. Until you’ve eliminated the defect backlog, there’s no chance of attaining Zero Defect Count. Once you’ve eliminated the backlog, the project is at a point where it can begin to build a Zero Defect Count culture. Hopefully, by that time you’ve instilled a culture of quality in the team so that the concept isn’t so far-fetched.

Achieving Zero Defect Count

Like the growth of any cultural phenomenon, a culture of Zero Defect Count takes time to develop. Developers are geared toward working on, and completing, new features and tasks. Defect resolution often subconsciously takes a back seat in a developer’s mind. The best way we’ve found to break this habit is to have champions on your side: team members (leads or peers) who are adamant that defects need immediate attention. But team members who proclaim the immediacy of defects and then go off and resolve all the defects themselves are of no use when creating the team culture. The entire team needs to be pushed to collectively resolve defects in a timely manner.

A simple and effective method to create collective urgency is to hold daily standup meetings specifically for defects. These meetings should be quick (unless your project is generating a daily defect count that’s significantly higher than the number of team members) and require nothing more than a brief update by each developer on what defects they’re currently attending to. Our experience shows that meetings of this nature are needed only for a short time and defect reviews can then be rolled into the standard daily standup meeting. The purpose of the specific standup for defects is to reinforce the importance of defects to all team members at the same time.

Note that this technique is most effective with a team that has a relatively strong sense of self-organization. For teams that don’t have that self-organizing instinct, guidance from the top will be in order. Having a single defect supervisor is one approach to solving the problem of a reluctant team. The person in charge is responsible for ensuring that the defects are being addressed at an overall team level or is tasked with assigning defects out to specific individuals and verifying that they were resolved. Both options can work, but it will take a strong person who is well respected by his or her peers to succeed in either. Our experience has shown that it’s this type of team that’s the most difficult to convince and convert. Accomplishing a culture of Zero Defect Count in this situation is going to require time and patience.


Tales from the trenches: Zero Defect Count in action

In section 6.3 we told you about our experiences with a worst-case scenario. We mentioned that we transitioned the project into one that lived by the Zero Defect Count principle. We won’t kid you: this transition wasn’t easy on that brownfield project. The culture of the project, and the corporation itself, was that software development proceeded until it was determined that enough features were available for a production release, which was then followed by a “stabilization period” in which as many defects as possible would be resolved prior to the release. Our desire to change this mentality to one of Zero Defect Count sent ripples through management quickly.

Having seen the problems inherent in the big bang approach to stabilization, we knew there was no other way to turn around the client’s opinion of the team—which wasn’t a very high one given how the defect count grew. Rather than taking the bureaucratically correct steps of proposing the change to Zero Defect Count, we decided a more dictatorial approach was required for such a drastic change in culture for developers and testers. So we stated in our release wrap-up meeting that we were going to move the team to a culture of Zero Defect Count. Management scoffed, developers were stunned, and testers aged almost instantly. Once we explained the premise and the approach we were to take, we at least got buy-in from the testers. Management remained skeptical until we’d shown many consecutive months of success.

The key for success was getting the development team to commit to fixing defects as quickly as they appeared in the tracking system. Naturally, developers would much rather be working on new feature development than defect resolution. So we made it the responsibility of the developer leads to ensure that defects were assigned to developers between each new feature task that they were assigned. Because we had a large team of developers, we never asked a single developer to completely clear the current defect backlog because we needed to ensure, for morale reasons, that everyone on the team was pulling their weight on defect resolution.

In addition to ensuring an equitable distribution of defect work, we denied attempts by developers to pass a defect off to “the person who originally wrote that part of the system.” We prevented defect pass-offs for a couple of reasons. First, collective ownership is a powerful concept when trying to achieve extremely high levels of quality. Second, we wanted to instill a culture where resolving defects was far more important than assigning blame for creating them.

After a few months, we’d changed the culture of the entire development team to one where Zero Defect Count was as important as meeting our deadlines. Developers wouldn’t look for new feature work until they could prove to the developer leads that they’d resolved some defects. Management of the Zero Defect Count policy disappeared and it very much became a culture.

As we said, this transition wasn’t easy. Developers griped about having to fix defects for quite some time. Like any cultural change, time and persistence paid off in the end.


Zero Defect Count is a part of an overall culture. It’s a decree that a team must live and die by. When they do, zero defects will seem the norm and your project will feel much healthier. It’s by no means an easy task to achieve, but the rewards, in terms of both project and psychological success, can be amazing. A team that works in a Zero Defect Count environment has more confidence and a penchant for quality that bleeds into other areas of its work.

One of the obstacles you’ll face in moving to Zero Defect Count (and in moving to a culture of quality in general) is skepticism from the stakeholders, especially if the defect backlog is particularly large. Many of them will have heard this song before and will be tough to convince. In the next section, we’ll see how we can address that.

6.6.2. Easing fears and concerns from stakeholders

Working through the open defects on a brownfield project can be an arduous and lengthy process. Many project members may perceive it as a giant time sink and won’t buy into the process. There will be scoffing at the beginning. Clients may wonder why attention to quality hasn’t happened in the past. Project managers may be concerned it will affect deadlines and morale. A common lingering concern is that after all the effort expended changing the culture and fixing defects, the project will be back in the same situation it’s in now, with a large open defect backlog. Some team members may have even gone through more than one “defect bash” in the past.

Addressing this fatalism starts with a mind-set change. You must appeal to your newly defined commitment to quality. And you must do it every single chance you get. Ignore the skepticism and stay focused on the larger goal. Most projects have either explicit or implied levels of quality for them to be deemed a success. Follow this mantra: a large backlog of open defects is a huge risk to that success. If you can convince yourself and your team that this risk is large and does exist, the rest will eventually come.

Defects should be considered a personal affront to your team—like your code has physically thrown up on your keyboard and the only way to clean it is to resolve it in such a way that it never comes back.

When you treat defects like that, the stakeholders will recognize your commitment to quality. In our experience, clients respond well when you take defects seriously. They get the sense that you’re committed to giving them the highest-quality product you can.

Similarly, project managers are usually quick to see the link between quality as a metric and project success. But you may have trouble convincing your manager to allow defects to take such a high priority in your task list. If so, convince your manager to let you try the process for at least 3 months (preferably 6). Even if there’s no substantial improvement in that time, at least the seeds have been sown that quality is something you’re thinking about. But by the same token, if there’s no substantial improvement in 3 months, you have to consider that maybe a more drastic approach is necessary, such as a change in the project team’s makeup.

Also, be aware that reviewing, triaging, and replicating historical defects carries a high per-defect cost. Resolving a defect long after it was reported takes much more effort than addressing it as soon as it comes in. That means as your backlog drops, and as you become more adept at the defect-resolution process, it becomes much easier to maintain a base level of quality as you go.

With all this talk of how to manage defects, it makes sense to discuss how a tracking system fits into the mix.

6.7. Working with a defect-tracking system

Throughout this chapter, we’ve talked about defect tracking, the anatomy of a good defect, the process to dig a project out from under the weight of a defect backlog, and other things. In all of that, there hasn’t been any time spent talking about the tools that you can use to manage defects. Defect-tracking systems have been mentioned, but not in specific detail.

If you were hoping to get great insight into the one killer defect-tracking system that will bring peace, harmony, and glory to your project, you’re reading the wrong book. We’re not going to discuss the specifics of any defect-tracking systems on the market, partially because we’re still on the hunt for that divine defect tracker ourselves. Instead, our focus is going to be on the tool’s interaction with your team and project.

It seems that every project has some sense that it needs to log and track defects. Unfortunately, defect tracking is often treated as a second-class citizen. It’s a necessary evil in the software development process. We’ve regularly seen projects whose defect-tracking systems were obviously implemented with little or no thought to the purpose they fulfill for the project. They were there because every project needs bug-tracking software.

Most companies would never implement business-critical software without due analysis or careful thought. You know that a by-the-seat-of-your-pants acquisition and implementation plan is rarely successful, yet this approach is still taken with defect-tracking systems.

The results are pretty obvious on a brownfield project. When you ask about certain areas of the software, people respond with statements like “We never use that.” Worse, there are projects where the defect-tracking software has been completely abandoned because the project team couldn’t work with it without imposing significant overhead to the process. If any of those situations apply, it’s fairly obvious that the system was implemented without first knowing what the project team needed or how they worked.

Here’s one example: if the project doesn’t have the concept of a module, don’t impose one onto the team simply because the defect software requires it. It would be a needless classification that would allow for careless or sloppy entries in the system. Additionally, you’ve imposed another step in the process for the defect creator. Instead of seamlessly moving from one required piece of data to another, defect creators have to know whether they should skip this one specific data entry point.


Warning

A well-configured defect-tracking system will work seamlessly in your process and won’t be a source of friction (though its contents might). You can accomplish this type of process integration through well-planned and carefully researched implementation of a tool.


Even after you’ve implemented a defect-tracking system that works well within your process, you may have to do some convincing to get everyone to use it. Brownfield projects may have people who have been worn down by the workflow required in the past. You’ll need to get them on board to make the system work.

If a project team has started to grow any amount of team-based code ownership and quality standards, your argument will be easier. Don’t focus on the fact that the tracking system is where defects are found. Instead, point out how the system can be used to better analyze problem areas within the application. Show your team how reporting from a defect-tracking system can improve their ability to remedy areas of questionable quality.

Implementing a defect-tracking system that collects meaningful and relevant information is only half of the workflow you require. Simply entering data into a system doesn’t make it useful to the team. The use comes from your ability to retrieve that information in a way that adds value to the project team. It’s at this point you should start to think of how to report on that data.


Challenge your assumptions: Do you really need defect tracking?

The common assumption on software projects is that a software-based defect-tracking system is mandatory. Is it really, though? On a brownfield project, you may have no choice but to use a software system for tracking defects. The size of the backlog may dictate some way to supplement the collective memory of the team.

But if you’ve inherited a small defect backlog, or if the team has worked its way out of the defect backlog, perhaps you don’t need a software system. If you’re working in a Zero Defect Count environment and have open lines of communication, maybe email is enough to track defects.

It’s possible to run an entire software development project, with high levels of technical and business-related difficulty, without formal defect-tracking software. We’ve done it successfully in the past. Although arguments can be made that risk, pain, and friction will appear without a system, consider that those arguments may derive from speculation based on experience from past projects. Always determine if the recommendation to drop defect tracking will work for your specific project. Sometimes it will; sometimes it won’t.


6.8. Defect reporting

Everyone who opens a defect-tracking system reads reports filled with defect data. The report may be as simple, and informal, as an Open Defect Listing filtered by your username. Defect-tracking systems, like any system that accepts data input, aren’t overly useful if the data can’t be viewed and aggregated in a meaningful way.

Reporting on defects is much more useful to the development effort than printing out a list of the open defects. On brownfield projects, there will usually be existing data in the defect-tracking system. You may have worked through reviewing and resolving a defect backlog. Now that you have all that information, what can you do with it?

We’ve already mentioned the most common daily report, Open Defects Listing for a User. This list is where developers will go to select work from their work queue. If you’ve been able to instill a strong sense of team code ownership, the report may not even be filtered by the developer’s name. Instead, developers will be running an Open Defects Listing for the entire project and picking items from it to work on.

For brownfield projects, you can glean some important information from your defect list through trending data.

6.8.1. Trending data

When you’re reviewing reports, it’s common to get trapped in the current-point-in-time details of the data. Although looking at the specifics can provide good information to the reader, trending data tends to be more informative on brownfield projects. The importance of trending data is especially high when you’re first starting on a project where there’s historical data.

Trending of defect data isn’t difficult to do provided you have sufficient useful data. Most trend-based reporting that we’ve used is possible with the standard data that most projects collect on defect entries. Two particular pieces of data are worth further discussion: module and version number.

Trending by module

Having defects categorized by module (application area) is a common practice. The module category is a quick and easy way for the person attempting resolution to determine where to start in their work. It also provides a method for the project team to determine which modules need the most immediate attention based on the current number of open defects in each module. But those two uses aren’t the only insights you can gain when trending by module.

When talking about trending based on associated module, we mean observing the explicit change and rate of change over time in the number of defects assigned to a module. You monitor the number of defects in a module at set intervals and note the difference at each interval. You may also want to generate these reports for new defects only, defects from all time, or defects that remain open at the end of the interval.
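For illustration, here’s a minimal sketch of how you might compute those interval counts yourself, assuming your tracking system can export defects to a CSV file. The reported_on and module column names are hypothetical; adjust them to whatever your export actually contains.

# Sketch: count defects per module per month from a hypothetical CSV export.
# Assumes columns "reported_on" (YYYY-MM-DD) and "module"; rename as needed.
import csv
from collections import defaultdict

def defects_per_module_by_month(path):
    counts = defaultdict(lambda: defaultdict(int))  # module -> month -> count
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            month = row["reported_on"][:7]          # e.g., "2009-04"
            counts[row["module"]][month] += 1
    return counts

if __name__ == "__main__":
    counts = defects_per_module_by_month("defects.csv")
    for module, by_month in sorted(counts.items()):
        trend = ", ".join(f"{m}: {n}" for m, n in sorted(by_month.items()))
        print(f"{module:15} {trend}")

The same loop can be restricted to new defects only, or to defects still open at the end of each interval, by filtering the rows before counting.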

Figure 6.9 shows a simple graph of the number of defects per module over time.

Figure 6.9. A sample graph showing number of defects per module over time. Note that the increasing defect count in the A/P module indicates it could use some attention.

In all of these reports, the information you’ll get tells you how the project is changing. A steady increase in the number of defects in a module over time may signal the code is becoming too difficult to change or maintain.

Another piece of trend data that can provide valuable insight is the average number of times that a defect is returned as not resolved. If there’s a trend upward on this value, it may be a sign that the project team is slipping on its commitment to quality and Zero Defect Count. It could also be a sign that the development team isn’t communicating with the client very well.
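If your system records how many times a defect was reopened, a similar sketch can track that average over time. The resolved_on and reopen_count fields below are assumptions about the export format, not features of any particular tool.

# Sketch: average "returned as not resolved" count per month of resolution.
# Assumes hypothetical "resolved_on" and "reopen_count" columns in the export.
import csv
from collections import defaultdict

def average_reopens_by_month(path):
    totals = defaultdict(lambda: [0, 0])            # month -> [reopens, defects]
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if not row["resolved_on"]:
                continue                            # still open; skip it
            month = row["resolved_on"][:7]
            totals[month][0] += int(row["reopen_count"])
            totals[month][1] += 1
    return {m: reopens / n for m, (reopens, n) in sorted(totals.items())}

if __name__ == "__main__":
    for month, avg in average_reopens_by_month("defects.csv").items():
        print(f"{month}: {avg:.2f} reopens per resolved defect")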

Trending reports are rarely included out of the box with defect-tracking applications. In fact, in some cases you’ll have to supplement the system with custom reporting. Other times, you may be just as well served by a gut feel for the trend based on a regular review of a snapshot report.

In the end, trending of defect data doesn’t come with a hard-and-fast set of guidelines of the actions you should take. This information should be used to support decision making, but not to drive it. Just because there was an increase in new defects logged on a certain module, it doesn’t necessarily mean that the module requires significant rework. It may just be an indicator that all developers were actively working in that area of the application during that time.

The final use of reporting in defect-tracking systems that we’re going to cover concerns software releases.

6.8.2. Release notes

Depending on the project, there may be a need to provide information about the changes to the software when it’s released. Commonly known as release notes, these can be valuable at the start of a brownfield project to show progress to your client and provide a starting point for your testing team.

One of the main sections in your release notes should be a list of resolved defects. If you have to generate this list by hand, it can be a hassle. So it makes sense to leverage the defect-tracking system to create a list of these resolved issues.

Unfortunately, this effort almost always adds some form of overhead to your process. To generate a report of resolved defects for a release, you must link defects to a release number. In most cases, you’ll need to add another field to the defect report that the resolver must enter when updating it.

Although this change may seem trivial, each additional data entry point increases the time spent on the defect. People will forget to enter the value, or they’ll enter the wrong value. It may not happen often, but it will happen. Usually, the best you can hope for is that it doesn’t happen often enough to warrant any special action. Depending on the capabilities of your tracking system, try to make it as easy as possible for users to enter this information correctly. If a few mistakes slip by, it’s not the end of the world.
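Once that release field is being populated, a small script can assemble the resolved-defects section of the release notes from an export of the tracking data. The sketch below assumes a CSV export with hypothetical id, title, status, and fixed_in_release columns; substitute the fields your own tool provides.

# Sketch: generate the "resolved defects" section of release notes from a
# defect-tracker CSV export. Column names here are hypothetical.
import csv

def resolved_defects_for_release(path, release):
    with open(path, newline="") as f:
        return [row for row in csv.DictReader(f)
                if row["fixed_in_release"] == release
                and row["status"].lower() == "resolved"]

def format_release_notes(defects, release):
    lines = [f"Release {release} - resolved defects", ""]
    lines += [f"  * #{d['id']}: {d['title']}" for d in defects]
    return "\n".join(lines)

if __name__ == "__main__":
    defects = resolved_defects_for_release("defects.csv", "5.1")
    print(format_release_notes(defects, "5.1"))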

Because this type of reporting requires data entry on each defect, we don’t recommend that you try to backfill the existing entries in the defect-tracking system. First, it would be a tedious and monotonous job that we wouldn’t wish on even the most annoying intern. Second, you won’t know what value to put in for any historical defect without doing a lot of research. That time is better spent elsewhere.

In any case, historical release notes provide little value. The releases have already been sent out to the client. There should be no desire to resend the release just so that it includes release notes. Most likely the release notes for past releases will never be created, and certainly they’ll never be used by the client. If you need, or want, to create release notes on your brownfield project, do it for future releases.

There you have it. Reporting on your defect-tracking system can provide some benefits to your project. If you’re a developer looking to improve code, don’t forget to trend the defects by module and release. When you have the opportunity, use your defect-tracking system to generate information that will be sent to the client. Traceability back to your system when a client is asking questions is a great asset in convincing the client that you’re on top of the situation.

6.9. Summary

Defect tracking is an oft-maligned part of projects. Management demands it as a necessity. Systems are installed and turned on without any thought to how they integrate with the project team or what process they’re using. Everyone complains that entering defects sucks and then they proceed to perpetuate the problem by creating entries that are either incorrect or incomplete. The development team despises it because they end up having to do detailed research to supplement the entry just so that they can begin resolving the issue. Once defects are resolved, they’re forgotten as relics of the past.

We’ve seen all of this occur on projects and none of it has to. With some thought and analysis, defect-tracking systems can be configured to work effectively for a project and still provide rich information to the project team, in both the present and the future. Analysis shouldn’t be limited to the configuration of an already chosen system. The project team should take the time to determine if the project’s practices and procedures even require a defect-tracking system. Sometimes nothing may be needed. Don’t fight it. If it works, let it happen.

If a brownfield project does have a defect-tracking system, be wary of the backlog of open defects. Be wary in the sense that you can’t avoid them and that the team will be addressing them sooner or later. Because backlog defects are inherently stale, working the backlog sooner will be your best option.

Work that backlog until it’s completely cleared and then maintain a policy of Zero Defect Count. Succeeding with that policy will, at first, take effort both to work the incoming defects and to put the codebase into a state of higher quality. Work to make Zero Defect Count a cultural force on the project. If quality is naturally high because of the way the team works, it’s one less thing that needs to be managed.

No matter the effort put into the quality of a system, defects will occur. Having the team address defects immediately will make their resolution much easier. Some of those defects may be feature requests masquerading as defects. This type of feature/defect isn’t a problem per se, but it can affect the scheduling of work on the project. If you catch feature requests that appear in this manner, don’t turn them down outright. Instead, put the power in the hands of the client. Have them prioritize the feature request against all the other outstanding work and defects. Not only will this type of prioritization ensure the requests aren’t lost, but at the same time the instigator or client will feel that their request has been attended to.

Brownfield applications that have existing defect-tracking systems can be a trove of analytical data. Don’t forget to use it to track information and trends in the past development effort. It will guide you to places that may need attention to prevent future catastrophic meltdowns.

Don’t treat defect tracking on a project as a second-class citizen. Embrace it as an area to improve, like so many other things on a brownfield project. It will take time, but the end goal of an efficient and effective project can’t be achieved without finding and resolving the friction points in the defect-tracking system.

Thus ends our journey into your project’s ecosystem. Starting with the next chapter, we’ll look at your application’s code, beginning with a topic near and dear to our hearts: object-oriented fundamentals.
