Chapter 13
Maintainability

Having sporadically slept overnight on the harrowing bus ride from Cusco, Peru, I arrived in Puno on the western coast of Lake Titicaca at daybreak. Yet another stolid bus station, weary children asleep on their parents, the wide-eyed vigilance of some guarding their luggage, the apathy of others content to sleep blissfully on the cold, cement floor.

Still half asleep, massaging the seemingly permanent creases in my arms from having bear-hugged my backpack all night long, I stumbled through the bus terminal, physically exhausted despite the night's fitful sleep. My legs ached, but I couldn't tell if that was due to my cramped sleeping quarters or the jaunt up Huayna Picchu two days earlier, consummating a four-day trek across the Inca Trail. I'd had porters then—what I wouldn't have given to have had one of those porters carrying my rucksack in Puno.

I was in search of relaxation, but somehow the throng of overnighters strewn about just made me wearier. In 14 hours I had a date with a different bus for yet another overnight adventure down the perilous mountainside to the Spanish colonial city of Arequipa. In other words, given 14 hours to live, how would you want to spend it?

The Inca Lake tour company stood nearby, and a subtle glance toward the orchidaceous display was enough to summon shouts from the attendant. Within minutes, I was in a cab headed to the dock to board a water taxi for a day of exploring Lake Titicaca.

Stretched out on the hull, having been lulled to sleep within seconds of departure, I awoke to shadows and shouts as tourists clambered over me snapping pictures. Ah, there must be something to look at, I thought, as I craned to see we'd arrived at the island of Uros.

Stepping out of the boat, I immediately sank several inches, as though wading through biscuit dough. Everything in sight was constructed entirely of totora reeds. Their small sailboats: reeds, bound with reeds, having reed masts. Their thatched houses: reed floors, walls, and roofs. Even the communal island kitchen: reeds surrounding a stone fireplace where the standard fare of tortilla, beans, and rice was cooking. And thank God, because I was starving!

Renowned for constructing floating reed islands, the Uru people have inhabited the lake for centuries, once using the islands as defensive bastions. The Uru haven't lived on the islands for decades—despite what Wikipedia may tell you—but now reside in Puno and neighboring pueblos. Notwithstanding, their ability to maintain their culture for centuries is astounding.

The Uros islands are impeccably and constantly maintained. They are several feet thick, and like icebergs, the majority of the island and reeds lie unseen beneath the water, slowly disintegrating and continually being absorbed into the lake. The Uru combat this slow death by adding layers of new reeds to the top of the island every few weeks, ensuring their villages remain afloat.

Centuries of honing island maintenance have instilled precision in the Uru, and every aspect of island maintenance is planned and choreographed. For example, rather than hastily adding water-logged reeds to fortify islands, the reeds are dried for weeks, allowing them to harden and become moisture-resistant.

After they are dried, small bushels of reeds weighing 10 to 15 pounds are tightly bound together before being added to the top layer of the island. This modular practice yields smaller bundles that are more resistant to moisture, can bear more weight, and form strong, cohesive units.

As I finished my Titicaca tacos—somehow, I was the only one to have cajoled lunch from the Uru—the guide concluded his scintillating rendition of island life and preservation. Taking a fresh bundle, he carefully wove it into the top layer of the island; then several Uru jumped to their feet and, with clog-like movements, danced the new patch into place, demonstrating its seamless integration into the island fabric.


The Uru's adherence to maintainability principles has driven their continued success and ability to preserve their culture and islands. But the first step toward maintainability is the realization that maintenance is inevitable and the subsequent embracing of and planning for that inevitability.

Within the software development life cycle (SDLC), maintenance can be perceived as an onus but is required to ensure software is updated, receives patches, and continues to meet customer needs and requirements. Maintenance always entails effort and, like grooming the Uros reed islands, is often both necessary and continual. If you don't maintain your software, it will eventually sink into oblivion.

Like everything else in life, maintenance should be about working smarter, not harder. By planning for maintenance activities and by being proactive rather than reactive, the Uru forecast when they will need additional reed bundles and harvest and dry them accordingly. Software maintenance should similarly espouse maintainability principles during planning and design—long before software actually requires maintenance.

But maintenance also can be emergent—just ask the Uru after a storm has swept through the region. In software development, emergency and other corrective maintenance will also be required, but it, too, can be facilitated by modular software design, readability, and other maintainability principles.

Modular design tremendously benefits maintenance by limiting and focusing the scope of work when modifications must be made. The Uru not only bind reed bundles into manageable modules, but also ensure these integrate completely with surrounding reeds to produce a more robust veneer against weight from above and moisture from below. The final step in software maintenance is also integration and should incorporate testing to validate maintenance has met and will continue to meet its objectives.

DEFINING MAINTAINABILITY

Maintainability is “the degree to which the software product can be modified.”1 The International Organization for Standardization (ISO) goes on to specify that maintenance includes changes that correct errors as well as those that facilitate additional or improved performance or function. Moreover, maintainability refers to maintenance not only of code but also of the technical specifications in software requirements. Thus, the ISO definition of maintainability represents a far greater breadth than some definitions that speak only to the ease with which software can be repaired, or repairability.

Like other static performance attributes, maintainability doesn't directly affect software function or performance, but rather enables developers to do so through efficient understanding and modification of software and requirements. For example, maintainability can indirectly improve the mean time to recovery (MTTR) of failed software, by enabling developers to more quickly detect deviations and deploy a solution, thus increasing both recoverability and reliability. Especially in software intended to have an enduring lifespan, maintainability principles should be espoused to quietly instill quality within the code, benefiting developers who maintain the software in the future. Thus, maintainability can be viewed as an investment in the enduring use of software.

Because maintainability facilitates maintenance, this chapter first describes and distinguishes software maintenance types, patterns, and best practices. Failure to maintain software proactively can lead to unhealthy operational environments in which SAS practitioners are constantly “putting out fires” rather than proactively designing stable, enduring software products. The second half of the chapter demonstrates the shift from maintenance-focused operation to maintainability-focused design and development that facilitate more effective and efficient maintenance.

MAINTENANCE

Since maintainability represents the ability to maintain and modify software and software requirements, an introduction to software maintenance is beneficial. Without maintenance, maintainability is unnecessary and valueless. And, with poor maintenance or a poor maintenance plan that is defensive and reactionary, maintainability principles will never be prioritized into software design because developers will be focused on fixing broken software rather than building better software.

Software maintenance is “the totality of activities required to provide cost-effective support to a software system…including software modification, training, and operating a help desk.”2 ISO goes on to define several categories of software maintenance, including:

  • Adaptive maintenance—“The modification of a software product, performed after delivery, to keep a software product usable in a changed or changing environment.”
  • Corrective maintenance—“The reactive modification of a software product performed after delivery to correct discovered problems.”
  • Emergency maintenance—A subset of corrective maintenance, “an unscheduled modification performed to temporarily keep a system operational pending corrective maintenance.”
  • Perfective maintenance—“The modification of a software product after delivery to provide enhancements for users, improvement of program documentation, or recoding to improve software performance, maintainability, or other software attributes.”
  • Preventative maintenance—“The modification of a software product after delivery to detect and correct latent faults in the software product before they become operational faults.”

While software repair represents a component of maintenance, software can be modified for several other reasons. Modifications can be made to correct existing problems, prevent new failures, or improve software through additional functionality or performance. Thus, repairability, a software quality attribute sometimes erroneously used interchangeably with maintainability, represents the ability of software to be repaired for corrective, preventative, or perfective maintenance only, but says nothing about adaptability to a changing environment or shifting software requirements.

Corrective Maintenance

Corrective maintenance, including emergency maintenance, addresses software failures in function or performance. Functional failures can be as insignificant as SAS reports that are printed in the wrong font or as deleterious as reports that deliver spurious data due to failed data models. Performance failures can similarly span from processes that complete just seconds too slowly to processes that exceed time thresholds by hours or catastrophically monopolize system resources. That is to say that some failures can be overlooked for a period of time, perhaps indefinitely, while others require immediate resolution.

Where software failure occurs due to previously unidentified vulnerabilities or errors, the failure also denotes discovery, at which point developers must assess how, when, and whether to correct, mitigate, or accept those vulnerabilities and risk of future failure. When the failure comes as a total surprise to developers, corrective maintenance is the only option. Thus, it is always better to have invested the time, effort, and creativity into imagining, investigating, documenting, and mitigating potential sources of failure than to be caught off guard by unimagined failures and forced into a defensive posture.

Since software failure—unlike the end-of-life failures of a tangled Yo-Yo or twisted Slinky—doesn't represent a destroyed product, software can often be restarted and immediately resume functioning. This occurs in SAS when a program uses too much memory, stalls, and needs to be restarted. In many cases the software will execute correctly the second time, perhaps in this example if other SAS programs are first terminated to free more memory. This quick fix may be sufficient to restore functionality, but doesn't eliminate the underlying vulnerability or code defects that enabled the failure to occur.

All software inherently has risk, and stakeholders must determine an acceptable level of risk. As defects or errors are discovered, risk inherently increases. That risk can be accepted or software can be modified to reduce or eliminate the risk. Preventative maintenance is always favored over corrective maintenance because it proactively reduces or eliminates risk before business value has been lost.

Preventative Maintenance

Preventative maintenance aims to control risk by facilitating software reliability and performance through the elimination of known defects or errors before business value is adversely affected. However, if failure has already occurred, although the maintenance activities may be identical, they are corrective, not preventative. Under ideal circumstances, theorized failures are predicted, their sources investigated and documented, and underlying defects corrected before they can cause operational failure.

Preventative maintenance that predicts and averts failure is always preferred to corrective action that responds to a failed event. It allows developers to fully utilize the SDLC to plan, design, develop, test, and implement solutions that are more likely to be stable, enduring, and require minimal future maintenance. This contrasts starkly with emergency maintenance, which often must be implemented in waves—a quick fix that patches broken software, followed by phased, longer-term solutions. The ability to plan and act while not under the duress of failed software and looming deadlines is always a benefit, as is the ability to prioritize and schedule when preventative maintenance should occur.

The unfortunate reality of preventative maintenance is that stakeholders tend to deprioritize and delay maintenance that would correct theorized, unrealized failures that have never occurred. Effort that should be directed toward preventative maintenance is either reinvested in the SDLC to add more software functionality or, especially in reactionary environments, is channeled toward emergency maintenance activities to address broken software. However, if development teams don't proactively interrogate and eliminate vulnerabilities from software, they can create and succumb to an environment in which they are indefinitely defensively poised under insurmountable technical debt. “Technical Debt,” described in the corresponding section later in the chapter, can be eliminated only through consistent corrective and preventative maintenance.

Adaptive Maintenance

Inherent in data analytic development is variability that must be counterbalanced with software flexibility. Variability can include changes in the data source, type, format, quantity, quality, and velocity that developers may have been unable to predict or prevent. Variability can also include shifting needs of the customer and resultant modifications to technical requirements. Variability is the primary reason why SAS data analytic software can require continuous maintenance even once in production and despite adherence to the SDLC and software development methodology best practices. In some environments in which data injects, needs, or requirements are constantly in flux out of necessity, the line between adaptive maintenance and future development activities may blur. While an acceptable paradigm in end-user development, this represents a significant departure from development of software products that are expected to be more robust, stable, and support a third-party user base with minimal modification.

For example, if a SAS extract-transform-load (ETL) infrastructure relies on the daily ingestion of a transactional third-party data set, and the format of the data set changes unexpectedly overnight, developers may need to scramble the next morning to rewrite code. Although espousing maintainability principles such as modularity and flexibility can reduce this workload, it doesn't eliminate the adaptive maintenance that must occur to accommodate the new format. In a best-case scenario, developers would have predicted this source of data variability and developed quality controls to detect the exception and prevent unhandled failure, possibly by halting dependent processes and notifying stakeholders. However, because the data environment has changed, developers must immediately restore functionality through adaptive and emergency maintenance.
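Such a quality control might be sketched as follows, assuming a daily feed in work.daily_feed and a saved baseline of its expected structure in perm.expected_columns (both names illustrative): the incoming data set's variable attributes are compared against the baseline, and any discrepancy halts dependent processing.

```sas
* Hypothetical quality control: compare the structure of today's feed      ;
* (work.daily_feed) against a saved baseline (perm.expected_columns)       ;
* before dependent ETL processes run; data set names are illustrative only ;
proc contents data=work.daily_feed noprint
   out=work.actual_columns (keep=name type length);
run;

proc sort data=work.actual_columns; by name; run;
proc sort data=perm.expected_columns out=work.expected_columns; by name; run;

* any difference in variable name, type, or length signals a format change ;
proc compare base=work.expected_columns compare=work.actual_columns noprint;
run;

%macro check_schema;
   %if &sysinfo ne 0 %then %do;   * SYSINFO is nonzero when differences exist ;
      %put ERROR: Feed structure has changed--halting dependent processes.;
      %abort cancel;
   %end;
%mend;
%check_schema;
```

The %ABORT CANCEL statement is one way to halt a batch process; in practice the exception handler might instead route notification to stakeholders while downstream jobs are suspended.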

Adaptive maintenance is a real necessity, especially in data analytic development, but because SAS software must often be continually modified to flex to data and other environmental changes, SAS practitioners can learn to espouse the unhealthy habit of never finalizing production software. The importance of maintaining static code that flexibly responds to its environment is the focus of chapter 17, “Stability,” while chapter 12, “Automation,” demonstrates the importance of automating production software, which can only be accomplished through some degree of code stability. In combining these principles, the intent is to deliver mature software that reliably runs and does not require constant supervision or modification.

Requirements Maintenance

While the heart of software maintenance is code maintenance, requirements maintenance is a critical yet sometimes overlooked component. As the needs of a software product change over its lifetime, its technical requirements may be modified by customers or other stakeholders. Thus, the aggregate concept of maintenance includes both code and requirements maintenance and acts to balance actual software performance with required performance through modification of both software and its requirements. Because the majority of maintenance-related activities will entail modifying software to meet established technical specifications, only maintainability principles that support code modification are discussed in this text.

In many cases, requirements—whether stated or implied—are modified in lieu of software modifications. For example, as the volume of data from a data source grows over time, performance failures may begin to occur as the software execution time begins to surpass performance thresholds specified in technical requirements. Either SAS practitioners can implement programmatic or nonprogrammatic solutions that increase execution speed and realign actual performance with expected performance, or stakeholders can accept the risk of increased execution time and do nothing about the slowing software. The latter choice is often an acceptable solution, but it effectively modifies requirements because stakeholders are acknowledging they are willing to accept a lower level of software performance given the increased data volume.
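The performance-threshold scenario can be illustrated with a minimal sketch, assuming a 300-second requirement (a hypothetical value) and hypothetical macro names; the macro records elapsed run time and warns when actual performance drifts past the requirement.

```sas
* Illustrative performance monitor: record elapsed run time and warn when  ;
* it exceeds a requirements threshold (300 seconds is an assumed value)    ;
%macro run_with_threshold(threshold=300);
   %local start elapsed;
   %let start = %sysfunc(datetime());

   /* ... production ETL or analytic steps execute here ... */

   %let elapsed = %sysevalf(%sysfunc(datetime()) - &start);
   %if %sysevalf(&elapsed > &threshold) %then
      %put WARNING: Run time of &elapsed seconds exceeds the &threshold-second requirement.;
   %else
      %put NOTE: Run time of &elapsed seconds is within requirements.;
%mend;

%run_with_threshold(threshold=300);
```

Logging run times over weeks or months also gives stakeholders the trend data they need to decide whether to remediate the software or formally relax the requirement.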

MAINTENANCE IN THE SDLC

Within the SDLC, software is planned, designed, developed, tested, validated, and launched into production. When development has completed, the software product should meet functional and performance requirements for its intended lifetime or until a scheduled update. In Agile development environments, while the expected lifespan of software may be several years, software is likely to be updated at the conclusion of each iteration, which may last only a couple of weeks. In Waterfall environments, on the other hand, software is updated infrequently at scheduled software releases or when an emergency patch is required. Regardless of the software development methodology espoused, maintenance will always be an expected onus of the development process and will typically continue throughout software operation.

At the core of maintenance activities lie software modifications that correct defects, improve functionality or performance, and flex to the environment. Maintenance activities can be prioritized by production schedule milestones, internal detection of defects, external detection of failures, or feedback by users and other stakeholders. Thus, while the maintenance phase of the SDLC is often depicted as a discrete phase, maintenance can compete fiercely for resources with other phases, most notably development. For example, SAS practitioners may need to choose between delivering additional functionality or correcting existing software defects—competing objectives that each require development resources.

The SDLC adapts to a broad range of software development methodologies and practices, from phase-gate Waterfall to Agile rapid development. It moreover adapts to traditional applications development environments in which customers, developers, testers, and users maintain discrete roles, as well as to end-user development environments in which developers are also the principal users. The following sections highlight the importance of software maintenance within the SDLC and across these diverse environments. The “Software Development Methodologies” section in chapter 1, “Introduction,” introduces and distinguishes Agile and Waterfall development methodologies while end-user development is described in the “End-User Development Maintenance” section.

Maintenance in Waterfall Environments

Waterfall software development environments typically cycle through the SDLC only once. Where software patches, updates, or upgrades are required, these are released in the maintenance phase while users operate the software in the operational phase, with both phases occurring simultaneously. However, even software patches—which typically represent rapid responses to overcome software defects, errors, vulnerabilities, or threats—require planning, design, development, and testing activity to ensure the patches adequately resolve the issues. A mini-SDLC is sometimes conceptualized as occurring within the maintenance phase alone, as developers strive to maintain the quality and relevance of software but may have to return to the drawing board to design, develop, and test a solution.

In a Waterfall development environment, software maintenance commences after software has been released into production. Because Waterfall-developed software is by definition delivered as a single product (rather than incrementally as in Agile), all required functionality and performance should be included in the original release. At times, however, it becomes necessary to release software despite it lacking functional elements or performance that can be delivered through subsequent upgrades. In some cases, latent defects are known to exist in software but are included in the initial software release with the intention of correcting them in future software patches or upgrades. But in all cases of Waterfall development, the vast majority of development activities occur in the development phase, with only minor development occurring during maintenance.

Therefore, in Waterfall environments, maintenance activities are less likely to be in direct competition with development activities for personnel resources. Because development, testing, and deployment of software have already concluded, developers don't have to make the difficult choice between developing more functionality in a software product or maintaining that product. Often in Waterfall environments, the developers who maintain software—commonly referred to as the operations and maintenance (O&M) team—are not necessarily the authors of the software, which can further prevent conflict and competition of personnel resources. This contrasts sharply with Agile environments, discussed in the next section, in which stakeholders often must prioritize maintenance activities against development activities. Notwithstanding this complexity in Agile development, a wealth of other benefits drives the overwhelming appeal of Agile to software development teams.

Maintenance in Agile Environments

Agile methodologies differ from Waterfall in that they are iterative and incremental, releasing updated chunks of software on a scheduled, periodic basis rather than as a single product. For example, users might receive upgraded software releases every two weeks or every month, depending on the length of the development cycle, a time-boxed period often termed an iteration or sprint. Because the entire SDLC occurs within each iteration, despite being released with a higher frequency, software releases should evince commensurate quality delivered through design, development, testing, and validation activities.

Within Agile development environments, a single release of software at the close of an iteration doesn't signal project or product completion, because a subsequent iteration is quick on the heels of the first. Thus, as developers are delivering the first release of software, they may already be preparing for the next release in another two weeks. As unexpected vulnerabilities or defects that require maintenance are discovered, developers must balance this added maintenance with their primary development responsibilities. Users will want their software to function correctly, customers will want software delivered on time, and developers can get caught in the middle if all stakeholders don't adopt a uniform valuation and prioritization of software maintenance with respect to software development activities.

Because maintenance and development activities compete directly for personnel resources in Agile development environments, developers are sometimes encouraged to minimize maintenance activities. Maintenance reduction in some environments occurs because maintenance activities do not get prioritized, so technical debt builds and software grows increasingly faulty over time until it is worthless. In more proactive environments, however, maintenance reduction occurs because higher-quality software is developed that requires less maintenance. Espousing software maintainability principles during design and development facilitates software that requires less extensive maintenance and, therefore, software that more quickly frees developers to perform their primary responsibility—software development.

Applications Development Maintenance

In a traditional software development environment, software developers design and develop software. They often bear additional software testing responsibilities and may work in concert with a dedicated quality assurance team of software testers. Once software is developed, tested, and validated, it's deployed to third-party users, who operate the software while an O&M team may separately maintain the software. In some cases, the customer is equivalent to the user base and has sponsored and paid for the software to be developed. In other development models, the customer directs software development for a separate group of users. In nearly all cases, however, the developers of the software product are not its primary intended users.

Whether operating in a Waterfall or Agile environment, maintenance can be complicated in applications development because developers do not actually use the software. This is sometimes referred to as the domain knowledge divide, in which software is designed to be used within a certain domain, such as medicine or pharmaceuticals, that requires specific knowledge in that field. Formalized education, training, licensure, or certification additionally may be required to certify practitioners as domain experts. For example, medical doctors must pursue years of schooling, including residency and board certifications, to gain domain expertise, as must practitioners in many other fields.

However, software developers authoring medical software are unlikely to be physicians themselves. A few may be, and while most will have gained some medical domain knowledge while working on related software development projects, a significant domain knowledge divide will exist between these software developers and software users (physicians or medical practitioners). With the exception of a few ubiquitous applications like Gmail, which is no doubt also utilized by the Google developers who develop it, the majority of software developers never actually use the software they write.

This context is critical to understanding and implementing software maintenance in traditional applications development environments. User stories are often written (by developers or customers, but from the perspective of users) to identify deficits in function or performance, in an attempt to place developers in the shoes of software users. In these environments, business analysts often act as the critical go-between, ensuring that business needs are appropriately transferred to developers and that software products meet the needs of their intended user base. Business analysts or a similar cadre are often essential translators because they possess sufficient technical expertise and domain knowledge to communicate in both of these worlds.

Although SAS data analytic development rarely describes software intended to be sold or distributed to external, third-party users, an applications development mentality (or team organization) can exist in which SAS practitioners write software to be used internally by separate analysts or consumers. Especially in situations where analysts lack the technical expertise or user permissions to modify the SAS software that they utilize, formalized maintenance planning and prioritization become paramount and should be in place to ensure software users don't adopt a learned helplessness mentality as they languish waiting for software updates.

End-User Development Maintenance

End-user development is a common paradigm in data analytic development and arguably the development environment in which the vast majority of all SAS software is written. This occurs because SAS software is developed not to be sold or distributed but rather to process data, answer analytic questions, and impart knowledge to internal customers. The resultant data products are often distributed outside the original team, environment, and organization, but the software itself is typically not distributed. I have written thousands of SAS programs that have been executed once and subsequently discarded, not because they failed, but because they succeeded in conveying information necessary for my team to produce some data product that had ultimate business value for my customer.

End-user development effectively eliminates the domain knowledge divide discussed in the “Applications Development Maintenance” section. When a technically savvy doctor is developing SAS code to support medical research or other medically related functions, no interpretation between developers and users is required because they are one and the same. The doctor has domain knowledge (in medicine), understands the overarching need for the software, can design and develop the software solution in SAS, and ultimately understands when all product requirements have been met. This isn't to suggest that end-user development occurs in isolated silos of individuals working alone, but rather that these professionals are comfortable with technology and qualified to function as software developers, testers, and users.

Maintenance in end-user development environments in theory is straightforward. As users, SAS practitioners understand which functionality and performance of the software is most valuable, and thus can prioritize maintenance that corrects or perfects those areas. No user stories, business analysts, or other intermediaries are required to facilitate maintenance, because a user can often in seconds open the code and begin correcting deficiencies or adding functionality or performance. Straightforward in theory, yes—but often abysmal in practice.

When an end-user developer completes software, his priority typically turns immediately to using that software for some intended purpose. In SAS development, this can entail data analysis, creation of data products, and often subsequent code development empirically driven by results from the original code. When defects are discovered in software, their maintenance competes against analytic and other endeavors, typically future functionality that is envisioned or subsequent data products to be created. Defects are often deprioritized because savvy SAS practitioners can learn to overcome—while not correcting—these vulnerabilities.

For example, a coworker once approached me and tactfully mentioned that a macro I had developed would fail if a parameter was omitted. My flippant response was something akin to “Do you want me to write you a sticky note so you don't forget the parameter?” In many end-user development environments, because the cohort using our software is small and intimately familiar with it, we expect users to overcome software threats and vulnerabilities through intelligence and avoidance rather than through robust software design and apt documentation. This attitude can be effective in many environments and can reduce unnecessary maintenance, but it does increase software operational risk.
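A more robust alternative to the sticky note is to validate the parameter inside the macro itself and terminate gracefully when it is omitted. The following is a minimal sketch only; the macro name, parameter, and PROC PRINT body are hypothetical illustrations, not the macro from the original anecdote:

%macro report(dsn=);
   %* terminate gracefully if the required DSN parameter is omitted;
   %if %length(&dsn)=0 %then %do;
      %put ERROR: The DSN parameter is required but was omitted.;
      %return;
      %end;
   proc print data=&dsn;
   run;
%mend;

A single %LENGTH test cannot catch every misuse, but it converts a cryptic downstream failure into an explicit, self-documenting message in the log.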

End-user developers also often have no third-party development team to which they can report software failures because they themselves are responsible for maintenance activities. For example, if fixing a glaring software vulnerability—especially one that I think I'm intelligent enough to avoid when executing the software—means that my data playtime will be curtailed, I may choose to forgo documentation and maintenance in favor of more interesting tasks. Especially in data analytic development environments, because business value is conveyed through data products, end-user developers (and their customers) are often more concerned with resultant data products than perfecting software through maintenance.

Thus, in end-user development environments, an unfortunate paradigm often persists in which only the most egregious defects are ever resolved. Because users intimately understand the underlying code, they are often able to avoid accidental misuse that would result in failure were others to attempt to use the software. As a result, vulnerabilities are often identified but not mitigated or eliminated in any way. Espouse this practice within an organization and allow it to flourish over the lifespan of software products, and you start to understand why end-user development often gets a bad rap in software development literature.

In many cases, maintenance that would have improved software quality is instead traded for additional functionality. For example, SAS practitioners may be able to produce an extra data product or analytic report rather than performing corrective maintenance to eliminate vulnerabilities. If greater business value is conferred through this functionality, then this is often a wise choice. However, over time, if additional functionality, earlier software release, or lower software costs are repeatedly prioritized above maintenance activities, then the quality of software can diminish, failing to meet not only performance but also functional intent. To remedy this imbalance, quality must encompass both functional and performance aspects, and maintenance activities should be valuated to facilitate their inclusion in the SDLC.

Because end-user development environments are both more permissive and forgiving, SAS practitioners can often get away with development shortcuts and diminished maintenance without increased risk of software failure or reduced business value. However, to navigate this funambulism successfully, two principles are required. First, the decision not to maintain some aspect of software should be intentional and made in consideration of the risks and benefits, just as the original decision not to include software performance should be an intentional one. Second, consensus among stakeholders should drive the valuation of maintenance as well as its opportunity cost—that is, other activities (or profits or happy hours) that could be achieved if maintenance is not performed. With this assessment and prioritization, end-user developers will be well-placed to understand how and when to best maintain software.

A basic tenet of the SDLC requires that production software is used but never modified in situ. Thus, maintenance occurs on copies of software in a development environment wholly separate and distinct from the production software operated by users or run as automated batch routines. In reality, end-user development environments often combine these discrete ideals into a single development–test–production environment. While the physical commingling of SDLC environments does not portend doom and is often a necessity in many organizations, an unfortunate consequence is the commingling of SDLC phases that results when so-called production software is continually modified. Especially when software is modified without subsequent testing and validation, and when that software is immediately made available to users, this cowboy coding approach diminishes the integrity of the software and tremendously reduces the effectiveness of any software testing or validation that predated later maintenance. Thus, regardless of the structure of the development environment, segregation of development, test, and production phases can greatly improve the quality of end-user development and maintenance.

FAILURE TO MAINTAIN

Developing software with maintainability in mind does not mean immediately fixing every identified defect or error. Some defects should remain in place because they represent minor vulnerabilities and the effort to fix them could be better invested elsewhere. As demonstrated previously, end-user developers may also be able to produce software with more latent defects because they can intelligently navigate around them, as they themselves operate the software. Notwithstanding the development environment and methodology espoused, technical debt can grow insurmountably if SAS practitioners fail to perform maintenance activities, causing an imbalance between expected and actual software performance.

Technical Debt

Technical debt describes the accumulation of unresolved software defects over time. Some defects represent minor errors in code and relatively insignificant vulnerabilities to software performance while others can pose a much greater risk. Regardless of the size or risk of individual defects, their accumulation and compounding effects can cripple software over time. Technical debt, like financial debt, also incurs interest. As the volume of defects grows, software function and performance typically slip farther away from established needs and requirements, leading stakeholders to lose faith in the ability of the software ever to return to an acceptable level of quality.

Technical debt can be used as a metric to demonstrate the current quality of software as well as to anticipate the trajectory of quality. Only defects found in production software should be included in technical debt; thus, in Agile environments in which some modules of software have been released while others are still in development, debt should be assessed as defects only in released software. A risk register can capture technical debt and essentially inform developers of the totality of work required to bring their software from its current quality to the level required by technical specifications.

While a static representation of technical debt is important, more telling can be a longitudinal analysis demonstrating debt velocity. By assessing the volume of defects on a periodic basis, stakeholders can better understand whether they are gaining or losing ground toward the desired level of quality. This type of analysis can demonstrate unhealthy software trajectories in which technical debt is outpacing the ability of developers to maintain software and can enable teams to more highly prioritize maintenance activities. Technical debt conversely can demonstrate to stakeholders which software products consume too many resources to maintain and would be better abandoned in favor of newly designed or newly purchased software. But to make these decisions, debt must be recorded and analyzed.
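If the risk register is captured as a data set, debt velocity can be approximated programmatically. The following hedged sketch assumes a hypothetical Risk_Register data set containing an Identified date variable and a Status variable; that structure is an illustrative assumption, not one prescribed in this text:

proc sql;
   * count unresolved register entries by the month they were identified;
   select intnx('month', identified, 0, 'b') as month format=monyy7.,
          count(*) as open_defects
      from risk_register
      where upcase(status)='OPEN'
      group by calculated month
      order by calculated month;
quit;

Run periodically, a tally like this shows whether open defects are accumulating faster than the team retires them, which is the essence of debt velocity.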

More than crippling software, technical debt can psychologically poison software development and operational environments. Where defects cause decreased performance or real failure, users may lose confidence in the software's ability to meet their needs. Developers will be frustrated by technical debt when it grows to become an obstacle from which recovery of quality seems unimaginable or improbable. And customers and other stakeholders can lose confidence in the development team for failing to remedy software defects and to prioritize maintenance activities into the SDLC.

While the elimination of technical debt is most commonly conceptualized as occurring through software maintenance, debt can also be eliminated by lowering expectations—that is, accepting that lofty or unrealistic software requirements should be abandoned or tempered in favor of obtainable goals. In the previous “End-User Development Maintenance” section, I told my coworker to put a sticky note on his monitor if he was baffled by undocumented macro parameters that could lead to software failure. He was hoping I would reduce technical debt with a programmatic solution to make the macro more robust and readable, while I was content to reduce debt by accepting riskier software; ultimately our customer made the decision. But this demonstrates that not all vulnerabilities, threats, or defects should be considered technical debt, especially those that have been accepted by the customer and that are never intended to be remedied. Yes, software deficiencies such as those my coworker exposed should be included in a risk register, but that register should also indicate the customer's acceptance of identified risks.

Maintainability plays an important role in reducing technical debt because it enables developers to perform maintenance activities more efficiently. Greater efficiency means more defects can be resolved in the same amount of time, facilitating software that can be operated at or above—rather than below—optimal performance levels. Moreover, when maintainability principles are espoused, developers tend to loathe maintenance less because their maintenance efforts are efficient and organized, rather than haphazard and defensive.

Measuring Maintenance

As with other static performance requirements, it's difficult to measure maintainability directly because it's assessed through code inspection rather than software performance. Thus, maintenance is often used as a proxy for maintainability, including both maintenance that has been performed and maintenance that has yet to be performed, often referred to as technical debt. Maintenance levels can generally be assessed relative only to a specific team or software product but, when tracked longitudinally, can yield tremendous information about the quality of software and even the quality of the software development environment.

A team that spends an inordinate amount of time maintaining software (as compared to developing software) could be producing poor-quality software that requires a tremendous amount of corrective maintenance. However, it could also be the case that standards or requirements are shifting rapidly, causing significant adaptive maintenance to ensure that software continues to meet the needs of its customers. Another possible cause of high maintenance levels is gold-plating, in which developers continue fiddling with and trying to perfect software, even to the point of unnecessarily exceeding technical specifications. Thus, while the volatility of technical needs and requirements will shape the maintenance landscape of a software product, the extent to which maintenance can be minimized through software development best practices will enable developers to invest a greater proportion of their time into development—be that programmatic development of software or subsequent nonprogrammatic development of data products.

Another way to assess maintenance levels is by inspection of the failure log, introduced in the “Failure Log” section in chapter 4, “Reliability.” All logged failures that were previously described in a risk register (described in chapter 1, “Introduction”) represent software vulnerabilities that were identified, were not mitigated or eliminated, and were later exploited to cause software failure. Software that continually fails due to known defects or errors reflects a lack of commitment to maintenance and possibly difficulty in maintaining software due to a lack of maintainability principles. On the other hand, if software is not required to be robust and reliable, it may often fail due to identified vulnerabilities, and these failures do not need to be corrected if their maintenance has not been prioritized.

Measuring Technical Debt

Technical debt, a measure of prospective maintenance, assesses maintenance irrespective of past software failure or past maintenance levels. Instead, technical debt represents the sum of maintenance tasks that have been identified and prioritized for completion yet not completed. Technical debt is distinguished from outstanding development tasks that have not been completed or possibly even started. It essentially represents work that was “completed” yet found to contain defects or errors, or work deemed complete despite lacking expected function or performance. Technical debt amounts to writing an IOU note in code, and until the IOU is paid, the debt remains.

For example, the following code is reprised from the “Specific Threats” section in chapter 6, “Robustness,” in which numerous threats to the code are enumerated. One threat is the possibility that the Original data set does not exist, which exposes the vulnerability that the existence of Original is not validated before its use. If software requirements state that the software should be reliable and robust to failure, then whether by design or negligence, this code incurs technical debt:

data final;
   set perm.original;
run;

Maybe the SAS practitioner completed the functional components of the code and intended to incorporate robustness at a later time. Or perhaps the developer did not think to test the existence of Original, thus not perceiving the threat. Regardless of intent or motive, technical debt was incurred but can be eliminated in part by first validating that Original does exist:

%if %sysfunc(exist(perm.original)) %then %do;
   data final;
      set perm.original;
   run;
   %end;

Other vulnerabilities still exist, but this change has made the software slightly more robust, provided that business rules and additional exception handling (not shown) either enter a busy-waiting cycle or dynamically perform some other action to drive program flow when the exception occurs. While technical debt is typically eliminated through software maintenance (which mitigates or eliminates risk), it can also be eliminated through risk acceptance and modification of requirements. For example, a developer could assess that the original code could fail for a dozen reasons but choose to accept these risks so he can press on to deliver additional functionality or other business value. This decision essentially lowers the expected reliability and robustness of the software but, in many environments and for many purposes, may be advisable because the corresponding risk is so low that it doesn't reduce software value.
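The additional exception handling mentioned above could take many forms; one hedged sketch adds an %ELSE branch to the existence check, with a log message as the illustrative exception routine (a busy-waiting retry or dynamic branch would be equally valid, depending on business rules):

%if %sysfunc(exist(perm.original)) %then %do;
   data final;
      set perm.original;
   run;
   %end;
%else %do;
   %* business rules could instead wait and retry or branch elsewhere;
   %put ERROR: PERM.ORIGINAL does not exist so FINAL was not created.;
   %end;

The point is that the exception path is now explicit in code rather than left for the user to discover at runtime.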

It's also important to conceptualize technical debt in terms of the entire SDLC, not just development tasks. If a latent defect is discovered and the decision is made to correct it (at some point), the defect adds technical debt until the corrective maintenance is completed. However, corrective maintenance should include not only fixing the code, but also performing other related SDLC activities, such as updating necessary documentation, unit testing, integration testing, regression testing, and software acceptance. In many cases, technical debt is accrued not through errors made in code but rather because developers are more narrowly focused on delivering functionality than performance. In other cases, the code correction may be complete, but until software integrity is restored through testing and validation, the debt is not fully resolved.

The risk register remains the best artifact from which to calculate technical debt, but it must itself be maintained. Vulnerabilities should not be removed until they have been eliminated or accepted, and elimination should always denote that software modifications were appropriately tested and, if necessary, sufficiently documented. The risk register is introduced and demonstrated in chapter 1, “Introduction.” In actuality, entries in the risk register are not deleted when they are resolved, but rather amended in some fashion, such as with a status or resolution date, to demonstrate completion and inactivity. Historical entries in a risk register—including both resolved and unresolved entries—can paint a picture of the development environment and its ability to manage, prioritize, and perform maintenance effectively.
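If the register is maintained as a data set, amending rather than deleting an entry is straightforward. This sketch assumes a hypothetical Risk_Register data set with Risk_ID, Status, and Resolved variables; the structure and the entry number are illustrative assumptions only:

proc sql;
   * amend the resolved entry rather than deleting it from the register;
   update risk_register
      set status='Resolved', resolved=today()
      where risk_id=42;
quit;

Because the row survives, longitudinal analyses of resolved versus unresolved entries remain possible long after the defect itself is gone.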

MAINTAINABILITY

That software must be maintained is a rather salient reality of all software products. That software can be intentionally designed to facilitate more efficient and effective maintenance is somewhat more of an enigma, unfortunately even among some developers and their customers. The effects of maintenance are sometimes immediately perceived, especially where corrective or emergency maintenance was required. Building maintainable software, on the other hand, doesn't fix anything, but rather makes software more readily fixed in the future. Thus, maintainability, like other static performance requirements, always represents an investment in software and has intrinsic benefits that can't be immediately demonstrated.

From Maintenance to Maintainability

The mind-set shift from maintenance-focused to maintainability-enabling requires a shift from a reactive “putting out fires” mentality to one of proactive fire prevention. Part of the reluctance to implement maintainability and other static performance requirements occurs because, while the efforts and effects of dynamic performance requirements are readily identified through software performance, those of static performance requirements are not. Making software more maintainable doesn't add function, make it run faster, or improve other measurable characteristics that can be assessed when software executes. The inherent advantages of maintainability are observable only in the future and only once software has failed or otherwise needs to be modified. Nevertheless, as teams perceive that they are investing too great a proportion of development effort into software maintenance, a sure way to reduce future maintenance is to increase software maintainability.

Principles of Maintainability

If maintainable software is the goal that will lessen maintenance woes, how do you get there, and how do you know when you've arrived? As a composite attribute, maintainability isn't easy to pin down because it represents an aggregate of other static performance characteristics discussed throughout later chapters. These include several other aspects of software quality:

  • Modularity—Modular software can be modified more easily because it is inherently composed of discrete chunks that are more manageable and understandable. Because modules should be loosely coupled and interact with a limited segment of software, maintenance requires less modification and testing to effect change.
  • Stability—Stable software resists the need for frequent modification because it flexibly adapts to its environment. Because stable software requires less maintenance, when maintenance is performed, it can be done in a proactive, enduring fashion.
  • Reusability—When software can be reused, including the reuse and repurposing of discrete modules of code in other software, maintainability is improved because software maintenance can be accomplished by changing code in one location to affect all software that uses those shared modules.
  • Readability—Software that is straightforward and easily understood can be modified more readily because less time is required to investigate and learn intricacies of code. This can be accomplished through comments and documentation, clear syntax, standardized formatting, and smaller single-function modules.
  • Testability—Testable software can be more readily validated to demonstrate its success as well as to show the results of exceptions or errors. Because software that is modified during maintenance activities must be subsequently tested and validated before being rereleased for production, the ease with which it can be tested expedites maintenance.

Complexities of Maintainability

Static performance attributes such as maintainability have no immediate benefit to software although they bear substantial long-term benefit. Stakeholders are more likely to disagree about the intrinsic value of software maintainability because it is difficult to define and quantify. Developers who best understand the technical intricacies of underlying code might want to instill maintainability principles to allay future maintenance when modifications are necessary. Yet they might face opposition from customers who instead value greater functionality that provides immediate, observable benefit. Or, conversely, customers who best understand the vision and long-term objectives of software might want to instill maintainability principles, but developers might be more focused on short-term technical requirements. Thus, a path to maintainable code requires clear communication between all stakeholders to identify its true value in software.

A tremendous psychological factor may also discourage stakeholders from discussing maintainability during software planning and design, when requirements are formulated. At the outset of a software project, discussion is forward-leaning and positively focused on creating excellent software. Discussing developer errors, software defects, software failure, and its eventual demise can be an uncomfortable conversation before even one line of SAS code has been written. Despite this discomfort, maintainability principles should at least be considered in requirements discussions to identify whether they can be implemented to facilitate later successful and efficient maintenance. If no other conversation is had, the discussion and decision about intended software lifespan will at least provide SAS practitioners and other stakeholders with some understanding of the role that maintainability should have within the software.

A further complexity of maintainability is its location at the intersection of software development and operational service. Technical requirements documentation typically specifies the required function and performance that software must achieve. In many environments, however, software operations are managed separately from development by an O&M team. Where development and operations teams are separate, they may espouse competing priorities, where the development team is focused on delivering initial functionality and performance while the O&M team bears the responsibility of software maintenance. In all environments, it's important for developers to understand O&M priorities and requirements—whether these are the responsibility of the developers or of a separate team—to ensure that software will sufficiently meet performance requirements once released.

Failure of Maintainability

A failure to maintain software leads to its eventual demise, as it may fall victim to defects or errors, a changing environment, or evolution of needs and requirements. On the other hand, a failure to develop maintainable software can increase technical debt and decrease software quality, as SAS practitioners must exert more effort to perform equivalent maintenance. A failure of maintainability thus decreases the efficiency with which developers can provide necessary modifications to software and, in some cases, ultimately contributes to the decision not to perform necessary software maintenance.

When software is not maintained, functional or performance failures are often cited as ramifications of poor maintenance practices. For example, software fails, a cause is determined, examination of the risk register reveals that the specific vulnerability had been identified four months earlier, and suddenly stakeholders are asking “Why wasn't the corrective maintenance performed?” A lack of maintainability in software, conversely, is often highlighted when maintenance activities take forever to complete. If a SAS practitioner must take days to modify software in a dozen different places just to effect a slight change in function or performance, a lack of maintainability (and modularity) may be the culprit. Or if your ETL software fails and it's such a rat's nest of poorly documented code that you have to call a former coworker who built it to beg for guidance—you may have a maintainability crisis on your hands.

Requiring Maintainability

Maintainability is difficult to convey in formal requirements documentation because it represents an aggregate of other static performance characteristics. When documenting technical specifications, it's typically more desirable to reference modularity, readability, or other attributes specifically, with the awareness that they collectively benefit maintainability.

One of the most important aspects of requirements documentation that will influence the inclusion of maintainability principles in software is the intended software lifespan. Ephemeral analytic software that may be run a few times and then discarded will be less likely to require maintenance. However, SAS software with a longer intended lifespan, especially software underpinning critical infrastructure, will require more cumulative maintenance throughout its lifespan, benefiting SAS practitioners who can more swiftly and painlessly perform that maintenance.

Critical software also often requires higher reliability and availability, as discussed throughout chapter 4, “Reliability,” and high availability requires that software recover quickly when it does fail. When corrective or emergency maintenance is required in the recovery period (during which software functionality is lost), the ease and speed with which maintenance can be performed demonstrates recoverability but is driven by maintainability principles. Thus, when technical specifications require a recovery time objective (RTO) or maximum tolerable downtime (MTD), described in the “Defining Recoverability” section in chapter 5, “Recoverability,” these objectives can be facilitated by incorporating static performance requirements into software.

WHAT'S NEXT?

In many software quality models, static performance attributes are subsumed under a maintainability umbrella, similar to how dynamic performance attributes may be subsumed under reliability. Modularity, more so than all other static performance attributes, contributes to maintainable software by improving software readability, testability, stability, and reusability. The next chapter demonstrates modular software design, which incorporates discrete functionality, loose coupling, and software encapsulation.
