
Domain 4

Security Architecture Analysis

INFORMATION SYSTEMS ARCHITECTURE depends on many factors, such as awareness of threats, the identification of risk, and the value of data, as well as attention to standards and best practices as they apply to the systems being designed and operated. The security architect should apply the best practices and standards for network and information systems design and implement an architecture that will provide an appropriate level of security and reliability for the enterprise, given the requirements and constraints within which they have to operate. This requires the evaluation and selection of different architectures, as well as an understanding of the risks associated with each type of design. Key areas of knowledge include:

  1. Analysis of design requirements

  2. Valuation of data

  3. Design architecture

  4. Understanding information systems security standards and guidelines

  5. Assessment of the information systems security design effectiveness

  6. Attack vectors

A system can be defined as an integrated collection of people, products, and processes that provides the capabilities needed to satisfy the stated needs or objectives of the design.

Requirements analysis begins with understanding what the customer’s goals and needs are for the system. This is done through the process of gathering requirements that are designated by the customer. Sources include Statements of Work (SOW), contract documents, system specifications, Service Level Agreements (SLA), Requests for Proposals (RFP), legal and regulatory documents, interviews with designated representatives of the business such as stakeholders and power users, and any other information that bears on system design and therefore needs to be taken into account in the design phase.

Functional requirements come in the form of business or mission needs of the customer, who is looking to automate a particular set of capabilities and functionality. These requirements reflect those needs and desires. They are also driven by the need to satisfy business policy and regulatory compliance.

Security requirements complement system functional requirements by addressing the need to protect the information system, its data, and its users. Security requirements flow down from the functional requirements. They are typically addressed separately from functional requirements, but at the same time they complement the functional requirements by addressing concerns regarding the confidentiality, integrity, availability, and accountability of the information system, along with the system’s protection needs.

Requirements analysis is critical to the success of a project. In systems design, requirements analysis encompasses those tasks that go into determining the conditions that must be met by new or altered products, taking into account any conflicting requirements from various stakeholders. It is sometimes referred to as requirements gathering, requirements capture, or requirements specification. Requirements must be actionable, measurable, testable, related to identified business needs or opportunities, and defined to a level of detail sufficient for system design. Requirements analysis is used to develop functional and performance requirements. Customer requirements are translated into a set of requirements that define what the system must do and how well it must perform.

The goal is to ensure that the requirements are understandable, unambiguous, complete, and concise. Requirements analysis must clarify and define functional requirements and design constraints. Functional requirements define quantity (how many), quality (how good), coverage (how far), time lines (when and how long), and availability (how often). Design constraints define those factors that limit design flexibility, such as environmental conditions, defense against internal or external threats, and contract, customer, or regulatory standards.

Requirements are captured in a variety of ways. Usually, they are captured in a table, spreadsheet, or database. There are many software packages available today that not only allow for the capture of requirements, but also contain features that allow the requirements to be traced to the solution, develop test cases, and contain pointers back to the requirement to ensure validation.
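The traceability features described above can be sketched as a minimal data structure. This is only an illustration of the concept; the requirement IDs, field names, and test-case identifiers below are hypothetical, not drawn from any particular tool.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """One row of a hypothetical requirements traceability matrix."""
    req_id: str
    statement: str                                  # what the system must do
    source: str                                     # e.g., SOW, SLA, regulation
    test_cases: list = field(default_factory=list)  # pointers back for validation

    def is_traceable(self):
        # A requirement can be validated only if at least one test case traces to it.
        return len(self.test_cases) > 0

reqs = [
    Requirement("SR-001", "Remote logins require two-factor authentication",
                "Security policy", ["TC-017"]),
    Requirement("SR-002", "Audit logs retained for 90 days", "Regulatory", []),
]

# Flag requirements that still lack a validation test case.
untraced = [r.req_id for r in reqs if not r.is_traceable()]
print(untraced)  # ['SR-002']
```

A real requirements-management package adds bidirectional tracing, baselining, and change history on top of this basic structure.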

Security requirements typically come from two sources: best practices that are industry standards for safety and security, and regulatory requirements that are mandated by federal, state, local, or international law. Additional requirements are sometimes included that may be considered unnecessary but may be forward looking for future growth. In the absence of functional, legal, or regulatory requirements levied on the system, at a minimum, the security architect should recommend and insist on implementing industry best practices as a measure of due diligence and ethics.

TOPICS

-   Identify Security Architecture Approach

-   Types and scope (e.g., enterprise, network, SOA)

-   Frameworks (e.g., Sherwood Applied Business Security Architecture (SABSA), Service-Oriented Modeling Framework (SOMF))

-   Supervisory Control and Data Acquisition (SCADA) (e.g., process automation networks, work interdependencies, monitoring requirements)

-   Perform Requirements Analysis

-   Business and functional needs (e.g., locations, jurisdictions, business sectors, cost, stakeholder preferences, quality attributes, capacity, manageability)

-   Threat modeling

-   Evaluate use cases (e.g., business rules and control objectives, misuse, abuse)

-   Gap analysis

-   Assess risk

-   Apply maturity models

-   Design Security Architecture

-   Apply existing information security standards and guidelines (e.g., ISO/IEC, PCI, NIST)

-   Systems Development Life Cycle (SDLC) (e.g., requirements traceability matrix, security architecture documentation, secure coding)

-   Application Security (e.g., Commercial Off-the-Shelf (COTS) integration)

-   Verify and Validate Design

-   Validate threat model (e.g., access control attacks, cryptanalytic attacks, network attacks)

-   Evaluate controls against threats and vulnerabilities

-   Remediate gaps

-   Independent verification and validation

OBJECTIVES

Security Architecture Analysis depends on diligence and attention to standards, awareness of threats, and identification of risks. The Security Architecture Professional should:

-   Know and follow the best practices and standards for network and information systems design

-   Implement an architecture that will provide adequate security to accomplish the business goals of the enterprise

-   Evaluate and select appropriate architectures

-   Understand the risks associated with each type of design

Risk Analysis1

A risk analysis should be conducted to determine the requirements and any risk to the system or the data it processes, stores, or transmits. Risks should be mitigated to an acceptable level. There are numerous risk analysis methods, including Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE), National Institute of Standards and Technology (NIST) Special Publication 800-30, and ISO/IEC 27005. Broadly, risk analysis can be categorized as either quantitative or qualitative in approach.

Quantitative Risk Analysis

This approach employs two fundamental elements: the probability of an event occurring and a value or measure of the loss should it occur.

Quantitative risk analysis produces a single figure from these elements, called the Annual Loss Expectancy (ALE) or Estimated Annual Cost (EAC). It is calculated for an event by multiplying the potential loss from a single occurrence of the event, the Single Loss Expectancy (SLE), by the estimated number of occurrences over one year, the Annualized Rate of Occurrence (ARO). The formula for this is as follows:

ALE = SLE * ARO
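The formula can be expressed directly in code. The dollar figure and occurrence rate below are purely illustrative, not recommended values.

```python
def annual_loss_expectancy(sle, aro):
    """ALE = Single Loss Expectancy (SLE) x Annualized Rate of Occurrence (ARO)."""
    return sle * aro

# Hypothetical figures: an outage costing $25,000 per event,
# expected to occur once every two years (ARO = 0.5).
ale = annual_loss_expectancy(25_000, 0.5)
print(ale)  # 12500.0
```

The resulting ALE gives a yearly cost figure that can be compared directly against the annual cost of a proposed countermeasure.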

Qualitative Risk Analysis

This is by far the most widely used approach to risk analysis. Probability data is not required and only estimated potential loss is used.

Most qualitative risk analysis methodologies make use of a number of interrelated elements:

THREATS

These are things that can go wrong or that can ‘attack’ the system. Examples might include fire, fraud or hacking. Threats are present for every system.

VULNERABILITIES

These are weaknesses that make a system more prone to attack by a threat, or that make an attack more likely to have some success or impact.

CONTROLS

These are the countermeasures for vulnerabilities. There are four types:

  1. Deterrent controls reduce the likelihood of a deliberate attack.

  2. Preventative controls protect vulnerabilities and make an attack unsuccessful or reduce its impact.

  3. Corrective controls reduce the effect of an attack.

  4. Detective controls discover attacks and trigger preventative or corrective controls.
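As a rough sketch, the four control types above can be modeled as a simple lookup. The example controls listed are illustrative only, not an exhaustive or authoritative taxonomy.

```python
# Hypothetical examples mapped to the four control types described above.
CONTROL_TYPES = {
    "deterrent":    ["warning banners", "visible CCTV", "acceptable-use policy"],
    "preventative": ["firewalls", "access control lists", "input validation"],
    "corrective":   ["backups and restore procedures", "failover to standby systems"],
    "detective":    ["intrusion detection systems", "audit log review",
                     "file integrity monitoring"],
}

def classify(control):
    """Return the control type for a known example, or None if unclassified."""
    for ctype, examples in CONTROL_TYPES.items():
        if control in examples:
            return ctype
    return None

print(classify("firewalls"))  # preventative
```

Note that a single real-world control can fall into more than one category (an IDS that triggers a firewall rule is both detective and preventative); this sketch deliberately keeps one category per example.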

Risk Theory

It is unrealistic to think that 100% protection against all possible threats, at all times, is attainable or even desirable. Organizations require a risk-based management process that weighs potential impacts or losses, which may be expected to occur in the presence of a given vulnerability (with a particular threat likely), against the business resource cost of mitigating or eliminating the risk. The qualitative expression of this approach is as follows:

R (risk) = (V (vulnerability) × T (threat) × I (impact)) / C (countermeasures)
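A minimal sketch of this qualitative expression in code, assuming each element is scored on an ordinal scale (for example, 1 to 5) and that the countermeasure score is greater than zero. The scores used are hypothetical.

```python
def qualitative_risk(vulnerability, threat, impact, countermeasures):
    """Risk = (Vulnerability x Threat x Impact) / Countermeasures.

    Inputs are ordinal scores (e.g., 1-5); countermeasures must be > 0,
    since stronger controls divide down the resulting risk value.
    """
    return (vulnerability * threat * impact) / countermeasures

# Hypothetical scoring: a highly exposed system (4) facing a likely threat (3)
# with severe impact (5), partially offset by existing controls (2).
print(qualitative_risk(4, 3, 5, 2))  # 30.0
```

The absolute number is not meaningful on its own; its value lies in ranking risks relative to one another so countermeasure spending can be prioritized.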

Risk assessments evaluate the sensitivity and criticality of the system or application data to the vulnerabilities, threats, impacts, and potential countermeasures that may exist in its environment. A risk analysis includes the following activities:

-   Develop a business case

-   Perform system characterization

-   Conduct threat analysis

-   Perform impact analysis

-   Perform vulnerability and control analysis

-   Develop a risk mitigation strategy

-   Determine the risk level

-   Report the residual risk

By conducting these risk assessment activities, the security architect can focus security countermeasures where they provide the most protection for the system and data.

No matter what the size of the enterprise may be, the following steps should be completed when defining the security requirements:

Step 1 - Identify requirements - Requirements come from formal proposals, statements of work, specifications, industry best practices, and other sources to form a baseline. Also included are user needs, discussions with managers, and reviews of existing documentation for the operational system. The results should be presented and discussed with the business owners and stakeholders.

Step 2 - Verify and validate requirements - Before finalizing the baseline requirements, they should be verified and validated with the stakeholders to gain consensus. This is crucial because stakeholders’ understanding of the requirements and the vetting process will help avoid scope creep, schedule delays, and general confusion.

Step 3 - Document the requirements - Requirements documentation provides a basis for architecting and designing the solution. Key personnel who are present at the beginning of the project may not be there throughout the development life cycle. Therefore, it is an excellent idea to get the requirements signed off by the stakeholders.

In defining requirements, careful consideration should be given to how a requirement is crafted. It is worth the extra time to develop and vet good requirements. Why is this so important? Well, experience has shown that incomplete or missing requirements are the major reasons for unsuccessful projects, resulting in a greater number of system defects 5. These defects eventually surface late in the development phase or after delivery to the users and end up as punch list items that must be addressed before final sign-off. System defects are very time consuming and expensive to correct. Poorly written requirements can also lead to a continuous stream of new requirements designed to fill the gaps and inadequacies found throughout the project. New requirements cause a great deal of rework and extend development time and costs. This is a form of scope creep that should be avoided.

Requirements that are ambiguous, untestable, and not capable of fully satisfying the identified needs of the users can cause higher development costs, schedule slippage, and unhappy customers. Therefore, organizations must emphasize the importance of requirements definition to ensure that they are clear, meaningful, effective, and efficient.

Table 4.1 provides an example of two requirements that are analyzed to interpret the meaning of the requirement: what it takes to satisfy the requirement and the validation of the requirement. In this case, identification and authentication and access control is reviewed. Extensive details are given to explain and understand the requirement and a number of ways it can be satisfied and validated.

Attack Vectors

The security architect should be intimately familiar with the most common types of attacks in order to select countermeasures that will be able to successfully combat them. An attack vector is a path or means by which an attacker can gain access to a computer or network server in order to deliver a payload, resulting in a malicious outcome. Attack vectors enable hackers to exploit system vulnerabilities, including the human element. Attack vectors can include viruses, e-mail attachments, Web pages, pop-up windows, instant messages, chat rooms, and deception. All of these methods involve programming or, in a few cases, hardware, except deception, in which a human operator is fooled into removing or weakening system defenses.

To some extent, firewalls and antivirus software can block attack vectors. But no protection method is totally attack proof. An effective defense method today may not remain so for long, because hackers are constantly updating attack vectors and seeking new ones, in their quest to gain unauthorized access to networks and computer systems. Some of the most common malicious payloads are viruses, Trojan horses, malware, and rootkits.

Methods of “Vector” Attack

Attack “vector” refers to the method of attack: the attacker’s choice of weapon to infiltrate a system, with the end goal of gaining control and/or compromising the system’s normal operations in a variety of ways. E-mail attachments are a prime example, because it is often easy to get them past the firewall or screening device. In most cases, the human element is the end target. Users are often THE weakest link in the system; by clicking to open an attachment, they themselves carry out the action that makes this form of attack so popular and successful.


Table 4.1 - Requirements Analysis and Validation

Do not confuse the attack vector with the attack payload, however. Historically, worms, for example, would always count on some vector to grant them access to a system, and they usually carried malware, a virus, or a Trojan as their payload. The term “worm” alludes to the nature of their behavior and replication mechanism: worms gain access to a computer, self-replicate, and then crawl out over a network (LAN or WAN) to infect other computers, with no help or additional support from a human or any other system. Trojan horses, spyware, malware, keyloggers, hijackers, etc., are the kinds of payloads worms are capable of delivering. All attacks against systems combine a payload with a vector.

While ordinary virus based attacks have been declining, Trojan horses and malware attacks have been on the rise, as hostile software developers move to more damaging types of attacks 6. The number of overall attacks launched against enterprise systems has been increasing dramatically as a result. The attack vectors described below are how most of these attacks are being launched today.

Attack by E-Mail

E-mail messages themselves have become vectors, even though attacks using attachments are still more common. The hostile content is either embedded in the message, or linked to the message in some way. Sometimes, attacks combine the two vectors, so that if the message does not infect the client, the attachment will. E-mail provides a convenient delivery vehicle for deceptions of all kinds. The weak point is the ignorance or imprudence of the computer user, not the computer itself. E-mail attacks continue to advance in sophistication. Criminals are combining their tricks with the techniques of spammers to make these attacks more effective. Millions of messages can be sent out in the hope that a large number of people will be duped.

Spam is almost always a carrier for scams, fraud, dirty tricks, or malicious actions of some kind. Any link that offers something “free” or tempting should be considered suspect, and a user acting on a spam message will usually bring about a negative outcome. Attachments (and other malicious files) are one of the most powerful ways to attack a PC, as they are a simple way to deliver a highly effective payload. Although they are being overtaken by Web page trickery, attachments still pose a major threat to the enterprise systems security architecture due to the rising prevalence of the “Bring Your Own Device” (BYOD) phenomenon and its associated implications for the security architect7.

Attack by Deception

Social engineering in the form of deception is aimed at the user/operator of a computer or a system as the vulnerable entry point. It is not just malicious computer code that organizations need to watch out for. Fraud, scams, hoaxes, and to some extent spam, not to mention viruses, worms, and such, require the unwitting cooperation of the computer user to succeed. Social engineering is the art of convincing people to do something they would not ordinarily do, such as giving up a valuable secret, or exposing a secure system to unauthorized access. Malware developers use social engineering techniques in spam to con people into doing careless things, such as opening attachments that carry viruses and worms or using the telephone to get passwords or other sensitive information.

Hoaxes

Hoaxes are another form of deception that is often used as an attack vector. Ignorance and gullibility are the targets that attackers seek to take advantage of. Some hoaxes can result in an exponentially growing number of messages that can easily swamp an e-mail system; others trick people into damaging their own PC by deleting necessary files.

Hackers

Originally, “hacker” was a term of respect for experts who could do “cool” things with computers. Some hackers crossed over to the dark side; these villains are more properly known as crackers, although the distinction is not often made in the popular press. That annoys hackers, who like to think of themselves as white hats, a.k.a. the good guys. Hackers are a formidable attack vector because, unlike ordinary malicious code, people are flexible and can improvise when faced with a dynamic landscape. They use a variety of hacking tools, heuristics, and social engineering mechanisms to gain access to computers and online accounts. They often install a Trojan horse so that they can commandeer the computer for their own use at a later time, or repeatedly over a period of time as needed, both to execute the initial breach and to carry out any subsequent follow-up access to gain advantage and control within one or more systems.

Web Page Attack

Counterfeit Web sites are often used to extract personal information from people. They look very much like the genuine Web sites that they seek to imitate. Organizations and individuals think they are doing business with someone they trust, but they are really giving personal and business information, such as name, address, and credit card numbers to a scam artist. They are often used in conjunction with spam, which provides the delivery mechanism that gets the user there in the first place.

Pop-up Web pages can install spyware, adware, hijackers, Trojans, or other scamware. They may even close Internet connections and then make very expensive phone calls using an organization’s VoIP system or PBX. All of these activities are malicious.

Attack of the Worms

Many worms are delivered as attachments via e-mail, but network worms use holes in network protocols to propagate themselves directly between hosts. The Windows DCOM vulnerability was a prime example of this type of behavior8. Any remote access service, such as file sharing, was vulnerable to this sort of worm. These worms propagate without relying on victims to open attachments. In most cases, a firewall will block such worms, or vulnerable services can be disabled to prevent spread to unaffected systems.

Many worms install Trojan horses. Some can disable ordinary antimalware software and then install the worm’s payload. Next, they begin scanning the Internet from the computer they have just infected, looking for other computers to infect. If the worm is successful, it propagates rapidly, and the worm owner soon has thousands of “zombie” computers, or bots, to use for more mischief9. One of the latest trends in worm-based attacks is the use of a worm to spread a ransomware payload to multiple systems. Ransomware is a form of malware that infects a targeted PC or system and then prevents normal system operation and usage until a “ransom” has been paid or somehow provided to the controlling agent behind the attack10.

Malicious Macros

Many documents, such as those generated by Microsoft Word and Excel, allow the use of macros. A macro can be used to automate spreadsheets, forms, or document templates, for example. The problem is that macros can also be used for malicious purposes, as they can be used to attack a computer directly11. A malicious macro may come from anybody, and users who have picked one up will find that their documents contain a copy of it. Ensuring that the most secure settings possible for macro usage are deployed and used within the enterprise is the best way for the security architect to have a positive impact in this area. See Figure 4.1 for an illustration of the macro settings available in Microsoft Office 2010.


Figure 4.1 - Trust Center Macro Settings in Microsoft Office 2010

Instant Messaging, IRC, and P2P File-Sharing Networks

These three Internet services rely on connections between a host computer and other computers on the Internet. When using them, the special peer-to-peer (P2P) software installed makes the host machine more vulnerable to hostile exploits. Just as with e-mail, the most important things to be wary of are attachments and Web site links provided either as part of the installation of this software in the first place, or as a result of information exchange through the software once it is installed and configured for use.

Spyware is software that adds hidden components to a system on the sly. It is often bundled with some attractive software or other bait. The stealth process is installed without the knowledge of the end user and then will be used by an external control mechanism to execute remote control over the PC or system.

Viruses

Viruses are malicious computer code that hitches a ride as an attachment or through some other form of transmission, such as the infection of a file; this behavior makes them the “payload”. The original virus vector, the floppy disk, carried infected files from machine to machine as disks were exchanged between users to transfer files. Now, virus vectors include e-mail attachments, downloaded files, worms, USB drives, and more [Happy Trails 2009].

Asset and Data Valuation

In the world of data protection, prevailing practices imply that every piece of an organization’s data is equal to every other piece. This also holds true for all other company assets. Companies do not always see the same value in intangible assets that they do in tangible ones. They tend to use a “one-size-fits-all” approach in both cases, for instance, ensuring that hardware, software, and data are included in the recovery plan for backup, redundancy, business continuance, and other data protection. As a result, priceless data may be poorly protected, whereas relatively unimportant files consume disproportionate amounts of time and resources. What is missing is the concept of asset and data valuation as part of the initial business assessment12.

This assessment should take into account the physical infrastructure, information systems (hardware), people, facilities, and the like. Architecting the system depends on more than just protecting the data. Consider the defense-in-depth approach. It calls for policies, procedures, technology, and personnel to be considered in the system security development process. The customer requirements as well as the regulatory and statutory requirements must also be satisfied within the design functions of the system architecture. Classification of information into categories will be necessary to help identify a framework for evaluating the information’s relative value and the appropriate controls required to ensure its value to the organization.

Different types of data have different values when placed in the context of their business use. One way of determining the value of specific information is in conducting a Business Impact Analysis (BIA) based on how individual departments or business units would be affected if their systems were compromised under a denial-of-service attack, or the data was lost or deleted 13. How quickly could the system be brought back into service? How long would it take to restore the data in the event of such a loss? Only then can the individual or the enterprise determine the appropriate data protection, storage services, and redundancy required for each type of data.
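The BIA outcome described above can be sketched as a simple mapping from tolerable downtime to a protection approach. The tier names and hour thresholds below are hypothetical placeholders; real values come from the organization’s own impact analysis.

```python
def protection_tier(max_tolerable_downtime_hours):
    """Map a BIA finding (how long the business can survive without the data)
    to a hypothetical data-protection tier. Thresholds are illustrative only."""
    if max_tolerable_downtime_hours < 1:
        return "high-availability redundant storage"
    if max_tolerable_downtime_hours < 24:
        return "daily replication with rapid restore"
    return "standard scheduled backup"

print(protection_tier(0.5))  # high-availability redundant storage
print(protection_tier(72))   # standard scheduled backup
```

The point of the sketch is the direction of the dependency: the protection investment follows from the measured business impact, not the other way around.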

Context and Data Value

Let us take a look at the concept of data valuation in more detail, beginning with an examination of the importance of organizational data. What data merits the greatest data protection investment? According to common practices, core company databases and mission-critical data files are the most obvious choices. As these are typically housed on central servers, Information Technology (IT) departments tend to devote their energies to backing up these files at specific periods (often on a daily or weekly basis), while also providing redundancy through Redundant Array of Inexpensive/Independent Disks (RAID) systems or server replication and other advanced means of data protection. While this approach is certainly better than leaving everything to fate, it neglects the concept of value. For instance, just how impacted would the organization be if it suffered a catastrophic failure of corporate systems? Perhaps all the database records are immediately recoverable, but what about the operating system and applications harnessing that database? Without those, the database is useless. For example, what is the Maximum Time to Repair (MTTR) access to the data or system in question? If the organization can survive a few days without its information systems while operating systems and applications are reinstalled and database files restored, then it may not need to invest in high-availability redundant data storage or protection methods to continue doing business. However, if the company would effectively “die” due to such an event, or would suffer massive losses, then investment in a high-availability redundant system should be a high priority to mitigate such threats. 

According to David Paulison, a former executive director of the United States’ Federal Emergency Management Agency, 40-60 percent of small businesses do not reopen their doors after a disaster.14 It is the security architect’s responsibility to drive this dialog with the business, ensuring that the proper outcomes are derived from the BIA and, as a result, that the right design choices and architectures are implemented to mitigate any reasonable issues the BIA has uncovered.

Corporate versus Departmental: Valuation

Another aspect of context is relative position on the organizational chart. From the standpoint of the enterprise, perhaps only key database files deserve the highest priority in data protection. While that may be correct for the company as a whole, departments or remote offices may require different priorities. Thus, each echelon of management should consider its own data protection needs and take actions accordingly.

In Company X, for example, the sales and marketing database may be assigned the greatest importance. However, at a local level, a software engineering department would probably have a completely different set of priorities and, therefore, different data protection needs. While the corporate IT department may be taking care of database backup and protection, local IT personnel management needs to ensure that their critical information is safeguarded, either by handling storage management locally or by justifying the need to protect that data to the corporate IT department.

Business, Legal, and Regulatory Requirements

Business requirements vary depending on the type of business or enterprise being examined. Parts of the U.S. healthcare industry must comply with the Health Insurance Portability and Accountability Act (HIPAA)15. Other U.S. organizations that handle personal/privacy data may have compliance requirements under the Sarbanes–Oxley Act of 200216. Organizations that process credit card information must comply with the Payment Card Industry Data Security Standard (PCI DSS)17. Other U.S. businesses and government agencies must comply with the Gramm–Leach–Bliley Act (GLBA)18 and FISMA19.

Most European countries’ data protection laws follow principles detailed in two EU directives, whether or not these countries are part of the European Union. These directives are

  1. Directive 95/46/EC of the European Parliament on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of Such Data (commonly called the Data Protection Directive) 20 and

  2. Directive 2002/58/EC Concerning the Processing of Personal Data and the Protection of Privacy in the Electronic Communications Sector 21. The first directive applies to the collection, storage, disclosure, and other uses of personal data. The second directive addresses the use of “cookies” and places restrictions on spam, telemarketing, and interception of communications and traffic data 22 [IT Law Group].

As stated earlier, the security architect needs to understand what is specifically called out in national and international policies that affect any systems they are responsible for. Furthermore, the ongoing movement of data and its supporting infrastructure and services into the cloud, and therefore into a “boundless” architecture rather than the specifically defined “boundary-based” architecture that organizations have been used to under traditional IT models, is causing tremendous shifts in this area of responsibility for security architects and, more broadly, for IT professionals today. To address cloud-based system design and usage, as well as security, disaster recovery, and business continuity planning, the security architect must now understand areas such as the transitory pathways data follows as it moves between and among cloud-based storage endpoints over third-party networks, along with the virtualization of both the end user’s desktop (VDI) and the infrastructure operating systems and application loads that create, process, store, and access data.

For instance, privacy laws in one country differ from those of another. Interconnection agreements between countries must be considered, along with ground rules that express an understanding of how these countries can operate effectively across international borders. Typically, the most stringent policy prevails, because complying with it also satisfies the less stringent policy. But what if there is a disagreement among international parties? How does it get resolved? Memorandums of agreement or memorandums of understanding can help identify a mutually acceptable solution.

Product Assurance Evaluation Criteria

The Common Criteria (CC) was born out of the necessity to expand product security assurance programs in the United States, Canada, United Kingdom, France, and Germany. The goal of the program was to establish a high degree of assurance that products would consistently perform the security function safely and securely when handling data and that failure would not result in the compromise of sensitive information. The expansion of the program also provided a broader market for those products completing the evaluation process by allowing international sales to the nations participating in the program. Some participating nations mandate the use of these products in their information systems. This mandate has translated into requirements for the system under development.

Product evaluations began in the United States with the Orange Book (TCSEC), which was the criterion for evaluating secure systems and vendor products. The Orange Book defined an assurance range from class D up to class A1. The D class required the least rigorous testing, while the A class relied on formal verification methods.

The Orange Book addressed only confidentiality. It was part of the Rainbow Series, a set of security guidance documents named after their colorful covers; each cover color addressed a different security topic. The Orange Book and the Rainbow Series were developed by the United States’ National Security Agency (NSA), and all certified products were tested by the NSA23. Over time, a backlog of evaluations introduced delays that made the process less cost-effective: by the time a product completed evaluation, it might already be at the end of its life cycle. Businesses began to lose interest in this process because there was little return on the investment of time and money, and they were also interested in selling their secure products in the international market.

The next evaluation criteria, the ITSEC, was created by France, Germany, the Netherlands, and the United Kingdom, with other European nations participating. This evaluation criteria addressed integrity as well as confidentiality and was a step in the right direction. ITSEC had classes of assurance for products, but the process did not go far enough, so discussions began on a common set of standards that could be agreed to by a consortium of countries; the Common Criteria was established as a direct result of these efforts.

Some countries that have signed the arrangement discussed in the following text mandate the use of these evaluated products in their government information systems. For instance, a device such as a firewall seeking an Evaluation Assurance Level (EAL) 4 certification must meet all the requirements set in the criteria for that level of assurance. While conducting a requirements analysis, the security architect must include the functional requirements for that device as identified in the Common Criteria [Common Criteria, 2006].

Common Criteria (CC) Part 1

The Common Criteria philosophy is to provide assurance based on an evaluation (active investigation) of the IT product that is to be trusted. Evaluation is the traditional means of providing assurance and is the basis for prior evaluation criteria documents. The Common Criteria proposes to use expert evaluators to measure the validity of the documentation and the resulting IT product with increasing emphasis on scope, depth, and rigor. It does not comment on the relative merits of other means of gaining assurance. Researchers continue looking for alternative ways of gaining assurance. As mature alternative approaches emerge from these research activities, they will be considered for inclusion in the Common Criteria.

The Common Criteria provides a common set of requirements for the security functionality of IT products and for assurance measures applied to the IT products during a security evaluation. These IT products may be implemented in hardware, firmware, or software. The evaluation process establishes a level of confidence that the security function of IT products as well as the assurance measures applied to these IT products meet these requirements. The evaluation results may help the security architect and the consumers determine whether these IT products fulfill the security needs of the system.

The Common Criteria is useful as a guide for the development, evaluation, or procurement of IT products with security functionality. It addresses protection of assets from unauthorized disclosure, modification, or loss of use. The categories of protection relating to these three types of failure of security are commonly called confidentiality, integrity, and availability, respectively. The Common Criteria may apply to risks arising from human activities (malicious or otherwise) and to risks arising from nonhuman activities. It may also be applied in other areas of IT depending on the nation’s security policies, but makes no claim of applicability in these areas [Common Criteria 2006].

The latest version of the Common Criteria is version 3.1R4. It is based on version 2.3, updated with a number of interpretations and editorial changes that have no impact on the technical content. These standards have also been published as ISO/IEC 15408:2005 and ISO/IEC 18045:2005 by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). The Common Criteria consists of three parts:

Part 1 - Introduction and General Model is the introduction to the Common Criteria. It defines general concepts and principles of IT security evaluation and presents a general model of evaluation. Part 1 also presents constructs for expressing IT security objectives, for selecting and defining IT security requirements, and for writing high-level specifications for products and systems. In addition, the usefulness of each part of the Common Criteria is described in terms of each of the target audiences.

Part 2 - Security Functional Requirements establishes a set of functional components as a standard way of expressing the functional requirements for the Target of Evaluation (TOE). Part 2 catalogs the set of functional components, families, and classes.

Part 3 - Security Assurance Requirements establishes a set of assurance components as a standard way of expressing the assurance requirements for the TOE. Part 3 catalogs the set of assurance components, families, and classes. Part 3 also defines the evaluation criteria for Protection Profiles (PP) and Security Targets (ST) and presents the evaluation assurance levels that define the predefined Common Criteria scale for rating assurance for the TOE, called the Evaluation Assurance Level (EAL).

The Common Criteria is underpinned by an international mutual recognition arrangement. The purpose of this arrangement is to advance those objectives by bringing about a situation in which IT products and protection profiles that earn a Common Criteria certificate can be procured or used without the need for further evaluation. It seeks to provide grounds for confidence in the reliability of the judgments on which the original certificate was based by requiring that a Certification/Validation Body (CB) issuing Common Criteria certificates meet high and consistent standards [Common Criteria Part 1, 2006].

A management committee, composed of senior representatives from each signatory country, was established to implement the arrangement and provide guidance to the respective national schemes conducting evaluation and validation activities. The list of current arrangement members is discussed in the following text.

In October 1998, after 2 years of intense negotiations, government organizations from the United States, Canada, France, Germany, and the United Kingdom signed the historic recognition arrangement for Common Criteria-based IT security evaluations. The arrangement officially known as the Arrangement on the Mutual Recognition of Common Criteria Certificates in the Field of IT Security was a significant step forward for government and industry in IT product and protection profile security evaluations. The U.S. Government and its foreign partners in the arrangement share the following objectives with regard to evaluations of IT products and protection profiles:

  1. Ensure that evaluations of IT products and protection profiles are performed to high and consistent standards and are seen to contribute significantly to confidence in the security of those products and profiles.

  2. Increase the availability of evaluated, security-enhanced IT products and protection profiles for national use.

  3. Eliminate duplicate evaluations of IT products and protection profiles.

  4. Continuously improve the efficiency and cost-effectiveness of security evaluations and the certification/validation process for IT products and protection profiles.

In October 1999, Australia and New Zealand joined the Mutual Recognition Arrangement, increasing the total number of participating nations to 7. Following a brief revision of the original arrangement to allow for the participation of both certificate-consuming and certificate-producing nations, an expanded Recognition Arrangement was signed in May 2000 at the 1st International Common Criteria Conference by Government organizations from 13 nations. These included the United States, Canada, France, Germany, the United Kingdom, Australia, New Zealand, Italy, Spain, the Netherlands, Norway, Finland, and Greece.

The State of Israel became the 14th nation to sign the Recognition Arrangement in November 2000. As of December 2012, 26 countries are currently part of the Common Criteria Recognition Agreement (CCRA). Sixteen countries (Australia, Canada, France, Germany, Italy, Japan, Malaysia, Netherlands, New Zealand, Norway, South Korea, Spain, Sweden, Turkey, the United Kingdom, and the United States) are Certificate Producers, and ten countries (Austria, Czech Republic, Denmark, Finland, Greece, Hungary, India, Israel, Pakistan, and Singapore) are Certificate Consumers 24.

Products seeking Common Criteria evaluation begin the process in a certified laboratory25. These commercial laboratories agree to use stringent principles and test methods approved by the National Information Assurance Partnership (NIAP) members26. The National Voluntary Laboratory Accreditation Program (NVLAP) provides third-party accreditation to testing and calibration laboratories27. The NVLAP was established in response to Congressional mandates, administrative actions by the federal government, or requests from private-sector organizations28.

The NVLAP must be in full conformance with ISO/IEC standards, including ISO/IEC 17025 and Guide 58. NVLAP accreditation is required before a laboratory can become a Common Criteria Testing Laboratory. The accreditation ensures that Common Criteria laboratories meet the requirements of ISO/IEC Guide 25, General Requirements for the Competence of Calibration and Testing Laboratories, and the specific Common Criteria Evaluation and Validation Scheme requirements for IT security evaluations [Common Criteria, 2006].

Common Criteria (CC) Part 2

Part 2 of the Common Criteria defines the security functional components; it is the heart of the Common Criteria process. The security functional requirements are expressed in a PP or ST and describe the security behavior the TOE is expected to exhibit. They describe security properties that users can detect by direct interaction with the TOE (i.e., inputs, outputs) or by the TOE’s response to stimulus. Security functional components express security requirements intended to counter threats in the assumed operating environment of the TOE and to cover any identified organizational security policies and assumptions.

This part of the Common Criteria and the associated security functional requirements are not meant to be a definitive answer to all the problems of IT security. Instead, it offers a set of well-understood security functional requirements used to create trusted products reflecting the needs of the market. These security functional requirements are presented as the current state of the art in requirements specification and evaluation [Common Criteria, 2012].

This part of the Common Criteria contains a catalog of security functional requirements that may be specified for a TOE. A TOE is a set of software, firmware, or hardware possibly accompanied by user and administrator guidance documentation. A TOE may contain resources such as electronic storage media (e.g., main memory, disk space), peripheral devices (e.g., printers), and computing capacity (e.g., CPU time) that can be used for processing and storing information and is the subject of an evaluation. The TOE evaluation is concerned primarily with ensuring that a defined set of Security Functional Requirements (SFR) are enforced over the TOE resources. The SFR defines the rules by which the TOE governs access to and use of its resources, and thus information and services controlled by the TOE.

The SFR may also include one or more Security Function Policies (SFP). Each SFP has a scope of control that defines the subjects, objects, resources, or information, and the operations controlled under it. All SFPs are implemented by the TOE Security Functionality (TSF), whose mechanisms enforce the rules defined in the SFR and provide the necessary capabilities.

Those portions of a TOE that must be relied on for the correct enforcement of the SFR are collectively referred to as the TSF. The TSF consists of all hardware, software, and firmware of a TOE that is either directly or indirectly relied upon for security enforcement.

The Target of Evaluation (TOE)

The Common Criteria is flexible in what to evaluate and is, therefore, not tied to the boundaries of IT products. Instead of the term IT product, the Common Criteria uses the term TOE (Target of Evaluation). While there are cases where a TOE consists of an IT product, this need not be the case. The TOE may be an IT product, a part of an IT product, a set of IT products, a unique technology that may never be made into a product, or a combination of these. The precise relation between the TOE and any IT product is only important in one respect: the evaluation of a TOE containing only a part of an IT product should not be misrepresented as the evaluation of the entire IT product. Examples of a TOE include:

  1. A software application

  2. An operating system

  3. A software application in combination with an operating system

  4. A software application in combination with an operating system and a workstation

  5. An operating system in combination with a workstation

  6. A smart card integrated circuit

  7. The cryptographic coprocessor of a smart card integrated circuit

  8. A local area network including all terminals, servers, network equipment, and software

  9. A database application excluding the remote client software normally associated with that database application [Common Criteria Part 2, 2006]

Table 4.2 lists the primary classes of the security functional requirements. Note that the abbreviations for the security functional classes begin with an “F”, denoting a functional requirement. (Assurance class acronyms begin with an “A”.) This can be helpful when a security architect sees only the acronym, such as FPR: drop the F, and PR makes sense as an acronym for privacy. Likewise, FAU indicates Functional (F) and Audit (AU). The full functional class descriptions are spelled out in the Security Functional Class column29.
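The naming convention above lends itself to a small lookup. The sketch below is a hypothetical helper for decoding class acronyms; the class names follow the CC Part 2 catalog, but the `expand` function itself is illustrative and not part of any standard tooling:

```python
# Illustrative mapping of CC Part 2 functional class acronyms to names.
# Per the convention, every functional class acronym begins with "F".

SFR_CLASSES = {
    "FAU": "Security audit",
    "FCO": "Communication",
    "FCS": "Cryptographic support",
    "FDP": "User data protection",
    "FIA": "Identification and authentication",
    "FMT": "Security management",
    "FPR": "Privacy",
    "FPT": "Protection of the TSF",
    "FRU": "Resource utilisation",
    "FTA": "TOE access",
    "FTP": "Trusted path/channels",
}

def expand(acronym: str) -> str:
    """Expand a functional class acronym such as 'FPR' to its class name."""
    if not acronym.startswith("F"):
        raise ValueError("functional class acronyms begin with 'F'")
    return SFR_CLASSES[acronym]

print(expand("FPR"))  # Privacy
print(expand("FAU"))  # Security audit
```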

Evaluation Assurance Level (EAL) Overview

Table 4.3 represents a summary of the EALs. The columns represent a hierarchically ordered set of EALs, while the rows represent assurance families. Each number in the resulting matrix identifies a specific assurance component where applicable. As outlined in the next section, seven hierarchically ordered evaluation assurance levels are defined in the Common Criteria for the rating of a TOE’s assurance. They are hierarchically ordered inasmuch as each EAL represents more assurance than all lower EALs. The increase in assurance from EAL to EAL is accomplished by the substitution of a hierarchically higher assurance component from the same assurance family (i.e., increasing rigor, scope, or depth) and from the addition of assurance components from other assurance families (i.e., adding new requirements).

These EALs consist of an appropriate combination of assurance components as described in Chapter 7 of the Common Criteria Part 3 30. More precisely, each EAL includes no more than one component of each assurance family, and all assurance dependencies of every component are addressed. While the EALs are defined in the Common Criteria, it is possible to represent other combinations of assurance. Specifically, the notion of “augmentation” allows the addition of assurance components (from assurance families not already included in the EAL) or the substitution of assurance components (with another hierarchically higher assurance component in the same assurance family) to an EAL. Of the assurance constructs defined in the Common Criteria, only EALs may be augmented. The notion of an “EAL minus a constituent assurance component” is not recognized by the standard as a valid claim. Augmentation carries with it the obligation on the part of the claimant to justify the utility and added value of the added assurance component to the EAL. An EAL may also be augmented with extended assurance requirements.

EALs are augmented to show increased assurance capabilities or functionality. The additional functions are added from the next higher EAL and show compliance with parts of that level placing emphasis on certain functions. The augmented function should be listed as part of evaluation so that stakeholders will understand what additional capabilities were tested.
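The augmentation rules can be sketched as a small model: a claim may gain assurance components but never lose them. This is purely illustrative; the class below is hypothetical, and ALC_FLR.2 (flaw remediation) is used only as an example of a commonly claimed augmentation component:

```python
# Sketch: an EAL claim that supports augmentation but not subtraction,
# mirroring the CC rule that only additions to an EAL are recognized.

class EALClaim:
    def __init__(self, level: int):
        if not 1 <= level <= 7:
            raise ValueError("EALs range from 1 to 7")
        self.level = level
        self.augmentations = set()

    def augment(self, component: str) -> None:
        # Augmentation ADDS an assurance component (e.g., "ALC_FLR.2").
        self.augmentations.add(component)

    def drop(self, component: str) -> None:
        # "EAL minus a constituent assurance component" is not a valid claim.
        raise ValueError("the CC does not recognize an EAL minus a component")

    def __str__(self) -> str:
        plus = "+" if self.augmentations else ""
        return f"EAL{self.level}{plus}"

claim = EALClaim(4)
claim.augment("ALC_FLR.2")  # flaw remediation, a common augmentation
print(claim)                # EAL4+
```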

The next section contains a list of the EALs followed by a more detailed description of each EAL. The list of EALs and its short title is as follows:

Table 4.2 - Security Function Requirement Classes

Table 4.3 - Evaluation Assurance Level Summary

Evaluation Assurance Level 1 (EAL1) - functionally tested

Evaluation Assurance Level 2 (EAL2) - structurally tested

Evaluation Assurance Level 3 (EAL3) - methodically tested and checked

Evaluation Assurance Level 4 (EAL4) - methodically designed, tested, and reviewed

Evaluation Assurance Level 5 (EAL5) - semiformally designed and tested

Evaluation Assurance Level 6 (EAL6) - semiformally verified design and tested

Evaluation Assurance Level 7 (EAL7) - formally verified design and tested
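The seven levels and their short titles above lend themselves to a simple lookup table, sketched here for convenience (the `describe` helper is hypothetical):

```python
# The seven EALs and their short titles, taken from the list above.
EAL_TITLES = {
    1: "functionally tested",
    2: "structurally tested",
    3: "methodically tested and checked",
    4: "methodically designed, tested, and reviewed",
    5: "semiformally designed and tested",
    6: "semiformally verified design and tested",
    7: "formally verified design and tested",
}

def describe(level: int) -> str:
    """Render an EAL number with its short title."""
    return f"EAL{level} - {EAL_TITLES[level]}"

print(describe(4))  # EAL4 - methodically designed, tested, and reviewed
```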

Evaluation Assurance Level 1 (EAL1) - Functionally Tested

EAL1 is applicable where some confidence in correct operation is required, but the threats to security are not viewed as serious. It will be of value where independent assurance is required to support the contention that due care has been exercised with respect to the protection of personal or similar information. EAL1 requires only a limited security target. It is sufficient to simply state the SFRs that the TOE must meet, rather than deriving them from threats, Organizational Security Policies (OSPs), and assumptions through security objectives. EAL1 provides an evaluation of the TOE as made available to the customer, including independent testing against a specification and an examination of the guidance documentation provided. The goal is for an EAL1 evaluation to be successfully conducted without assistance from the developer of the TOE, and for minimal investment. An evaluation at this level should provide evidence that the TOE functions in a manner consistent with its documentation.

Evaluation Assurance Level 2 (EAL2) - Structurally Tested

EAL2 requires the cooperation of the developer in terms of the delivery of design information and test results, but should not demand more effort on the part of the developer than is consistent with good commercial practice. As such, it should not require a substantially increased investment of cost or time. EAL2 is, therefore, applicable in those circumstances where developers or users require a low to moderate level of independently assured security in the absence of ready availability of the complete development record. Such a situation may arise when securing legacy systems, or where access to the developer may be limited.

Evaluation Assurance Level 3 (EAL3) - Methodically Tested and Checked

EAL3 permits a conscientious developer to gain maximum assurance from positive security engineering at the design stage without substantial alteration of existing sound development practices. It is applicable in those circumstances where developers or users require a moderate level of independently assured security, and require a thorough investigation of the TOE and its development without substantial reengineering. EAL3 provides assurance by a full security target and an analysis of the SFRs in that ST, using a functional and interface specification, guidance documentation, and an architectural description of the design of the TOE, to understand the security behavior.

The analysis is supported by independent testing of the TSF, evidence of developer testing based on the functional specification and TOE design, selective independent confirmation of the developer test results, and a vulnerability analysis (based on the functional specification, TOE design, architectural design, and guidance evidence provided) demonstrating resistance to penetration attackers with a basic attack potential.

Evaluation Assurance Level 4 (EAL4) - Methodically Designed, Tested, and Reviewed

EAL4 permits a developer to gain maximum assurance from positive security engineering based on good commercial development practices that, though rigorous, do not require substantial specialist knowledge, skills, and other resources. It is the highest level at which it is likely to be economically feasible to retrofit to an existing product line. It is, therefore, applicable in those circumstances where developers or users require a moderate to high level of independently assured security in conventional commodity TOEs and are prepared to incur additional security-specific engineering costs.

EAL4 provides assurance by a full security target and an analysis of the SFRs in that ST, using a functional and complete interface specification, guidance documentation, a description of the basic modular design of the TOE, and a subset of the implementation, to understand the security behavior.

Evaluation Assurance Level 5 (EAL5) - Semiformally Designed and Tested

EAL5 permits a developer to gain maximum assurance from security engineering based on rigorous commercial development practices supported by moderate application of specialist security engineering techniques. Such a TOE will most likely be designed and developed with the intent of achieving EAL5 assurance. It requires additional costs attributable to the EAL5 requirements, relative to rigorous development without the application of specialized techniques.

EAL5 is applicable in circumstances where developers or users require a high level of independently assured security in a planned development and require a rigorous development approach without incurring unreasonable costs attributable to specialist security engineering techniques. It provides assurance by a full security target and an analysis of the SFRs in that ST, using a functional and complete interface specification, guidance documentation, a description of the design of the TOE, and the implementation, to understand the security behavior. A modular TSF design is also required [Common Criteria Part 3, 2012].

Evaluation Assurance Level 6 (EAL6) - Semiformally Verified Design and Tested

EAL6 permits developers to gain high assurance from application of security engineering techniques to a rigorous development environment in order to produce a premium TOE for protecting high-value assets against significant risks. It is, therefore, applicable to the development of security TOEs for application in high-risk situations where the value of the protected assets justifies the additional costs [Common Criteria Part 3, 2012].

EAL6 provides assurance by a full security target and an analysis of the SFRs in that ST, using a functional and complete interface specification, guidance documentation, the design of the TOE, and the implementation, to understand the security behavior. Assurance is additionally gained through a formal model of select TOE security policies and a semiformal presentation of the functional specification and TOE design. A modular and layered TSF design is also required.

The analysis is supported by independent testing of the TSF, evidence of developer testing based on the functional specification, TOE design, selective independent confirmation of the developer test results, and an independent vulnerability analysis demonstrating resistance to penetration attackers with a high attack potential.

EAL6 also provides assurance through the use of a structured development process, development environment controls, and comprehensive TOE configuration management, including complete automation and evidence of secure delivery procedures. This represents a meaningful increase in assurance from EAL5 by requiring more comprehensive analysis, a structured representation of the implementation, more architectural structure (e.g., layering), more comprehensive independent vulnerability analysis, and improved configuration management and development environment controls.

Evaluation Assurance Level 7 (EAL7) - Formally Verified Design and Tested

EAL7 is applicable to the development of security TOEs for application in extremely high-risk situations or where the high value of the assets justifies the higher costs. Practical application of EAL7 is currently limited to TOEs with tightly focused security functionality that is amenable to extensive formal analysis. EAL7 provides assurance by a full security target and an analysis of the SFRs in that ST, using a functional and complete interface specification, guidance documentation, the design of the TOE, and a structured presentation of the implementation, to understand the security behavior. Assurance is additionally gained through a formal model of select TOE security policies and a semiformal presentation of the functional specification and TOE design. A modular, layered, and simple TSF design is also required.

The analysis is supported by independent testing of the TSF, evidence of developer testing based on the functional specification, TOE design and implementation representation, complete independent confirmation of the developer test results, and an independent vulnerability analysis demonstrating resistance to penetration attackers with a high attack potential. EAL7 also provides assurance through the use of a structured development process, development environment controls, and comprehensive TOE configuration management, including complete automation and evidence of secure delivery procedures. This EAL represents a meaningful increase in assurance from EAL6 by requiring more comprehensive analysis using formal representations and formal correspondence, and comprehensive testing.

Common Criteria (CC) Part 3: Assurance Paradigm

The Common Criteria Part 3 begins with a philosophy of the approach to assurance that permits the reader to understand the rationale behind the assurance requirements. This philosophy is that the threats to security and the organizational security policy commitments should be clearly articulated, and that the proposed security measures should be demonstrably sufficient for their intended purpose.

Measures should be adopted that reduce the likelihood of vulnerabilities, the ability to exercise (i.e., intentionally exploit or unintentionally trigger) a vulnerability, and the extent of the damage that could occur from a vulnerability being exploited. Additionally, measures should be taken to facilitate the subsequent identification of vulnerabilities and to eliminate, mitigate, or provide notification that a vulnerability has been exploited or triggered.

Significance of Vulnerabilities

Threat agents will continue to actively seek opportunities to violate security policies for illicit gain, and they may also accidentally trigger security vulnerabilities, causing harm to the organization. Because sensitive information must be processed and trusted products are in short supply, there is significant security risk to IT systems, which is likely to lead to security breaches resulting in significant loss.

IT security breaches come from the intentional exploitation or unintentional triggering of vulnerabilities in the application of IT within business concerns. Steps should be taken to prevent vulnerabilities in IT products. To the extent feasible, vulnerabilities should be:

  1. Eliminated, by taking steps to expose, remove, or neutralize all exercisable vulnerabilities.

  2. Minimized, by taking steps to reduce the potential impact of any residual vulnerability to an acceptable level.

  3. Monitored, by taking steps to ensure that any attempt to exercise a residual vulnerability will be detected so that steps can be taken to limit the damage.

The Causes of Vulnerabilities

Vulnerabilities are attributable to a variety of things, including inadequate requirements definition, defects in hardware or software, or misconfigured equipment security settings. The system developer or customer may not adequately define the security requirements, which can lead to inadequate security countermeasures. Vulnerabilities may be introduced into the system as a result of poor development standards or incorrect design choices. An IT product may possess all the functions and features required of it and still contain vulnerabilities that render it unsuitable or ineffective with respect to security. IT products placed in operation may have been configured according to the correct specification, but vulnerabilities may have been introduced as a result of inadequate controls upon the operation. This is not an exhaustive list of all possible reasons for vulnerabilities, but rather, a general overview of the types of causes that may affect the security of the system.

Common Criteria Assurance

Assurance is the foundation for confidence that an IT product meets its security objectives. It can be derived from a reference to sources, such as unsubstantiated assertions, prior relevant experience, or specific experience. However, the Common Criteria provides assurance through active investigation. Active investigation is an evaluation of the IT product in order to determine its security properties [Common Criteria Part 3, 2012].

Assurance through Evaluation

Evaluation is the traditional way of gaining assurance. It serves as the basis of the Common Criteria approach. Evaluation techniques can include, but are not limited to:

  1. Analysis and checking of processes and procedures

  2. Checking that processes and procedures are being applied

  3. Analysis of the correspondence between TOE design representations

  4. Analysis of the TOE design representation against the requirements

  5. Verification of proofs

  6. Analysis of guidance documents

  7. Analysis of functional tests developed and the results provided

  8. Independent functional testing

  9. Analysis for vulnerabilities, including flaw hypothesis

  10. Penetration testing

The Common Criteria Evaluation Assurance Scale

The Common Criteria philosophy asserts that greater assurance results from the application of greater evaluation effort, and that the goal is to apply the minimum effort required to provide the necessary level of assurance. The increasing level of effort is based on:

Scope - That is, the effort is greater because a larger portion of the IT product is included.

Depth - That is, the effort is greater because it is deployed to a finer level of design and implementation detail.

Rigor - That is, the effort is greater because it is applied in a more structured, formal manner.

The security architect should understand the basic EAL structure and levels as well as where to find evaluated products. Currently, a list of these evaluated products can be found on the National Information Assurance Partnership (NIAP) Web site31. This site contains the following types of information:

  1. List of evaluated products currently on the market, evaluating country, and the EAL level

  2. List of products in the test cycle

  3. List of products no longer on the active products list

  4. List of available protection profiles

  5. The Common Criteria Parts 1–3

  6. Other useful information about Common Criteria EAL and assurance

ISO/IEC 27000 Series

ISO/IEC 27000 is part of a growing family of ISO/IEC ISMS standards32. Within this series, 27000 is the number reserved for the international standard titled: “Information technology - Security techniques - Information security management systems - Overview and vocabulary.” The standard is known informally as “ISO 27000.” The 27000 series of standards is being developed by a subcommittee of the Joint Technical Committee (JTC1) of the International Organization for Standardization and the International Electrotechnical Commission. ISO 27000 provides an overview of the ISO/IEC 27000 Information Security Management Systems (ISMS) family of standards and provides uniformity and consistency in the fundamental terms and definitions (vocabulary) used throughout that family. Information security, like so many technical subjects, continues to develop a complex web of terminology. Relatively few authors take the trouble to define precisely what they mean, an approach that is unacceptable in the standards arena because it potentially leads to confusion and devalues formal assessment and certification. As with ISO 9000 and ISO 14000, the base “000” standard is intended to address this.

Although ISO/IEC 27001:2005 does not specifically address requirements analysis, organizations may require compliance with this document from the standpoint of implementing best security practices, which in turn may be interpreted as a requirement. Meeting ISO standards is generally good for business as they lend credibility to the company’s commitment to quality and excellence. Customers may seek out organizations that meet the ISO standards and may require compliance for their information systems.

ISO/IEC 27001:2005 covers a variety of organizations including commercial enterprises, government agencies, and nonprofit organizations. If compliance with this standard is made mandatory by the contract or statement of work, the security architect will need to evaluate the contents of this code of practice to ensure that they are adequately addressed. ISO/IEC 27001:2005 specifies the requirements for establishing, implementing, operating, monitoring, reviewing, maintaining, and improving a documented Information Security Management System within the context of the organization’s overall business risks. It specifies requirements for the implementation of security controls customized to the needs of individual organizations or parts thereof.

ISO/IEC 27001:2005 is designed to ensure the selection of adequate and proportionate security controls that protect information assets and give confidence to stakeholders. ISO/IEC 27001:2005 is intended to be suitable for:33

Images   Use within organizations to formulate security requirements and objectives.

Images   Use within organizations as a way to ensure that security risks are cost-effectively managed.

Images   Use within organizations to ensure compliance with laws and regulations.

Images   Use within an organization as a process framework for the implementation and management of controls to ensure that the specific security objectives of an organization are met.

Images   The definition of new information security management and governance processes.

Images   The identification and clarification of existing information security management processes.

Images   Use by the management of organizations to determine the status of information security management activities.

Images   Use by the internal and external auditors of organizations to determine the degree of compliance with the policies, directives, and standards adopted by an organization.

Images   Use by organizations to provide relevant information about information security policies, directives, standards, and procedures to trading partners and other organizations with whom they interact for operational or commercial reasons.

Images   The implementation of business-enabling information security.

Images   Use by organizations to provide relevant information about information security to customers.

Software Engineering Institute - Capability Maturity Model (CMMI-DEV) Key Practices Version 1.3
Introducing the Capability Maturity Model

The Capability Maturity Model (CMM) for Development is a framework that describes the key elements that make up a comprehensive, integrated set of guidelines for developing products and services. The CMM describes an evolutionary improvement path from an ad hoc, immature process to a mature, disciplined process. The CMMI-DEV model provides guidance for applying Capability Maturity Model best practices in a development organization. Best practices in the model focus on activities for developing quality products and services to meet the needs of customers and end users. The CMMI-DEV, V1.3 model is a collection of development best practices from government and industry that is generated from the CMMI V1.3 Architecture and Framework. CMMI-DEV is based on the CMMI Model Foundation or CMF (i.e., model components common to all CMMI models and constellations). When followed, these key practices improve the ability of organizations to meet goals for cost, schedule, functionality, and product quality. The model establishes a yardstick against which it is possible to judge, in a repeatable way, the maturity of an organization’s processes and compare it to the state of the practice of the industry [Paulk et al., 1993a]. The Capability Maturity Model may also be used by organizations and the security architect to plan improvements to their processes.

Sources of the Capability Maturity Model (CMM)

The Software Engineering Institute (SEI) developed an initial version of a maturity model and maturity questionnaire at the request of the government and with the assistance of the MITRE Corporation. Throughout the development of the model and the questionnaire, the SEI paid attention to advice from practitioners who are involved in developing and improving software processes. The objectives were to provide a model that:

Images   Is based on actual practices

Images   Reflects the best of the state of the practice

Images   Reflects the needs of individuals performing software process improvement, software process assessments, or software capability evaluations

Images   Is documented

Images   Is publicly available

Additional knowledge and insight into software process maturity has been gained since the earlier versions of the maturity model. This insight has been gained by:

Images   Performing and observing software process assessments and software capability evaluations

Images   Studying non-software organizations

Images   Participating in meetings and workshops with industry and government representatives

Images   Soliciting and analyzing change requests to the model

Images   Soliciting feedback from industry and government reviewers

Generally, CMMs focus on improving processes in an organization. They contain the essential elements of effective processes for one or more disciplines and describe an evolutionary improvement path from ad hoc, immature processes to disciplined, mature processes with improved quality and effectiveness. Combining this general thought process regarding CMMs with the additional knowledge garnered from continuous examination and improvement of prior models, the CMM and its practices have been revised, creating CMM v1.3. Figure 4.2 illustrates the historical evolution of CMMs.

Images

Figure 4.2 - History of CMMs

Structure of the CMMI-DEV V1.334

CMMI® for Development (CMMI-DEV) consists of best practices that address development activities applied to products and services. It addresses practices that cover the product’s lifecycle from conception through delivery and maintenance. The emphasis is on the work necessary to build and maintain the total product.

CMMI-DEV contains 22 process areas. Of those process areas, 16 are core process areas, 1 is a shared process area, and 5 are development-specific process areas.

All CMMI-DEV model practices focus on the activities of the developer organization. Five process areas focus on practices specific to development: addressing requirements development, technical solution, product integration, verification, and validation.

All CMMI models are produced from the CMMI Framework. This framework contains all of the goals and practices that are used to produce CMMI models that belong to CMMI constellations.

All CMMI models contain 16 core process areas. These process areas cover basic concepts that are fundamental to process improvement in any area of interest (i.e., acquisition, development, services).
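The process-area breakdown described above can be captured as a small sketch and sanity-checked. Only the five development-specific areas are named in the text; the core and shared areas are represented here by their counts alone:

```python
# CMMI-DEV process-area breakdown, expressed as data so the counts
# stated in the text (16 core + 1 shared + 5 development-specific = 22)
# can be verified mechanically.

PROCESS_AREA_BREAKDOWN = {
    "core": 16,                    # common to all CMMI constellations
    "shared": 1,                   # shared with another constellation
    "development-specific": 5,     # unique to CMMI-DEV
}

# The five development-specific process areas, as named in the text.
DEVELOPMENT_SPECIFIC = [
    "Requirements Development",
    "Technical Solution",
    "Product Integration",
    "Verification",
    "Validation",
]

assert len(DEVELOPMENT_SPECIFIC) == PROCESS_AREA_BREAKDOWN["development-specific"]
assert sum(PROCESS_AREA_BREAKDOWN.values()) == 22  # total CMMI-DEV process areas
```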

Model components are grouped into three categories - required, expected, and informative.

Required components are essential to achieving process improvement in a given process area. This achievement must be visibly implemented in an organization’s processes. The required components in CMMI are the specific and generic goals. [CMMI-DEV, 2010].

Expected components describe the activities that are important in achieving a required CMMI component. Expected components guide those who implement improvements or perform appraisals. The expected components in CMMI are the specific and generic practices. Before goals can be considered to be satisfied, either their practices as described, or acceptable alternatives to them, must be present in the planned and implemented processes of the organization. [CMMI-DEV, 2010].

Informative components help model users understand CMMI required and expected components. These components can be example boxes, detailed explanations, or other helpful information. Subpractices, notes, references, goal titles, practice titles, sources, example work products, and generic practice elaborations are informative model components. The informative material plays an important role in understanding the model. It is often impossible to adequately describe the behavior required or expected of an organization using only a single goal or practice statement. The model’s informative material provides information necessary to achieve the correct understanding of goals and practices. [CMMI-DEV, 2010]. This structure of the CMMI is illustrated in Figure 4.3.

Images

Figure 4.3 - CMMI Model Components

Developing reliable and usable products and services that are delivered on time and within budget is a difficult endeavor for many organizations. Products that are late, over budget, or that do not work as expected also cause problems for the organization’s customers. As projects continue to increase in size and importance, these problems are amplified. They can be overcome through a focused and sustained effort at building a process infrastructure of effective engineering and management practices.

To build this process infrastructure, organizations producing products and services, such as software, need ways to appraise their ability to execute their process successfully. They also need guidance to improve their process capability. Customers, such as the United States’ Department of Defense (DoD), need ways to effectively evaluate an organization’s capability to perform successfully on engineering contracts. Prime contractors need ways to evaluate the capability of potential subcontractors.

To help organizations and customers such as the DoD and prime contractors, the Software Engineering Institute (SEI) developed the CMMI for Development, which delineates the characteristics of a mature, capable software process. The progression from an immature, unrepeatable software process to a mature, well-managed software process also is described in terms of maturity levels in the model. The CMMI for Development may be put to the following uses:

Images   Software process improvement, in which an organization plans, develops, and implements changes to its software process.

Images   Software process assessments, in which a trained team of software professionals determines the state of an organization’s current software process, determines the high-priority software process-related issues facing the organization, and obtains organizational support for software process improvement.

Images   Software capability evaluations, in which a trained team of professionals identifies contractors who are qualified to perform the software work or monitor the state of the software process used in an existing software effort.

The Software Engineering Institute’s CMMI for Development V1.3 document describes the key practices that correspond to each maturity level in the Capability Maturity Model. It provides detailed elaboration of what is meant by maturity at each level of the Capability Maturity Model and a guide that can be used for software process improvement, software process assessments, and software capability evaluations.

The key practices of the Capability Maturity Model are expressed in terms of what is expected to be the normal practices of organizations that work on large government contracts. In any context in which the CMMI-DEV model is applied, a reasonable interpretation of how the practices would be applied should be used.

Specifically, Chapter 4 of the CMMI for Development V1.3 document, Relationships Among Process Areas, provides insight into the meaning and interactions among the CMMI-DEV process areas, while Chapter 5, Using CMMI Models, describes paths to adoption and the use of CMMI for process improvement and benchmarking of practices in a development organization.

CMMI-DEV does not specify that a project or organization must follow a particular process flow or that a certain number of products be developed per day or specific performance targets be achieved. The model does specify that a project or organization should have processes that address development related practices. To determine whether these processes are in place, a project or organization maps its processes to the process areas in this model.

Images

Figure 4.4 - Structure of the Continuous and Staged Representations

The mapping of processes to process areas enables the organization to track its progress against the CMMI-DEV model as it updates or creates processes.
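As an illustrative sketch only, the mapping exercise might look like the following. The organizational process names here are hypothetical; the process-area names come from the model:

```python
# Hypothetical mapping of an organization's documented processes to
# CMMI-DEV process areas, used to track coverage and expose gaps.

org_process_map = {
    "Change Control Board procedure": "Configuration Management",
    "Release checklist": "Product Integration",
    "Peer review guideline": "Verification",
}

# Process areas the organization has targeted for improvement.
target_process_areas = {
    "Requirements Development",
    "Configuration Management",
    "Product Integration",
    "Verification",
    "Validation",
}

covered = set(org_process_map.values())
gaps = target_process_areas - covered

# Process areas with no mapped organizational process yet:
print(sorted(gaps))  # ['Requirements Development', 'Validation']
```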

Levels are used in CMMI-DEV to describe an evolutionary path recommended for an organization that wants to improve the processes it uses to develop products or services. Levels can also be the outcome of the rating activity in appraisals. Appraisals can apply to entire organizations or to smaller groups such as a group of projects or a division. CMMI supports two improvement paths using levels. One path enables organizations to incrementally improve processes corresponding to an individual process area (or group of process areas) selected by the organization. The other path enables organizations to improve a set of related processes by incrementally addressing successive sets of process areas. These two improvement paths are associated with the two types of levels: capability levels and maturity levels.

These levels correspond to two approaches to process improvement called “representations.” The two representations are called “continuous” and “staged.” Using the continuous representation enables an organization to achieve “capability levels.” Using the staged representation enables an organization to achieve “maturity levels.” To reach a particular level, an organization must satisfy all of the goals of the process area or set of process areas that are targeted for improvement, regardless of whether it is a capability or a maturity level. Figure 4.4 represents the structures of the Continuous and Staged Representations.

Capability levels apply to an organization’s process improvement achievement in individual process areas. These levels are a means for incrementally improving the processes corresponding to a given process area. The four capability levels are numbered 0 through 3.

Maturity levels apply to an organization’s process improvement achievement across multiple process areas. These levels are a means of improving the processes corresponding to a given set of process areas (i.e., maturity level). The five maturity levels are numbered 1 through 5. Table 4.4 represents a comparison between the four capability levels and the five maturity levels.

Images

Table 4.4 - Comparison of Capability and Maturity Levels
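For readers without the table image, the comparison in Table 4.4 can be restated as data, using the level names given in the discussion that follows:

```python
# Capability levels (continuous representation) run 0-3;
# maturity levels (staged representation) run 1-5.

CAPABILITY_LEVELS = {0: "Incomplete", 1: "Performed", 2: "Managed", 3: "Defined"}
MATURITY_LEVELS = {1: "Initial", 2: "Managed", 3: "Defined",
                   4: "Quantitatively Managed", 5: "Optimizing"}

# Levels 2 and 3 carry the same names in both representations;
# level 1 differs ("Performed" vs. "Initial"), and levels 0, 4, and 5
# exist in only one representation each.
shared = {name for name in CAPABILITY_LEVELS.values()
          if name in MATURITY_LEVELS.values()}
print(sorted(shared))  # ['Defined', 'Managed']
```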

Capability Levels 0 – 3 are discussed below.

Capability Level 0: Incomplete

An incomplete process is a process that either is not performed or is only partially performed. One or more of the specific goals of the process area are not satisfied, and no generic goals exist for this level, since there is no reason to institutionalize a partially performed process. [CMMI-DEV, 2010].

Capability Level 1: Performed

A capability level 1 process is characterized as a performed process. A performed process is a process that accomplishes the needed work to produce work products; the specific goals of the process area are satisfied.

Although capability level 1 results in important improvements, those improvements can be lost over time if they are not institutionalized. The application of institutionalization (the CMMI generic practices at capability levels 2 and 3) helps to ensure that improvements are maintained. [CMMI-DEV, 2010].

Capability Level 2: Managed

A capability level 2 process is characterized as a managed process. A managed process is a performed process that is planned and executed in accordance with policy; employs skilled people having adequate resources to produce controlled outputs; involves relevant stakeholders; is monitored, controlled, and reviewed; and is evaluated for adherence to its process description.

The process discipline reflected by capability level 2 helps to ensure that existing practices are retained during times of stress. [CMMI-DEV, 2010].

Capability Level 3: Defined

A capability level 3 process is characterized as a defined process. A defined process is a managed process that is tailored from the organization’s set of standard processes according to the organization’s tailoring guidelines; has a maintained process description; and contributes process related experiences to the organizational process assets. [CMMI-DEV, 2010].

A critical distinction between capability levels 2 and 3 is the scope of standards, process descriptions, and procedures. At capability level 2, the standards, process descriptions, and procedures can be quite different in each specific instance of the process (e.g., on a particular project). At capability level 3, the standards, process descriptions, and procedures for a project are tailored from the organization’s set of standard processes to suit a particular project or organizational unit and therefore are more consistent, except for the differences allowed by the tailoring guidelines. [CMMI-DEV, 2010].

Another critical distinction is that at capability level 3 processes are typically described more rigorously than at capability level 2. A defined process clearly states the purpose, inputs, entry criteria, activities, roles, measures, verification steps, outputs, and exit criteria. At capability level 3, processes are managed more proactively using an understanding of the interrelationships of the process activities and detailed measures of the process and its work products. [CMMI-DEV, 2010]. Maturity Levels 1 – 5 are discussed below.

Maturity Level 1: Initial

At maturity level 1, processes are usually ad hoc and chaotic. In spite of this chaos, maturity level 1 organizations often produce products and services that work, but they frequently exceed the budget and schedule documented in their plans.

Maturity level 1 organizations are characterized by a tendency to overcommit, abandon their processes in a time of crisis, and be unable to repeat their successes. [CMMI-DEV, 2010].

Maturity Level 2: Managed

At maturity level 2, the projects have ensured that processes are planned and executed in accordance with policy; the projects employ skilled people who have adequate resources to produce controlled outputs; involve relevant stakeholders; are monitored, controlled, and reviewed; and are evaluated for adherence to their process descriptions. The process discipline reflected by maturity level 2 helps to ensure that existing practices are retained during times of stress. When these practices are in place, projects are performed and managed according to their documented plans.

At maturity level 2, the status of the work products is visible to management at defined points (e.g., at major milestones, at the completion of major tasks). Commitments are established among relevant stakeholders and are revised as needed. Work products are appropriately controlled. The work products and services satisfy their specified process descriptions, standards, and procedures. [CMMI-DEV, 2010].

Maturity Level 3: Defined

At maturity level 3, processes are well characterized and understood, and are described in standards, procedures, tools, and methods. The organization’s set of standard processes, which is the basis for maturity level 3, is established and improved over time. These standard processes are used to establish consistency across the organization.

A critical distinction between maturity levels 2 and 3 is the scope of standards, process descriptions, and procedures. At maturity level 2, the standards, process descriptions, and procedures can be quite different in each specific instance of the process (e.g., on a particular project). At maturity level 3, the standards, process descriptions, and procedures for a project are tailored from the organization’s set of standard processes to suit a particular project or organizational unit and therefore are more consistent except for the differences allowed by the tailoring guidelines.

Another critical distinction is that at maturity level 3, processes are typically described more rigorously than at maturity level 2. A defined process clearly states the purpose, inputs, entry criteria, activities, roles, measures, verification steps, outputs, and exit criteria.

At maturity level 3, the organization further improves its processes that are related to the maturity level 2 process areas. [CMMI-DEV, 2010].

Maturity Level 4: Quantitatively Managed

At maturity level 4, the organization and projects establish quantitative objectives for quality and process performance and use them as criteria in managing projects. Quantitative objectives are based on the needs of the customer, end users, organization, and process implementers. Quality and process performance is understood in statistical terms and is managed throughout the life of projects. For selected subprocesses, specific measures of process performance are collected and statistically analyzed. When selecting subprocesses for analyses, it is critical to understand the relationships between different subprocesses and their impact on achieving the objectives for quality and process performance.

A critical distinction between maturity levels 3 and 4 is the predictability of process performance. At maturity level 4, the performance of projects and selected subprocesses is controlled using statistical and other quantitative techniques, and predictions are based, in part, on a statistical analysis of fine-grained process data. [CMMI-DEV, 2010].

Maturity Level 5: Optimizing

At maturity level 5, an organization continually improves its processes based on a quantitative understanding of its business objectives and performance needs. The organization uses a quantitative approach to understand the variation inherent in the process and the causes of process outcomes.

Maturity level 5 focuses on continually improving process performance through incremental and innovative process and technological improvements. The organization’s quality and process performance objectives are established, continually revised to reflect changing business objectives and organizational performance, and used as criteria in managing process improvement. The effects of deployed process improvements are measured using statistical and other quantitative techniques and compared to quality and process performance objectives. The project’s defined processes, the organization’s set of standard processes, and supporting technology are targets of measurable improvement activities.

A critical distinction between maturity levels 4 and 5 is the focus on managing and improving organizational performance. At maturity level 4, the organization and projects focus on understanding and controlling performance at the subprocess level and using the results to manage projects. At maturity level 5, the organization is concerned with overall organizational performance using data collected from multiple projects. Analysis of the data identifies shortfalls or gaps in performance. These gaps are used to drive organizational process improvement that generates measureable improvement in performance. [CMMI-DEV, 2010].

The Capability Maturity Model must be appropriately interpreted when the business environment of the organization differs significantly from that of a large contracting organization. The role of professional judgment in making informed use of the Capability Maturity Model must be recognized. The Software Engineering Institute’s CMMI for Development should be used by

Images   Organizations wanting to understand and improve their capability to develop software effectively

Images   Professionals wanting to understand the key practices that are part of effective processes for developing or maintaining software

Images   Anyone wanting to identify the key practices that are needed to achieve the next maturity level in the CMM

Images   Acquisition organizations or prime contractors wanting to identify the risks of having a particular organization perform the work of a contract

Images   Instructors preparing teams to perform software process assessments or software capability evaluations

Taking a sample project, such as a software engineering solution, and examining it through the lens of the model’s Maturity Levels as listed above makes the specific activities and requirements at each level clear. The examination begins at Maturity Level 2, because a Level 1 organization operates in a chaotic state in which, at best, luck is often the deciding factor between a project’s successful deployment and its failure. Maturity Level 1 organizations often deliver projects late and over budget, and it is almost impossible for them to repeat a successful deployment using the same methods and processes, as those methods and processes are not documented and standardized.

Beginning at Maturity Level 2, processes must be repeatable in the areas of requirements management, project planning, project tracking and oversight, subcontract management, quality assurance, and configuration management. Requirements Management is established to foster a common understanding between the customer and the software project of the customer’s requirements. This involves establishing and maintaining an agreement with the customer on the requirements for the software project. The agreement covers both the technical and nontechnical requirements. It forms the basis for estimating, planning, performing, and tracking the software project’s activities throughout the software life cycle [Paulk93a].

The purpose of Software Project Planning is to establish reasonable plans for performing the software engineering and for managing the software project. Software Project Planning involves developing estimates for the work to be performed, establishing the necessary commitments, and defining the plan to perform the work.

The software planning begins with a statement of the work to be performed and other constraints and goals that define the software project. The software planning process includes steps to estimate the size of the software work products and the resources needed, produce a schedule, identify and assess software risks, and negotiate commitments. This plan provides the basis for performing and managing the software project’s activities and addresses the commitments to the software project’s customer according to the resources, constraints, and capabilities of the software project [Paulk93a].

Software Project Tracking and Oversight provides visibility into actual progress so that management can take effective actions when the software project’s performance deviates significantly from the plans. Project Tracking and Oversight involves tracking and reviewing the software accomplishments and results against documented estimates, commitments, and plans, and adjusting these plans based on the actual accomplishments and results.

A documented plan for the software project is used as the basis for tracking the software activities, communicating status, and revising plans. These activities are monitored by the management. Progress is determined by comparing the actual software size, effort, cost, and schedule to the plan when selected work products are completed and at selected milestones. Other Maturity Level 2 activities can include:

Images   Software Subcontract Management, which involves selecting qualified software subcontractors and managing them effectively; their activities are monitored and managed as if the work were being done in-house.

Images   Software Quality Assurance, which involves reviewing and auditing the software products and activities to verify that they comply with the applicable procedures and standards as well as providing the project managers with the results of these reviews and audits.

Images   Software Configuration Management, which involves identifying the configuration of the software at given points in time, systematically controlling changes to the configuration, and maintaining the integrity and traceability of the configuration throughout the software life cycle.

A software baseline library is established containing the software baselines as they are developed. Changes to baselines and the release of software products built from the software baseline library are systematically controlled via the change control and configuration auditing functions of software configuration management. The practice of performing the software configuration management function identifies specific configuration items/units that are contained in the key process areas.
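The change-control discipline described above can be sketched as a minimal, hypothetical example. The class and method names are illustrative only, not part of any CMM artifact; the point is that baselines change only through an explicit, recorded approval step, preserving traceability:

```python
# Minimal sketch of a baseline library with systematic change control:
# every establishment or change of a baseline is recorded in an audit
# log, and changes require an explicit approval before being admitted.

class BaselineLibrary:
    def __init__(self):
        self._baselines = {}   # baseline name -> current version number
        self._audit_log = []   # traceability record of all actions

    def establish(self, name):
        """Place a new configuration item under baseline control."""
        self._baselines[name] = 1
        self._audit_log.append(("establish", name, 1))

    def change(self, name, approved_by):
        """Admit a change to an existing baseline, only with approval."""
        if not approved_by:
            raise PermissionError("changes require change-control approval")
        self._baselines[name] += 1
        self._audit_log.append(("change", name, self._baselines[name]))
        return self._baselines[name]

lib = BaselineLibrary()
lib.establish("build-config")
lib.change("build-config", approved_by="CCB")   # Change Control Board sign-off
print(lib._baselines["build-config"])  # 2
```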

At Maturity Level 3, the process must be repeatable in all the Level 2 task areas and have defined processes for organizational process focus, process definition, training, integrated software management, intergroup coordination, and peer reviews. (Most medium to large organizations reach Level 3 with relative ease once they decide that CMM is a value worth the investment.)

Process definition involves developing and maintaining the organization’s standard software process, along with related process assets, such as descriptions of software life cycles, process tailoring guidelines and criteria, the organization’s software process database, and a library of software process-related documentation [Paulk93a].

These assets may be collected in many ways. For example, the descriptions of the software life cycles may be an integral part of the organization’s standard software process or parts of the library of software-process-related documentation that may be stored in the organization’s software process database. The organization’s software process assets are available for use in developing, implementing, and maintaining the projects’ defined software processes.

The training program is a key process area for developing the skills and knowledge of individuals so that they can perform their roles more effectively. Building training programs involves first identifying the training needed by the organization, projects, and individuals, and then developing or procuring training to address the identified needs.

The purpose of integrated software management is to integrate the software engineering and management activities into a coherent, defined software process that is tailored from the organization’s standard software process and related process assets, which are described in the organization process definition. The project’s defined software process is tailored from the organization’s standard software process to address the specific characteristics of the project. The software development plan is based on the project’s defined software process and describes how the activities of the project’s defined software process will be implemented and managed. The management of the software project’s size, effort, cost, schedule, staffing, and other resources is tied to the tasks of the project’s defined software process.

Because the projects’ defined software processes are all tailored from the organization’s standard software process, the software projects can share process data and lessons learned. The basic practices for estimating, planning, and tracking a software project are described in the Software Project Planning and Software Project Tracking and Oversight key process areas. They focus on recognizing problems when they occur and adjusting the plans or performance to address the problems. The practices of this key process area build on, and are in addition to, the practices of those two key process areas. The emphasis of Integrated Software Management shifts to anticipating problems and acting to prevent or minimize the effects of these problems.

Software Product Engineering should consistently execute a well-defined engineering process that integrates all the software engineering activities to produce correct, consistent software products effectively and efficiently. Software Product Engineering involves performing the engineering tasks to build and maintain the software using the project-defined software processes, methods, and tools. The software engineering tasks include:

  1. Analyzing the system requirements allocated to software

  2. Developing the software requirements

  3. Developing the software architecture

  4. Designing the software

  5. Implementing the software in the code

  6. Integrating the software components

  7. Testing the software to verify that it satisfies the specified requirements

Documentation needed to perform the software engineering tasks includes the software requirements document, the software design document, the test plan, and the test procedures. These documents are developed and reviewed to ensure that each task addresses the results of predecessor tasks and that the results produced are appropriate for the subsequent tasks [Paulk93a].

Intergroup Coordination

Intergroup coordination establishes a means for the software engineering group to participate actively with the other engineering groups, so that the project is better able to satisfy the customer’s needs effectively and efficiently. It involves the software engineering group’s participation with other project engineering groups to address system-level requirements, objectives, and issues. Representatives of the project’s engineering groups participate in establishing the system-level requirements, objectives, and plans by working with the customer and end users, as appropriate. These requirements, objectives, and plans become the basis for all engineering activities.

The technical working interfaces and interactions between groups are planned and managed to ensure the quality and integrity of the entire system. Technical reviews and interchanges are regularly conducted with representatives of the project’s engineering groups to ensure that all engineering groups are aware of the status and plans of all the groups, and that system and intergroup issues receive appropriate attention.

Peer Reviews

Peer reviews are designed to remove defects from the software work products early and efficiently. An important corollary effect is to develop a better understanding of the software work products and of defects that might be prevented.

The specific products that will undergo a peer review are identified in the project’s defined software process and scheduled as part of the software project planning activities, as described in the Integrated Software Management key process area. This key process area covers the practices for performing peer reviews. The practices identifying the specific software work products that undergo peer review are contained in the key process areas that describe the development and maintenance of each software work product.

Maturity Level 4 processes must be managed. This level includes all the attributes of Levels 2 and 3 and adds quantitative process management and software quality management.

Quantitative Process Management involves establishing goals for the performance of the project’s defined software process, which is described in the Integrated Software Management key process area; taking measurements of the process performance; analyzing these measurements; and making adjustments to maintain process performance within acceptable limits. When the process performance is stabilized within acceptable limits, the project’s defined software process, the associated measurements, and the acceptable limits for the measurements are established as a baseline and used to control process performance quantitatively.

The organization collects process performance data from the software projects and uses these data to characterize the process capability (i.e., the process performance a new project can expect to attain) of the organization’s standard software process, which is described in the Organization Process Definition key process area. Process capability describes the range of expected results from following a software process (i.e., the most likely outcomes that are expected from the next software project the organization undertakes). This process capability data is, in turn, used by the software projects to establish and revise their process performance goals and to analyze the performance of the projects’ defined software processes.
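As a rough illustration of quantitative process management, the sketch below establishes a performance baseline and 3-sigma acceptable limits from historical measurements and checks a new measurement against them. The sample data and the 3-sigma choice are assumptions for this example, not values prescribed by the CMM:

```python
# Hypothetical sketch of quantitative process management: past process
# performance measurements (e.g., defects found per KLOC in peer reviews)
# establish a baseline and acceptable limits; a new project's measurement
# is then checked against those limits.

import statistics

historical = [4.1, 3.8, 4.5, 4.0, 3.9, 4.3, 4.2]  # assumed sample data

mean = statistics.mean(historical)
sigma = statistics.stdev(historical)
lower, upper = mean - 3 * sigma, mean + 3 * sigma  # common 3-sigma limits

def within_limits(measurement: float) -> bool:
    # A measurement outside the baseline limits triggers adjustment of
    # the process or revision of the performance goals.
    return lower <= measurement <= upper

print(f"baseline: {mean:.2f}, limits: [{lower:.2f}, {upper:.2f}]")
print(within_limits(4.4))   # inside the limits
print(within_limits(9.0))   # well outside the limits: investigate
```

When performance stabilizes within the limits, the baseline and limits themselves become the quantitative control described above.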

Software Quality Management involves defining quality goals for the software products, establishing plans to achieve these goals, and monitoring and adjusting the software plans, software work products, activities, and quality goals to satisfy the needs and desires of the customer and end user for high-quality products.

This practice builds on the Integrated Software Management and Software Product Engineering key process areas, which establish and implement the project’s defined software process, and the Quantitative Process Management key process area. It establishes a quantitative understanding of the ability of the project’s defined software process to achieve the desired results. The goals are to establish software products based on the needs of the organization, the customer, and the end users. They are achieved by developing strategies and plans that address these goals.

Maturity Level 5 processes must optimize all of the Level 2 through 4 attributes as well as identify the causes of defects and prevent them from recurring. The purpose of Technology Change Management is to identify new technologies (i.e., tools, methods, and processes) and transition them into the organization in an orderly manner. It involves identifying, selecting, and evaluating new technologies and incorporating effective ones into the organization. The objective is to improve software quality, increase productivity, and decrease the cycle time for product development.

By maintaining an awareness of software-related technology innovations and systematically evaluating and experimenting with them, the organization selects appropriate technologies to improve the quality of its software and the productivity of its software activities. With appropriate sponsorship by the organization’s management, the selected technologies are incorporated into the organization’s standard software process and current projects, as appropriate, using pilot programs to assess new technologies. Changes to the organization’s standard software process (as described in the Organization Process Definition key process area) and to the projects’ defined software processes (as described in the Integrated Software Management key process area) that result from these technology changes are handled as described in the Process Change Management key process area [Paulk93a].

Process Change Management involves defining process improvement goals and, with senior management sponsorship, proactively and systematically identifying, evaluating, and implementing improvements to the organization’s standard software process and the projects’ defined software processes on a continuous basis. Training and incentive programs are established to enable and encourage everyone in the organization to participate in process improvement activities. Improvement opportunities are identified and evaluated for potential payback to the organization. Pilot efforts are performed to assess process changes before they are incorporated into normal practice.

The review of the Maturity Levels of the CMMI for Development helps to put into perspective the necessary actions and structures that the security architect will need to contemplate and build as the enterprise strives to mature over time. In addition to the CMMI for Development, another of the SEI’s models, the CMMI for Services V1.3 will also be of interest to the security architect in this area, and should be investigated as part of a thorough planning solution to build out the enterprise architectures necessary to maintain and sustain directed growth and maturity over time, with an emphasis on secure, documented processes that are consistent and reproducible35.

ISO 749836

The purpose of this reference model of Open Systems Interconnection is to provide a common basis for the coordination of standards development for the purpose of systems interconnection, while allowing existing standards to be placed into perspective within the overall reference model. The term Open Systems Interconnection (OSI) qualifies standards for the exchange of information among systems that are “open” to one another for this purpose by virtue of their mutual use of the applicable standards.

This ISO standard does not specifically address requirements analysis, but the security architect should be familiar with its content when designing information systems. The fact that a system is open does not imply any particular systems implementation, technology, or means of interconnection, but refers to the mutual recognition and support of the applicable standards.

It is also the purpose of this reference model to identify areas for developing or improving standards, and to provide a common reference for maintaining consistency of all related standards. It is not the intent of this reference model to serve as an implementation specification. Nor is it a basis for appraising the conformance of actual implementations, or to provide a sufficient level of detail to precisely define the services and protocols of the interconnection architecture. Rather, this reference model provides a conceptual and functional framework that allows international teams of experts to work productively and independently on the development of standards for each layer of the OSI reference model.

The reference model has sufficient flexibility to accommodate advances in technology and expansion in user demands. This flexibility is also intended to allow the phased transition from existing implementations to OSI standards.

While the scope of the general architectural principles required for OSI is very broad, the reference model is primarily concerned with systems comprising terminals, computers, and associated devices and the means for transferring information between such devices. Other aspects of OSI requiring attention are described briefly. The description of the Basic Reference Model of OSI is developed in the following stages:

Images   Clause 4 - establishes the reasons for Open Systems Interconnection, defines what is being connected, the scope of the interconnection, and describes the modeling principles used in OSI.

Images   Clause 5 - describes the general nature of the architecture of the reference model; namely, that it is layered, what layering means, and the principles used to describe layers.

Images   Clause 6 - names and introduces the specific layers of the architecture.

Images   Clause 7 - provides the description of the specific layers.

Images   Clause 8 - provides the description of management aspects of OSI.

Images   Clause 9 - specifies compliance and consistency with the OSI reference model.

An indication of how the layers were chosen is given in Annex A to the Basic Reference Model. Additional aspects of this reference model beyond the basic aspects are described in several parts. The first part describes the Basic Reference Model. The second part describes the architecture for OSI Security. The third part describes OSI Naming and Addressing. The fourth describes OSI System Management.

The Basic Reference Model serves as a framework for the definition of services and protocols that fit within the boundaries established by the reference model. In those few cases where a feature is explicitly marked (optional) in the Basic Reference Model, it should remain optional in the corresponding service or protocol (even if at a given instant the two cases of the option are not yet documented).

The reference model does not specify services or protocols for OSI. It is neither implementation specific for systems nor a basis for appraising the conformance of implementations. For standards that meet the OSI requirements, a small number of practical subsets are defined from optional functions to facilitate implementation and compatibility.

Concepts of a Layered Architecture

Clause 5 sets forth the architectural concepts that are applied in the development of the reference model of OSI. First, the concept of a layered architecture (with layers, entities, service access points, protocols, connections, etc.) is described. Second, identifiers are introduced for entities, service access points, and connections. Third, service access points and data units are described. Fourth, elements of layer operation are described, including connections, transmission of data, and error functions. Then, routing aspects are introduced and, finally, management aspects are discussed.

The concepts described in clause 5 are those required to describe the reference model of Open Systems Interconnection. However, not all of the concepts described are employed in each layer of the reference model. There are four basic elements to the reference model:

  1. Open systems

  2. The application entities that exist within the OSI environment

  3. The associations that join the application entities and permit them to exchange information

  4. The physical media for OSI

Clause 6 states that when referring to these layers by name, the (N)-, (N + 1)-, and (N − 1)-prefixes are replaced by the names of the layers, for example, transport protocol, session entity, and network service.
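The layering concept behind this naming convention can be illustrated with a toy encapsulation model, in which each (N)-layer provides a service to the (N + 1)-layer above it by wrapping that layer's data unit with its own protocol control information. ISO 7498 defines no such code; this is purely illustrative:

```python
# A toy model of layered encapsulation: data passed down the stack gains
# a header at each layer; the receiving stack strips headers in reverse.

layers = ["application", "presentation", "session",
          "transport", "network", "data link", "physical"]

def encapsulate(payload: str) -> str:
    # Working downward from the (N+1)-layer to the (N)-layer, each layer
    # prefixes its own header, so the physical layer's header is outermost.
    for layer in layers:
        payload = f"[{layer}]{payload}"
    return payload

def decapsulate(frame: str) -> str:
    # The peer stack removes headers outermost-first.
    for layer in reversed(layers):
        frame = frame.removeprefix(f"[{layer}]")
    return frame

frame = encapsulate("hello")
print(frame)               # physical-layer header is outermost
print(decapsulate(frame))  # recovers the original payload
```

Each layer here only touches its own header, which mirrors the principle that a layer offers services to the layer above without exposing how it implements them.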

Payment Card Industry Data Security Standard (PCI-DSS)

Payment Card Industry Data Security Standard (PCI-DSS) was developed by the major credit card companies as a guideline to help organizations that process card payments prevent credit card fraud, cracking, and various other security vulnerabilities and threats. A company processing, storing, or transmitting payment card data must be PCI-DSS compliant or risk losing its ability to process credit card payments, as well as being audited or fined. Merchants and payment card service providers must validate their compliance periodically. This validation is conducted by auditors known as PCI-DSS Qualified Security Assessors (QSAs). Although individuals can attain QSA status, compliance can only be signed off by a QSA acting on behalf of a PCI Council-approved consultancy. Smaller companies, processing fewer than about 80,000 transactions a year, are allowed to perform a self-assessment questionnaire instead. The current version of the standard (2.0) specifies 12 requirements for compliance, organized into six logically related groups called “control objectives.”

The control objectives and their requirements are the following:

Build and Maintain a Secure Network

Requirement 1: Install and maintain a firewall configuration to protect cardholder data.

Requirement 2: Do not use vendor-supplied defaults for system passwords and other security parameters.

Protect Cardholder Data

Requirement 3: Protect stored cardholder data.

Requirement 4: Encrypt transmission of cardholder data across open, public networks.

Maintain a Vulnerability Management Program

Requirement 5: Use and regularly update antivirus software or programs.

Requirement 6: Develop and maintain secure systems and applications.

Implement Strong Access Control Measures

Requirement 7: Restrict access to cardholder data by business need to know.

Requirement 8: Assign a unique ID to each person with computer access.

Requirement 9: Restrict physical access to cardholder data.

Regularly Monitor and Test Networks

Requirement 10: Track and monitor all access to network resources and cardholder data.

Requirement 11: Regularly test security systems and processes.

Maintain an Information Security Policy

Requirement 12: Maintain a policy that addresses information security for all personnel.
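As a sketch of how the six control objectives group the 12 requirements, the following hypothetical checklist structure reports which objectives remain unmet. The function name and data layout are invented for illustration and are not a PCI-SSC artifact:

```python
# The 12 PCI-DSS requirements grouped under their six control objectives,
# structured as a simple self-assessment checklist.

CONTROL_OBJECTIVES = {
    "Build and Maintain a Secure Network": [1, 2],
    "Protect Cardholder Data": [3, 4],
    "Maintain a Vulnerability Management Program": [5, 6],
    "Implement Strong Access Control Measures": [7, 8, 9],
    "Regularly Monitor and Test Networks": [10, 11],
    "Maintain an Information Security Policy": [12],
}

def unmet_objectives(met_requirements: set) -> list:
    # An objective is satisfied only when all of its requirements are met.
    return [obj for obj, reqs in CONTROL_OBJECTIVES.items()
            if not set(reqs) <= met_requirements]

# Example: requirements 1-11 met, but no written security policy yet.
print(unmet_objectives(set(range(1, 12))))
```

A structure like this makes the all-or-nothing nature of each objective explicit: meeting 11 of 12 requirements still leaves an objective, and therefore compliance, unsatisfied.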

PCI-DSS originally began as five different programs: Visa Information Security Program, MasterCard Site Data Protection, American Express Data Security Operating Policy, Discover Information and Compliance, and the JCB Data Security Program. Each company’s intentions were roughly the same: to create an additional level of protection for customers by ensuring that merchants meet minimum levels of security when they store, process, and transmit cardholder data. The Payment Card Industry Security Standards Council (PCI-SSC) was formed and, on December 15, 2004, these companies aligned their individual policies and released the PCI-DSS.

In September 2006, the PCI standard was updated to version 1.1 to provide clarification and minor revisions to version 1.0. PCI is one of multiple data security standards that have emerged over the past decade: Basel II, Gramm–Leach–Bliley Act (GLBA), Health Insurance Portability and Accountability Act (HIPAA), Sarbanes–Oxley Act of 2002, and California Senate Bill 1386. Version 1.2 was released in October 2008. Standards derived from the PCI-DSS include Payment Application Best Practices (PABP), which was renamed Payment Application Data Security Standards (PA-DSS). The current version of the PCI standard, Version 2, was released in October of 201037.

Architectural Solutions

The security architect should be familiar with current architectures such as Service-Oriented Architecture (SOA), client-server architecture, distributed and centralized architectures, and database architectures. The functional architecture describes the types of data that need to be processed and transmitted, the bandwidth needed, and whether the topology is hub-and-spoke, Ethernet, star, FDDI, wireless, and so on. The security architect will need to develop a security architecture that complements and supports the functional architecture. The architecture must provide security mechanisms that implement the appropriate levels of security to ensure the confidentiality, integrity, availability, and accountability of the system.

The security architecture hardware and software may require an evaluation under the Common Criteria; a discussion of this type of evaluation is included in the next section. Security architectures include hardware, software, people, processes, and the environment as part of the overall information systems and security.

Best security practices should include an architecture that provides defense in depth where layers of technology are designed and implemented to provide data protection. These layers include people, technology, and operations (including processes and procedures). Defense in depth includes:

Images   Protect - preventative controls and mechanisms

Images   Detect - identify attacks, expect attacks

Images   React - respond to attacks, recover

Three primary elements of defense in depth include the following:

Images   People - They can defeat the most complex security at times due to a lack of knowledge of the policies. By nature, humans would rather trust than distrust others, but when excessive regulations are put in place, it looks as if there is a lack of trust. People are considered part of the defense-in-depth strategy because they are users of the system and must be able to use it safely and securely. People must be aware of policies, procedures, and safe security practices. Training the users in these aspects of the system can prevent inadvertent compromise of data and potential harm to the operational systems. Security awareness training can help prevent users from attempting to circumvent security on the system for convenience. Users should be part of the systems design and development team to give input and feedback on decisions that affect their access, operation, and use.

Images

Figure 4.5 - Components of a defense-in-depth solution

Images   Technology - Evaluation of products, Information Assurance (IA) architecture and standards, validation by a reputable third party, as well as configuration standards and guidance are all elements of implementing IT security technology. The security architect must keep abreast of current technology and its capabilities to ensure the right security services and protections are included in the design. Within the defense-in-depth (DiD) technology framework, layered protection should be considered. The security architect can work from the desktop to the security perimeter or from the perimeter to the desktop. Security mechanisms such as access controls (i.e., user ID and password), authentication mechanisms, virus protection, operating system lockdown or systems hardening, intrusion detection systems, firewalls, and various types of encryption are all technologies to be considered.

Images   Operations - Security policy, certificate authority (CA), security management, key management, and the ability to respond quickly and restore critical services. The systems should be under configuration management controls to ensure that changes made to the operational system are authorized. Other considerations for operations are backup and recovery, incident response, awareness training, and management of encryption mechanisms and keys.

The effectiveness of information protection is based on the value of the information. In that way, decisions are based on risk analysis and aligned with the operational objectives of the organization. Key elements of the people side of the defense-in-depth equation are:

Images   Awareness training - ongoing

Images   Clearly written policy - that users can understand

Images   Consequences to the organization and individual - liability for management

Images   Incentive/reward

Table 4.5 illustrates the types of mechanisms that are required to defend the computing environment.

Images   Defend the computing environment - access control

Images   Defend the enclave boundaries - firewalls and intrusion detection systems (IDS)

Images   Defend the network and infrastructure - protection from denial of service (DoS), inbound and outbound traffic protection

Images   Defend the supporting infrastructures - key management and other infrastructures

Table 4.5 - Information Management Tool

Images

Figure 4.6 - Security Architecture process

(From Hansche, S. The Official Guide to the CISSP-ISSEP CBK, New York: Auerbach Publications, 2005.)

There are many architectures to keep track of in modern IT systems. For example, consider the outer perimeter of the network, where security mechanisms such as firewalls and intrusion detection systems are in place to provide the filtering and monitoring needed to defend the network perimeter. Firewall devices filter incoming and outgoing IP traffic, permitting or denying access based on a rule set or policy settings that are enabled or disabled within the device. These settings should be documented and protected to ensure that they can be duplicated should the need arise.
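The first-match, default-deny behavior of a typical firewall rule set can be sketched as follows. The rule format, networks, and port numbers are invented for illustration and do not reflect any particular vendor's syntax:

```python
# Simplified first-match firewall filtering: rules are evaluated in order
# against an incoming packet, and the first matching rule decides whether
# the traffic is permitted; anything unmatched falls through to deny.

from dataclasses import dataclass

@dataclass
class Rule:
    src: str        # source network prefix; "*" matches any source
    dst_port: int   # destination port; -1 matches any port
    action: str     # "permit" or "deny"

RULES = [
    Rule("10.0.0.", -1, "permit"),    # trust the internal network
    Rule("*", 443, "permit"),         # allow inbound HTTPS
    Rule("*", 23, "deny"),            # explicitly block telnet
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    for rule in RULES:
        src_ok = rule.src == "*" or src_ip.startswith(rule.src)
        port_ok = rule.dst_port in (-1, dst_port)
        if src_ok and port_ok:
            return rule.action
    return "deny"  # default deny: unmatched traffic is dropped

print(filter_packet("10.0.0.7", 8080))    # permit (internal host)
print(filter_packet("203.0.113.9", 443))  # permit (HTTPS)
print(filter_packet("203.0.113.9", 23))   # deny (telnet)
print(filter_packet("203.0.113.9", 25))   # deny (default)
```

Because rule order determines the outcome, documenting and protecting the rule set, as noted above, is essential to being able to reproduce the device's behavior.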

Encryption solutions such as Secure Sockets Layer (SSL) or its successor, Transport Layer Security (TLS), may be used to protect the confidentiality of data being transmitted over the Internet. If the network is physically connected with dedicated leased lines, then other encryption appliances will be used to encrypt the point-to-point connections. These types of connections require network encryption devices as opposed to IP encryption devices.
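In application code, TLS protection of data in transit might look like the following sketch using Python's standard library. The helper function name is invented, and example.org is a placeholder host:

```python
# Minimal sketch of protecting data in transit with TLS: the plain TCP
# socket is wrapped so that every byte on the wire is encrypted, and the
# server's certificate and hostname are verified by default.

import socket
import ssl

context = ssl.create_default_context()  # certificate + hostname checks on

def fetch_https(host: str, path: str = "/") -> bytes:
    with socket.create_connection((host, 443)) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(f"GET {path} HTTP/1.1\r\nHost: {host}\r\n"
                        f"Connection: close\r\n\r\n".encode())
            chunks = []
            while data := tls.recv(4096):
                chunks.append(data)
            return b"".join(chunks)

# fetch_https("example.org")  # would return the server's HTTP response
```

The key design point is that `create_default_context()` enables certificate verification and hostname checking; disabling either would leave the connection encrypted but vulnerable to man-in-the-middle attacks.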

The security architecture process depicted in Figure 4.6 shows the steps that should be taken to develop the systems and security architecture. The security architecture is closely integrated with the system functional architecture and supports those functional mechanisms based on a set of defined threats, vulnerabilities, customer and regulatory requirements, and best practices.

During the design process, countermeasure selections are made based on the results of a thorough functional and security analysis of the baseline requirements established during the early phases of the design. Countermeasure selection is a collaborative activity between the security architect and the security engineer. The architect defines the security features and conducts the analysis review. The security engineer develops and applies the detailed analysis to support the countermeasures selected. If an architecture framework was used to develop the architecture, such as the U.S. Department of Defense (DoD) Architecture Framework (DoDAF), then a systems view such as the SV-5a might be used to show what services are being provided and how they are being secured. If the architects and engineers are not using a framework or model, then a matrix can be developed to serve the same purpose.

Determining whether the systems require redundant architecture elements depends on the data requirements, performance, and criticality of the data, as well as on where the single points of failure in the architecture are located. For instance, if all the data is stored in a single database and this data is critical to the success of the organization, it would be wise to have backups and perhaps an alternate database that is updated frequently with the data. If the architecture calls for heavy use of the Internet or transmission of data to other sites, then redundant communications equipment and transmission paths might be necessary.
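The single-point-of-failure check described above can be sketched as a simple inventory scan. The component names and instance counts are assumed for illustration:

```python
# Illustrative sketch: flag single points of failure by counting redundant
# instances of each architecture element in an assumed inventory.

inventory = {
    "customer-database": 1,   # only one instance: critical exposure
    "web-server": 3,
    "internet-uplink": 2,
    "auth-service": 1,
}

single_points_of_failure = sorted(
    name for name, count in inventory.items() if count < 2
)
print(single_points_of_failure)  # elements needing a backup or alternate
```

In practice the criticality of the data behind each element, not just the instance count, determines whether redundancy is worth the cost.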

Enterprise information security architecture is a key component of the information security technology governance process at any organization of significant size. More and more companies are implementing a formal enterprise security architecture process to support the governance and management of IT. Ideally, however, it relates more broadly to the practice of business optimization, in that it addresses business security architecture, performance management, and process security architecture as well. Enterprise information security architecture is also related to IT security portfolio management and to metadata in the enterprise IT sense.

Architecture Frameworks

The enterprise architecture frameworks shown in Figure 4.7 are high-level depictions of the frameworks. There are numerous architecture frameworks, and the list continues to grow. They include architectural frameworks such as those listed in the following text as well as a number of reference architectures designed to provide fast development of typical network architectures for specific projects. The list provides a general idea of the type of architecture frameworks the security architect may want to become more familiar with as they engage in various projects based on customer needs and requirements. The frameworks include the following:

-   The U.S. Department of Defense (DoD) Architecture Framework (DoDAF)38

-   Zachman Framework39

-   U.S. Government Federal Enterprise Architecture Framework (FEAF) 40

-   The Open Group Architecture Framework (TOGAF) 41

-   Capgemini’s Integrated Architecture Framework42

-   The U.K. Ministry of Defense (MoD) Architecture Framework (MoDAF)43

-   National Institute of Health Enterprise Architecture Framework44

-   Open Security Architecture45

-   Sherwood Applied Business Security Architecture (SABSA) Framework and Methodology46

-   Service-Oriented Modeling Framework (SOMF)47

The Zachman Framework, the U.S. Department of Defense (DoD) Architecture Framework (DoDAF), and the U.S. Government Federal Enterprise Architecture Framework (FEAF) provide good examples for discussion. If the conceptual abstraction of Enterprise Information Security Architecture were simplified within a generic framework, each would be acceptable as a high-level conceptual architecture framework.

Figure 4.8 represents the information that links the operational view, systems and services view, and technical standards view. The three views and their interrelationships are driven by common architecture data elements that provide the basis for deriving measures such as interoperability or performance and for measuring the impact of the values of these metrics on operational mission and task effectiveness.

Images

Figure 4.7 - Sample Enterprise Architecture Frameworks

Department of Defense Architecture Framework (DoDAF)

DoDAF is the standard framework chosen by the U.S. Department of Defense to comply with the Clinger–Cohen Act and U.S. Office of Management and Budget Circulars A-11 and A-130. It is administered by the Office of the DoD Deputy CIO Enterprise Architecture and Standards Directorate. Derivative frameworks based on DoDAF include the NATO Architecture Framework (NAF) and the U.K. Ministry of Defense Architecture Framework (MoDAF).

Similar to other enterprise architecture approaches, such as The Open Group Architecture Framework (TOGAF), DoDAF is organized around a shared repository to hold work products. The repository is defined by the Core Architecture Data Model 2.0 (CADM), essentially a common database schema, and the DoD Architecture Repository System (DARS). A key feature of DoDAF is interoperability, which is organized as a series of levels called Levels of Information System Interoperability (LISI). The developing system must meet not only its internal data needs but also those of the operational framework into which it is set. The current version, DoDAF 2.02, consists of 52 views organized into eight basic viewpoint sets:

Images   All Viewpoint (AV) - AV-1 and AV-2

Images   Capability Viewpoint (CV) - CV-1 through CV-7

Images   Data and Information Viewpoint (DIV) - DIV-1 through DIV-3

Images

Figure 4.8 - XYZ Inc. enterprise architecture.

Images   Operational Viewpoint (OV) - OV-1 through OV-6c

Images   Project Viewpoint (PV) - PV-1 through PV-3

Images   Services Viewpoint (SvcV) - SvcV-1 through SvcV-10c

Images   Standards Viewpoint (StdV) - StdV-1 through StdV-2

Images   Systems Viewpoint (SV) - SV-1 through SV-10c

Only a subset of the full DoDAF view set is usually created for each system development. A security architect using this model should plan on developing parallel views that focus specifically on the security architecture. For instance, the All Viewpoint is best depicted by overlaying the high-level security services, capabilities, and functions on the functional architecture view. This method applies to the Operational Viewpoint as well. By taking this approach, the security architect establishes a link between the functional system and the security functionality.

Figure 4.9 shows XYZ Inc.'s network (XYZnet), a small LAN, in the context of an Operational Viewpoint 1 (OV-1). The high-level graphic shows typical network connections to customers, suppliers, and other stakeholders over the Internet, through WAN or MAN links, and over the public switched telephone network (PSTN).

The functional OV-1 for XYZ Inc. shows the need to communicate with internal and external organizations. Security mechanisms are typically used to ensure that these networks operate securely, and security services would be overlaid on the functional OV-1 to show where those mechanisms would typically be placed. These mechanisms include a firewall and demilitarized zone (DMZ) to filter or proxy incoming and outgoing traffic, intrusion detection systems (IDS) to monitor and analyze both malicious and legitimate activity, and virus protection kept current so that malware matching the latest signatures is blocked. The OV-1 is a typical DoDAF artifact that shows the overall connections to the various networks.
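The placement logic described above can be illustrated with a toy packet-filter check. This is not a real firewall API; the zones, ports, and rule fields are assumptions chosen to show first-match filtering at a DMZ boundary with a default-deny stance:

```python
# Toy packet filter, illustrative only: first-match rules with default deny.
# Zones, ports, and field names are assumptions for the XYZnet example,
# not the syntax of any real firewall product.
RULES = [
    {"action": "allow", "dst_zone": "dmz",      "dst_port": 443},   # public HTTPS to DMZ proxy
    {"action": "allow", "dst_zone": "dmz",      "dst_port": 25},    # inbound mail relay in DMZ
    {"action": "deny",  "dst_zone": "internal", "dst_port": None},  # no direct inbound to the LAN
]

def filter_packet(dst_zone, dst_port):
    """Evaluate rules top-down; a dst_port of None in a rule matches any port."""
    for rule in RULES:
        if rule["dst_zone"] == dst_zone and rule["dst_port"] in (None, dst_port):
            return rule["action"]
    return "deny"  # default deny when no rule matches

print(filter_packet("dmz", 443))      # allow
print(filter_packet("internal", 22))  # deny
```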

Images

Figure 4.9 - Systems engineering architecture and security engineering relationships

The Zachman Framework

The Zachman Framework is a logical structure for identifying and organizing the descriptive representations that are important in the management of enterprises and in the development of the systems, both automated and manual, that comprise them. It is a schema that represents the intersection of two classifications. The first is the fundamentals of communication found in the interrogatives: what, how, when, who, where, and why. It is the integration of answers to these questions that enables the comprehensive, composite description of complex ideas. The second is derived from reification, the transformation of an abstract idea into an instantiation, which was initially postulated by ancient Greek philosophers and is labeled in the Framework: Identification, Definition, Representation, Specification, Configuration, and Instantiation.

More specifically, the Zachman Framework is an ontology - a theory of the existence of a structured set of essential components of an object for which explicit expression is necessary, and perhaps even mandatory, for creating, operating, and changing the object (the object being an enterprise, a department, a value chain, a "sliver," a solution, a project, an airplane, a building, a product, a profession - almost anything).

According to Zachman, this ontology was derived from analogous structures that are found in the older disciplines of architecture, construction, engineering, and manufacturing that classify and organize the design artifacts created in the process of designing and producing complex physical products (e.g., buildings or airplanes). It uses a two-dimensional classification model based on the six basic interrogatives (what, how, where, who, when, and why) intersecting six distinct perspectives, which relate to stakeholder groups (Planner, Owner, Designer, Builder, Implementer, and Worker). The intersecting cells of the framework correspond to models that, if documented, can provide a holistic view of the enterprise.
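The two classifications intersect to form a 6x6 grid. A minimal sketch of that structure follows, assuming placeholder cell contents (the one filled-in model name is an invented example, not Zachman's official cell title):

```python
# The Zachman schema as a 6x6 grid: interrogatives crossed with perspectives.
# Cell contents start empty; the one filled-in label is an invented example,
# not Zachman's official cell title.
INTERROGATIVES = ["what", "how", "where", "who", "when", "why"]
PERSPECTIVES = ["Planner", "Owner", "Designer", "Builder", "Implementer", "Worker"]

matrix = {(p, q): None for p in PERSPECTIVES for q in INTERROGATIVES}
matrix[("Owner", "what")] = "Business entity/relationship model"  # illustrative label

print(len(matrix))  # 36 cells, one model per intersection
```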

Design Process

The design process follows the systems engineering and architecture framework. These steps are discussed in the following section.

System Security Engineering Methodologies

Information System Security Engineering (ISSE) is the art and science of discovering users’ information protection needs and the designing and making of information systems, with economy and elegance, so that they can safely resist attacks, malicious activities, or other threats to which they may be subjected [IATF v3.0].

There must be alignment between systems engineering (SE) and ISSE; the security engineering and systems engineering steps take place at the same time. While the systems engineers are discovering the system needs and system requirements, the systems security engineers are discovering the security needs and security requirements, and so on. The ISSE process consists of the following six steps:

  1. Discover information protection needs - Ascertain why the system needs to be built and what information needs to be protected.

  2. Define system security requirements - Define the system in terms of what security is needed.

  3. Define system security architecture - Define the security functions needed to meet the specific security requirements. This process is the core of designing the security architecture.

  4. Develop detailed security design - Based on the security architecture, design the security functions and features for the system.

  5. Implement system security - Following the documented security design, build and implement the security functions and features for the system.

  6. Assess security effectiveness - Assess the degree to which the system, as it is defined, designed, and implemented, meets the security needs. This assessment activity occurs during and with all the other activities in the ISSE process.
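The six steps above can be sketched as an ordered checklist, with the assessment activity modeled separately because it runs alongside all the others (the function and the way steps are tracked are illustrative, not part of the ISSE definition):

```python
# The six ISSE steps as an ordered checklist. The sixth activity, assessing
# security effectiveness, is tracked separately because it occurs during and
# with all the other activities. Function names are illustrative.
ISSE_STEPS = [
    "Discover information protection needs",
    "Define system security requirements",
    "Define system security architecture",
    "Develop detailed security design",
    "Implement system security",
]
CONTINUOUS_ACTIVITY = "Assess security effectiveness"  # runs alongside every step

def next_step(completed):
    """Return the first sequential step not yet completed, or None when done."""
    for step in ISSE_STEPS:
        if step not in completed:
            return step
    return None

print(next_step({"Discover information protection needs"}))
# Define system security requirements
```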

In reviewing the details of the ISSE, systems engineering, and architecture processes, discovering the information protection needs includes developing an understanding of the customer's mission or business needs. The security architect should work with the customer to determine what information management is needed to support the mission or business, draft the Information Management Model (IMM) (shown in Table 4.6), and conduct a threat analysis. This will be the basis for creating an Information Protection Policy (IPP) that describes how this information will be protected. The results of these two activities should be documented in the Information Management Plan (IMP).

Information Domain   | User                       | Rules/Privileges | Process              | Information
---------------------|----------------------------|------------------|----------------------|----------------------
Data entry           | Data entry personnel       | Read/write       | Entry                | Raw
Accept               | Course coordinator/manager | Read/write       | Accept               | Analyzed
Distribute materials | Course coordinator         | Read/write       | Distribute           | Releasable/processed
Review data          | Manager                    | Read             | Review/print reports | Processed

Table 4.6 - Information Management Model
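Table 4.6 can be read as a small access-control matrix. The toy check below (rows transcribed from the table; the function and dictionary names are assumptions for illustration) shows how the IMM could drive a permit/deny decision:

```python
# Table 4.6 transcribed as a lookup, plus a toy permit/deny check.
# The function and dictionary names are assumptions for illustration.
IMM = {
    "Data entry":           {"user": "Data entry personnel",       "privileges": {"read", "write"}},
    "Accept":               {"user": "Course coordinator/manager", "privileges": {"read", "write"}},
    "Distribute materials": {"user": "Course coordinator",         "privileges": {"read", "write"}},
    "Review data":          {"user": "Manager",                    "privileges": {"read"}},
}

def is_permitted(domain, user, privilege):
    """Permit only when the domain exists, the user class matches, and the privilege is granted."""
    entry = IMM.get(domain)
    return bool(entry) and entry["user"] == user and privilege in entry["privileges"]

print(is_permitted("Review data", "Manager", "write"))  # False: managers have read only
```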

These results should support Certification and Accreditation by identifying the Accreditation Authority and any security oversight bodies, and by identifying the classes of threats, security services, and design constraints.

Some common pitfalls of not adequately determining the information protection needs are:

  1. Poor working relationships with stakeholders

  2. Inadequate coordination/support

  3. Lack of business understanding

  4. Excessive categorization

  5. Failure to document critical information

  6. Excessive documentation

The next step is to understand the systems security requirements. These requirements drive the system development and include functional, contractual, and regulatory requirements; they are most often found in Statements of Work (SOW), Statements of Requirements (SOR), Statements of Objectives (SOO), or Service Level Agreements (SLAs). A key feature of this model is its feedback loop, and the key stakeholders are involved in each process step.

Design Validation

Design validation is done during phase 2 of the systems development life cycle. It ensures that the design meets the security requirements that were allocated to the original baseline for the development. Validation is often done in accordance with an established design specification, functional capability, security policy, customer requirements, or any combination of these.

Design validation requires the development of a test and evaluation plan. These plans are often called Security Test and Evaluation Plans or Certification Test and Evaluation Plans. They include test methods that identify how each test is conducted, the criteria for testing, and how the requirement is shown to be met. The test and evaluation methods include functional demonstration of the security feature; review of documentation, including the policies and procedures that describe how the system, feature, or capability is operated; and other methods such as instrumented electronic tests and interviews with employees, managers, and users of the system.
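A test and evaluation plan of this kind is essentially a traceability table from each requirement to a test method and acceptance criteria. A minimal sketch, with invented requirement IDs and rows, and methods drawn from those discussed above:

```python
# A traceability sketch for a security test and evaluation plan: each
# requirement maps to a test method and acceptance criteria. Requirement
# IDs and rows are invented; the methods come from those discussed above.
ste_plan = [
    {"id": "SR-01", "req": "Enforce login lockout",           "method": "demonstration",
     "criteria": "Account locks after three failed attempts"},
    {"id": "SR-02", "req": "Backup procedures documented",    "method": "documentation",
     "criteria": "Approved procedure document on file"},
    {"id": "SR-03", "req": "Operators trained on IDS alerts", "method": "interview",
     "criteria": "Staff can describe alert-handling steps"},
]

def untested(plan, results):
    """Return IDs of requirements with no recorded pass/fail result yet."""
    return [row["id"] for row in plan if row["id"] not in results]

print(untested(ste_plan, {"SR-01": "pass"}))  # ['SR-02', 'SR-03']
```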

Step 3 activities define the system's security architecture. This design is based on a thorough understanding of the system's Security Concept of Operations (SECONOP), which discusses how the system's security mechanisms will be implemented.48 The security functions needed to meet the specific security requirements called for in the specifications should also account for the system's security modes of operation: will the system be a stand-alone, closed LAN not connected to the Internet, or will it have Internet access and a variety of users with different need-to-know levels and user roles? Once this is established, trade-off studies should be conducted to ensure that the appropriate hardware and software are selected; some applications may require Common Criteria-evaluated products, as discussed earlier. Finally, the security architect should be involved in assessing the information protection effectiveness. These are the core processes involved in designing the security architecture.

Step 4 involves developing the detailed architecture: security services are allocated to the architecture by selecting the appropriate security mechanisms. The architecture is then submitted to the test and evaluation team for evaluation, and revised to accommodate mission capabilities and requirements that may need adjusting in order to function properly. A risk analysis is conducted to evaluate any vulnerabilities that may be introduced by relaxing or constraining the configuration settings. The results are provided to the customers to obtain concurrence that the architecture meets their needs.

The security architect also participates in the various decisions taken at particular development milestones: Preliminary Design Reviews, Critical Design Reviews, System Readiness Reviews, and Technical Interchange Meetings. A variety of working groups may also be established to resolve design or program management issues that occur throughout the design process. These working group meetings should include the oversight committees, accreditors, and system certifiers in support of the Certification and Accreditation process, if one is required.

Certification

Certification has been defined as the comprehensive evaluation of the security features and functions of an information system. It provides evidence that the system is configured to provide the most effective security protection based on specific security policies, standards, or industry best practices. Certification can be done against a variety of public security policies or standards, including but not limited to the following:

  1. Health Insurance Portability and Accountability Act (HIPAA)

  2. NIST Certification and Accreditation Process (NIST SP 800-37)

  3. ISO/IEC 27002, Certification for commercial information systems

  4. Sarbanes–Oxley Act of 2002

  5. Payment Card Industry Data Security Standard (PCI-DSS)

  6. U.S. Department of Defense Information Assurance Certification and Accreditation Process (DIACAP)49

Peer Reviews

Peer review processes exist in many companies, particularly those that have attained CMMI for Development Maturity Level 3 or higher. They may be automated or manual. If automated, they contain processes and procedures for inviting peers to the review, for prework such as reviewing the document or code before the meeting, and for recording the defects or problems found in the article under review. The chair of the peer review should assign roles to the individuals supporting the review.

Peer review is a methodical examination of a work product by the author's peers to provide comments and to identify and categorize defects in the work product as efficiently and as early in the life cycle as possible (see Figure 4.10). The peer review process requires planning, advance preparation, discussion, recording, measurement, rework, and follow-up. It typically covers contract deliverable documentation for internal and external customers, based primarily on review materials such as PDR and CDR charts. Program planning documents (e.g., SEP, HDP, Tailoring Matrix) include the requirements for a peer review process. Design and development work products (requirements, design descriptions, module design specifications, board layouts, software code, trade studies, plans, and procedures) should all be peer-reviewed.

Images

Figure 4.10 - Peer review process.

There are several types of peer reviews:

Images   Formal inspection - Typically used for initial deliveries of contractually required products.

Images   Structured walkthrough - Typically used for complex products, where an off-line review may not be efficient. Product is reviewed in real time during a meeting.

Images   Critique - Used for simple products, small changes, or use of the product limited to internal customers.

Peer review teams can be composed of the following types of personnel:

Images   Manager/team lead - Ultimately responsible for the product; ensures that the product is ready for PR; assists author in identifying reviewers.

Images   Moderator - In a formal or walkthrough review, conducts the PR meeting; is responsible for identifying predicted defects using the Inspection Calculator and for determining if a reinspection is required.

Images   Author - Generates the product, schedules the PR, and incorporates comments.

Images   Recorder - In a formal or walkthrough review, records defect data during the meeting.

Images   Reviewers - Relevant stakeholders review the work product to identify defects and potential issues.

Images   Reader - In a walkthrough review, presents the item under review.

Images   Quality Engineer - Ensures the PR process is followed, and ensures compliance with standards and conventions.

The benefit of a peer review process is ensuring that a quality product is delivered to the customer without defects. This reduces rework and shows the customer that the organization is exercising due diligence in product delivery.

Audit reports are sometimes referred to as Security Test and Evaluation Reports or certification reports. They are assessments of the system’s security mechanisms and services based on the predefined requirements that were implemented in the system during the development process. They can range from running simple risk assessment tools to full-blown penetration testing and reporting the findings or results back to the system owners or regulators. The level of the evaluations is dependent on the sensitivity of the information being processed or stored on the system.

Audit reports provide evidence to the system owners and regulators that the system has met the prescribed specifications for security assurance. The reports point out the strengths and weaknesses of the system and provide recommendations to correct any deficiencies. Sometimes the deficiencies are so severe that the recommendation is not to allow the system to proceed to operation; when this happens, it is often because of a failure to implement a countermeasure that provides adequate protection to the system. Deficiencies are often ranked by the severity of the vulnerability. For instance, when a technical protection mechanism does not work, a procedural control might compensate in the short term. When deficiencies are less severe, the system may be allowed to operate as long as those deficiencies are corrected within a prescribed timeline. The ultimate goal of the audit report is to determine whether the system meets the level of risk the organization deems acceptable for secure operation.
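One simple way to act on severity-ranked deficiencies is to sort the findings and flag those above a blocking threshold. The sketch below uses an assumed 1-5 severity scale, an assumed threshold, and invented finding records:

```python
# Triage sketch: sort deficiencies by severity and flag any that should block
# operation. The 1-5 scale, threshold, and finding records are all invented.
findings = [
    {"id": "F-3", "severity": 2, "desc": "Password policy not enforced"},
    {"id": "F-1", "severity": 5, "desc": "Missing countermeasure, no compensating control"},
    {"id": "F-2", "severity": 3, "desc": "Audit logging incomplete"},
]

def triage(findings, block_threshold=5):
    """Return findings ordered most-severe-first, plus the subset that blocks operation."""
    ordered = sorted(findings, key=lambda f: f["severity"], reverse=True)
    blockers = [f for f in ordered if f["severity"] >= block_threshold]
    return ordered, blockers

ordered, blockers = triage(findings)
print([f["id"] for f in blockers])  # ['F-1']
```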

Following the documented security design and the build and implementation of the system's security functions and features, the security architect works with the systems engineer and security engineer to ensure that all the requirements of the system have been satisfied. They support system certification, assist in verifying the interoperability of security tools and mechanisms against the security design, and evaluate security components against the evaluation criteria, as well as the integration and configuration of those components.

Finally, the security architect supports the build, test, and evaluation strategy for the system. This may include developing test plans and procedures using the demonstration, observation, analysis, and testing methods. The security architect assesses available test and evaluation data for applicability, supports development of test and evaluation procedures and evaluation activities to ensure that the security design is implemented correctly, and conducts or updates a final risk analysis. If certification and accreditation (C&A) is required, the security architect ensures the completeness required for C&A documentation with the customer and the customer’s certifiers and accreditors or regulators. Security training on the system may also be required, and the security architect may be called upon to support development of security training material.

Documentation

A variety of documents is usually required as part of program administration. The security architect will have input into these documents and may be directly responsible for developing program documents or for providing input to the list of documents being developed. Document development may be required by policy or regulation, or as part of the contract. The document list includes, but is not limited to, the following:

Images   IMM - Information Management Model

Images   MNS - Mission Needs Statement (what is the overall mission and need for this product)

Images   IPP - Information Protection Policy

Images   PNE - Protection Needs Elicitation (appendix H of the IATF)

Images   CONOPS - Concept of Operations (user perspective and functional/technical - sometimes on large projects the security is removed and made a different document)

Images   SSP - System Security Plans

Images   Security Architecture - Discusses the security functions, safeguards, and configurations

Some common pitfalls of security architecture and design include:

Images   The security architecture is not compatible with system architecture, which means the wrong mechanisms may have been selected for a particular function or provide inadequate security protections.

Images   The security is not integrated with non-security functionality or use of modified Commercial Off-The-Shelf (COTS) /Government Off-The-Shelf (GOTS) products.

Images   Occasionally, a poorly documented architecture, one that fails to clearly describe the costs and benefits of the security design elements or to provide adequate design context and rationale, may leave the customer in doubt as to whether the system meets their specification. It is therefore important to ensure that the documentation is complete and written to address the appropriate audience.

Summary

This domain covers the information essential to understanding requirements analysis and security standards and guidelines criteria and their elements. This chapter has discussed requirements analysis, current architectures and solutions, systems engineering methodologies, attack vectors, design validation, and legal and regulatory requirements.

Requirements analysis is one of the most important activities of systems security architecture and systems development. Security requirements are based on industry best practices, regulatory statutes, customer needs, and contractual obligations. Proper analysis of the requirements can prove to be critical to developing and delivering a secure system that is usable and provides customer satisfaction. The security architect begins by gathering requirements from customers, who submit Statements of Work (SOW), contract documents, system specifications, Service Level Agreements (SLAs), Requests for Proposals (RFPs), legal and regulatory documents from federal, state, local, or international policy, or other documents such as security best practices and industry standards for safety and security. The security architect should be able to gather the relevant requirements and determine, at a high level, the best solutions to satisfy those requirements and the impact they may have on the system as the system development life cycle progresses.

The security architect should be intimately familiar with the most common types of attacks in order to select countermeasures that will combat these attacks. An attack vector is a path or means by which an attacker can gain access to a computer or network server in order to deliver a payload or malicious outcome. Attack vectors enable hackers to exploit system vulnerabilities, including the human element. This includes viruses, e-mail attachments, Web pages, pop-up windows, instant messages, chat rooms, and deception. All of these methods involve programming or, in a few cases, hardware, except deception, in which a human operator is fooled into removing or weakening system defenses. It is unrealistic to think that 100% protection against all possible threats, at all times, is attainable or desirable. The objective is to develop a system that allows the customer to conduct their business or mission at an acceptable level of residual risk.

There are a number of methods and techniques used by the security architect to ensure that the requirements are satisfied. These include using Common Criteria-evaluated products when required for security-relevant applications and services, engaging architecture processes such as DoDAF, MoDAF, FEAF, or Zachman, and supporting the systems security engineering process. ISO/IEC standards and RFCs can also be helpful in providing best practice guidance. A secure information system is best achieved when the security architect practices due diligence and pays attention to the details of standards, threat awareness, risk identification, and the value of the data. In all cases, the security architect should be aware of the relevant policies, procedures, and techniques needed to architect an information system using defense in depth.

References

Bradley, M., Data valuation: rethinking “one size fits all” data protection - Storage Networking, Computer Technology Review, Jan., 2003.

Capability Maturity Model (CMM) Key Practices Version 1.1

CMMI Product Team (2010), CMMI® for Development, Version 1.3 (CMMI-DEV, V1.3): Improving Processes for Developing Better Products and Services, Technical Report CMU/SEI-2010-TR-033, ESC-TR-2010-033, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA.

Common Criteria (CC) for Information Technology Security Evaluation Part 1: Introduction and general model September 2006 Version 3.1, Revision 1.

Common Criteria (CC) for Information Technology Security Evaluation Part 1: Introduction and general model September 2012 Version 3.1, Revision 4.

Common Criteria (CC) for Information Technology Security Evaluation Part 2: Security Functional Component, September 2006 Version 3.1, Revision 1.

Common Criteria (CC) for Information Technology Security Evaluation Part 2: Security Functional Component, September 2012 Version 3.1, Revision 4.

Common Criteria (CC) for Information Technology Security Evaluation Part 3: Security Assurance Component, September 2006 Version 3.1 Revision 1.

Common Criteria (CC) for Information Technology Security Evaluation Part 3: Security Assurance Component, September 2012 Version 3.1 Revision 4.

Hansche, S., The Official Guide to the CISSP-ISSEP CBK, Auerbach Publications, Boca Raton, FL, 2005.

Happy Trails Computer Club, http://cybercoyote.org/index.shtml

Information Assurance Technical Framework IATF V.3.0

IT Law Group, http://www.itlawgroup.com/Resources/Archives.html, Accessed Nov. 2009.

Paulk, M. C., Weber, C. V., Garcia, S. M., Chrissis, M. B., and Bush, M. (1993a), Key Practices of the Capability Maturity Model, Version 1.1, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA, CMU/SEI-93-TR-25, ESC-TR-93-178.

Paulk, M. C., Curtis, B., Chrissis M. B., Weber, C. V. (1993b), Capability Maturity Model for Software, Version 1.1, Technical Report, CMU/SEI-93-TR-024, ESC-TR-93-177, February 1993.

Review Questions

1. The approach in which policies, procedures, technology, and personnel are considered in the system security development process is called

  1. defense in depth.

  2. requirements analysis.

  3. risk assessment.

  4. attack vectors.

2. Software that adds hidden components to a system without end user knowledge is

  1. Virus.

  2. Spyware.

  3. Adware.

  4. Malware.

3. Risk is assessed by which of the following formulas?

  1. Risk = (Vulnerability × Threat × Impact) ÷ Countermeasure

  2. Risk = Annual Loss Opportunity ÷ Single Loss Expectancy

  3. Risk = Exposure Factor ÷ Asset Value

  4. Risk = Vulnerability × Annual Loss Expectancy

4. Requirements definition is a process that should be completed in the following order:

  1. Document, identify, verify, and validate.

  2. Identify, verify, validate, and document.

  3. Characterize, analyze, validate, and verify.

  4. Analyze, verify, validate, and characterize.

5. A path by which a malicious actor gains access to a computer or network in order to deliver a malicious payload is a

  1. penetration test.

  2. attack vector.

  3. vulnerability assessment.

  4. risk assessment.

6. Which of the following is BEST as a guide for the development, evaluation, and/or procurement of IT products with security functionality?

  1. ISO/IEC 27001

  2. FIPS 140-2

  3. Common Criteria

  4. SEI-CMM

7. Which of the following BEST defines evaluation criteria for Protection Profile (PP) and Security Target (ST) and presents evaluation assurance levels rating assurance for the TOE?

  1. Part 3—Security assurance requirements

  2. Part 2—Security functional requirements

  3. Part 1—Introduction and general model

  4. Part 4—History and previous versions

8. The National Voluntary Laboratory Accreditation Program (NVLAP) must be in full conformance with which of the following standards?

  1. ISO/IEC 27001 and 27002

  2. ISO/IEC 17025 and Guide 58

  3. NIST SP 800-53A

  4. ANSI/ISO/IEC Standard 17024

9. A software application in combination with an operating system, a workstation, smart card integrated circuit, or cryptographic processor would be considered examples of a

  1. Functional Communications (FCO)

  2. Functional Trusted Path (FTP)

  3. Target of Evaluation (TOE)

  4. Security Target (ST)

10. A security architect requires a device with a moderate level of independently assured security, and a thorough investigation of the TOE and its development without substantial reengineering. It should be evaluated at which CC EAL?

  1. EAL6

  2. EAL5

  3. EAL4

  4. EAL3

11. At which Common Criteria EAL would a security architect select a device appropriate for application in extremely high-risk situations or where the high value of the assets justifies the higher costs?

  1. EAL4

  2. EAL5

  3. EAL6

  4. EAL7

12. A list of Common Criteria–evaluated products can be found on the Internet on the site at the

  1. NIAP

  2. CCEVS

  3. IASE

  4. CERIS

13. Which of the following describes the purpose of the Capability Maturity Model?

  1. Determine business practices to ensure creditability for the company’s commitment to quality and excellence.

  2. Provide assurance through active investigation and evaluation of the IT product in order to determine its security properties.

  3. Establish a metric to judge in a repeatable way the maturity of an organization’s software process as compared to the state of the industry practice.

  4. Provide an overview of standards related to the Information Security Management family for uniformity and consistency of fundamental terms and definitions.

14. Which one of the following describes the key practices that correspond to a range of maturity levels 1–5?

  1. Common Criteria

  2. SEI-CMM

  3. ISO/IEC 27002

  4. IATF v3

15. Which of the following CMMI levels include quantitative process management and software quality management as the capstone activity?

  1. CMMI Level 5

  2. CMMI Level 4

  3. CMMI Level 3

  4. CMMI Level 2

16. Where can the general principles of the OSI Reference Model architecture be found that describes the OSI layers and what layering means?

  1. Clause 3

  2. Clause 5

  3. Clause 7

  4. Clause 9

17. A privately held toy company processing, storing, or transmitting payment card data must be compliant with which of the following?

  1. Gramm–Leach–Bliley Act (GLBA)

  2. Health Insurance Portability and Accountability Act (HIPAA)

  3. Sarbanes–Oxley Act of 2002

  4. PCI-DSS

18. In which phase of the IATF does formal risk assessment begin?

  1. Assess effectiveness

  2. Design system security architecture

  3. Define system security requirements

  4. Discover information protection needs

19. Which of the following describes a methodical examination of a work product by the author’s coworkers to comment, identify, and categorize defects in the work product?

  1. Formal inspection

  2. Structured walkthrough

  3. Critique

  4. Peer review

20. Which of the following is a critical element in the design validation phase?

  1. Develop security test and evaluation plan

  2. Develop protection needs elicitation

  3. Develop the concept of operation

  4. Requirements analysis

 

1   Many different risk analysis methodologies exist that could potentially be used to examine risk within the context of the enterprise. While only a small and narrowly defined subset is discussed in this chapter, some additional examples are listed here for reference:

Qualitative Methodologies

-   Preliminary Risk Analysis

-   Hazard and Operability Studies (HAZOP)

-   Failure Mode and Effects Analysis (FMEA/FMECA)

Tree Based Techniques

-   Fault tree analysis

-   Event tree analysis

-   Cause-Consequence Analysis

-   Management Oversight Risk Tree

-   Safety Management Organization Review Technique

Techniques for Dynamic Systems

-   Go Method

-   Digraph/Fault Graph

-   Markov Modeling

-   Dynamic Event Logic Analytical Methodology

-   Dynamic Event Tree Analysis Method

2   See the following for an overview of the OCTAVE methodology: http://www.cert.org/octave/

3   See the following for the NIST Special Publications on 800-30: (original through revision 1)

  1. Original NIST SP 800-30 (July 2002): http://csrc.nist.gov/publications/nistpubs/800-30/sp800-30.pdf

  2. NIST SP 800-30 initial public draft for revision 1 (September 2011): http://csrc.nist.gov/publications/nistpubs/800-30/sp800-30.pdf

  3. NIST SP 800-30 revision 1 (September 2012): http://csrc.nist.gov/publications/nistpubs/800-30-rev1/sp800_30_r1.pdf

4   See the following for an overview of the ISO/IEC 27005 standard: http://www.27000.org/iso-27005.htm

5   For a further discussion of this topic, as well as an overview of the literature in this area that supports the assertion that incomplete or missing requirements are the major reasons for unsuccessful projects, please see the following: http://proceedings.informingscience.org/InSITE2008/IISITv5p543-551Davey466.pdf

6   The most commonly seen types of attacks in enterprise networks can be grouped into the following categories and types:

-   Keyloggers and the use of stolen credentials

-   Backdoors and command control

-   Tampering

-   Pretexting

-   Phishing

-   Brute force

-   SQL injection

For supporting data regarding the rise in attack payload categories pertaining to malware, please see the following:

http://www.verizonbusiness.com/resources/reports/rp_data-breach-investigations-report-2012_en_xg.pdf

http://www.verizonbusiness.com/resources/reports/rp_data-breach-investigations-report-2011_en_xg.pdf

http://www.verizonbusiness.com/resources/reports/rp_2010-data-breach-report_en_xg.pdf

7   For a general overview of BYOD issues, and data points regarding the advent of the phenomenon and its impacts on enterprise system security, see the following:

  1. http://www.juniper.net/us/en/local/pdf/additional-resources/jnpr-2011-mobile-threats-report.pdf#search=%22Juniper%20Mobile%20Security%20Report%202011%22

  2. http://www.sans.org/reading_room/analysts_program/mobility-sec-survey.pdf

8   For the original CERT vulnerability listing for the Microsoft Windows DCOM/RPC vulnerability, listed in October 2003, see the following: http://www.kb.cert.org/vuls/id/547820

9   For a good historical overview of worms, and their specific behaviors, see the following: http://lyle.smu.edu/~tchen/papers/network-worms.pdf

10   For a discussion of the anatomy of ransomware attacks at a high level, see the following: http://www.slate.com/articles/technology/technology/2012/10/ransomware_hackers_new_trick_to_take_over_your_computer_and_blackmail_you_for_cash_.html

For a discussion of the specifics of the ransomware attack involving GoDaddy and DNS server record manipulation, see the following: http://nakedsecurity.sophos.com/2012/11/23/hacked-go-daddy-ransomware/

11   For instance, the Metasploit Framework and its associated toolkits could easily be used to carry out a macro-based attack such as the following: create a malicious .docx file that spawns a TCP shell on any specified port (12345, for instance) as soon as the file is opened.

See the following for an overview of Metasploit: http://www.metasploit.com/about/choose-right-edition/

12   For an overview discussion of the guidelines for information asset valuation see the following: http://www.iso27001security.com/ISO27k_Guideline_on_information_asset_valuation.pdf

13   For an overview of the Business Impact Analysis (BIA) methodology, see the following: http://www.ready.gov/business-impact-analysis

14   http://www.propertycasualty360.com/2009/11/11/experts-say-small-firms-lag-in-disaster-planning

15   For an overview of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) see the following: http://www.hhs.gov/ocr/privacy/index.html

16   For an overview of the Sarbanes-Oxley Act of 2002 (SARBOX) see the following: http://www.soxlaw.com/

17   For an overview of the Payment Card Industry Data Security Standard (PCI DSS) see the following: https://www.pcisecuritystandards.org/security_standards/documents.php

To download the PCI DSS v2 standard see the following: https://www.pcisecuritystandards.org/documents/pci_dss_v2.pdf

18   For the original text of the Gramm-Leach-Bliley Act of the 106th US Congress, enacted on November 12, 1999, see the following: http://www.gpo.gov/fdsys/pkg/PLAW-106publ102/html/PLAW-106publ102.htm

19   For information on the Federal Information Security Management Act of 2002 (FISMA) see the following links:

  1. For the overarching Act that enables FISMA as a subset of the broader E-Government Act of 2002 see: http://csrc.nist.gov/drivers/documents/HR2458-final.pdf

  2. For the FISMA specific section of the broader E-Government Act of 2002, commonly referred to as Title III, see: http://csrc.nist.gov/drivers/documents/FISMA-final.pdf

  3. For general information on FISMA and all FISMA related activities, see: http://csrc.nist.gov/groups/SMA/fisma/overview.html

20   For the original text of Directive 95/46/EC of the European Parliament on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of Such Data (commonly called the Data Protection Directive), see the following:

http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:31995L0046:en:HTML

21   For an overview of Directive 2002/58/EC Concerning the Processing of Personal Data and the Protection of Privacy in the Electronic Communications Sector, see the following: http://europa.eu/legislation_summaries/information_society/legislative_framework/l24120_en.htm

For the original text of Directive 2002/58/EC Concerning the Processing of Personal Data and the Protection of Privacy in the Electronic Communications Sector, see the following:

http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:32002L0058:EN:NOT

22   On January 25, 2012, the EU Commission published drafts of two documents that are the main pillars of the proposed reform taking place around the EU data protection regime: (i) a Regulation “on the protection of individuals with regard to the processing of personal data and on the free movement of such data” (“Regulation”) and (ii) a Directive “on the processing of certain criminal data by competent authorities” (“Directive on Processing Criminal Data by Authorities”). These reforms are not due to go into effect before the Spring of 2015 at the earliest. For a summary of the proposed changes and a high level discussion of their possible impact see the following:

http://www.orrick.com/publications/item.asp?action=article&articleID=4489

23   For a complete overview of each book in the Rainbow Series, as well as for the actual text of each book in the series, see the following: https://www.fas.org/irp/nsa/rainbow.htm

24   For the most up to date information on the make-up of the membership of the CCRA, see the following: http://www.commoncriteriaportal.org/ccra/members/

25   For the most up to date information on the list of certified laboratories worldwide see the following: http://www.commoncriteriaportal.org/labs/

26   For information and an overview on NIAP see the following: http://www.niap-ccevs.org/cc-scheme/

27   For information on the NVLAP see the following: http://www.nist.gov/nvlap/

28   For the most up to date information on NVLAP accredited laboratories see the following: http://ts.nist.gov/standards/scopes/programs.htm

29   The complete Common Criteria document set for the September 2012 V3.1 R4 release can be accessed for download here: http://www.commoncriteriaportal.org/cc/

30   The full citation is as follows: Common Criteria for Information Technology Security Evaluation, Part 3: Security assurance components, September 2012, Version 3.1 Revision 4.

31   The NIAP web site can be found here: http://www.niap-ccevs.org/

32   See the following for the entire ISO/IEC ISMS standards body: http://www.iso.org/iso/home/store/catalogue_ics.htm

33   See the following for the abstract of the ISO/IEC 27001:2005 Standard: http://www.iso.org/iso/home/store/catalogue_tc/catalogue_detail.htm?csnumber=42103

34   See the following for complete information on the Software Engineering Institute and the CMMI for Development V1.3 model: http://www.sei.cmu.edu/library/abstracts/reports/10tr033.cfm

35   See the following for the Software Engineering Institute's CMMI for Services V1.3: http://www.sei.cmu.edu/library/abstracts/reports/10tr034.cfm

36   See the following for the ISO 7498 Standard: http://www.iso.org/iso/home/store/catalogue_tc/catalogue_detail.htm?csnumber=20269

37   See footnote 17 for detailed information on the current PCI version 2 Standard.

38   See the following for information on the current version of DoDAF, Version 2.02: http://dodcio.defense.gov/dodaf20.aspx

39   See the following for information on the current version of the Zachman Framework, Version 3.0: http://www.zachman.com/about-the-zachman-framework

40   See the following for information on the current version of the Federal Enterprise Architecture and its associated reference models: http://www.whitehouse.gov/omb/e-gov/fea

41   See the following for information on the current version of TOGAF, Version 9.1: http://www.opengroup.org/togaf/

42   See the following for information on Capgemini and the Integrated Architecture Framework: http://www.us.capgemini.com/services-and-solutions/technology/soa/soa-solutions/ent_architecture/iaf/

43   See the following for information on the current version of MoDAF, Version 1.2.004: http://www.mod.uk/defenceinternet/aboutdefence/whatwedo/informationmanagement/modaf/

44   See the following for information about the National Institute of Health Enterprise Architecture: https://enterprisearchitecture.nih.gov/Pages/Framework.aspx

45   See the following for information on the Open Security Architecture: http://www.opensecurityarchitecture.org/cms/

46   See the following for information on SABSA: http://www.sabsa.org/home.aspx

47   See the following for information on SOMF: http://www.modelingconcepts.com/pages/download.htm

48   “A Security Concept of Operations (SECONOP) may be included in the System Security Plan. The CONOPS shall at a minimum include a description of the purpose of the system, a description of the system architecture, the system’s accreditation schedule, the system’s Protection Level, integrity Level-of-Concern, availability Level-of-Concern, and a description of the factors that determine the system’s Protection Level, integrity Level-of-Concern, and availability Level-of-Concern.” See the following for the supporting document, Director of Central Intelligence Directive 6/3, “Protecting Sensitive Compartmented Information Within Information Systems” (appendices): http://www.fas.org/irp/offdocs/6-3_20Appendices.htm

49   http://www.dtic.mil/whs/directives/corres/pdf/851001p.pdf
