Chapter 8. Vulnerability Management
Before we discuss Payment Card Industry (PCI) requirements related to vulnerability management in depth, and find out what technical and nontechnical safeguards are prescribed there and how to address them, we need to resolve one underlying and confusing issue: defining some of the terms that the PCI Data Security Standard (DSS) documentation relies upon.
These are as follows:
■ Vulnerability assessment
■ Penetration testing
■ Testing of controls, limitations, and restrictions
■ Preventing vulnerabilities via secure coding practices
Defining vulnerability assessment is a little tricky, since the term has evolved over the years. The authors prefer to define it as a process of finding and assessing vulnerabilities on a set of systems or a network, which is a very broad definition. The term vulnerability itself is typically used to mean a software flaw or weakness that makes the software susceptible to attack or abuse. In the realm of information security, vulnerability assessment is usually understood to be a vulnerability scan of the network with a scanner, implemented as installable software, a dedicated hardware appliance, or scanning software-as-a-service (SaaS). Sometimes using the term network vulnerability assessment adds clarity; the terms network vulnerability scanning and network vulnerability testing are usually understood to mean the same thing. In addition, the separate term application vulnerability assessment is typically understood to mean an assessment of application-level vulnerabilities in a particular application; most frequently, a Web-based application deployed publicly on the Internet or internally on the intranet. A separate tool, called an application vulnerability scanner (as opposed to the network vulnerability scanner mentioned above), is used to perform application security assessment. Concepts such as port scan, protocol scan, and network service identification belong to the domain of network vulnerability scanning, whereas concepts such as site crawl, Hypertext Transfer Protocol (HTTP) requests, cross-site scripting, and ActiveX belong to the domain of Web application scanning. We will cover both types in this chapter, as they are mandated by the PCI DSS, albeit in different requirements (6 and 11).
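The network-side concepts just mentioned (port scan, network service identification) boil down to probing hosts for listening services. The following is a minimal, illustrative sketch, not a real scanner; production tools such as Nmap or commercial scanners layer service fingerprinting and vulnerability checks on top of this basic reachability test, and you should only probe hosts you are authorized to test:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`.

    A toy illustration of the port-scanning step of network vulnerability
    assessment; real scanners add service and version identification on
    top of this basic reachability check.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

For example, `scan_ports("127.0.0.1", [22, 80, 443])` would return whichever of those ports has a listening service on the local machine.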
Penetration testing is usually understood to mean an attempt to break into the network by a dedicated team, which can use the network and application scanning tools mentioned above, as well as nontechnical means such as dumpster diving (i.e., looking for confidential information in the trash) and social engineering (i.e., attempting to manipulate authorized information technology [IT] users into giving out their access credentials and other confidential information). Sometimes, penetration testers might rely on other techniques and methods, such as custom-written attack tools.
Testing of controls, mentioned in Requirement 11.1, does not have a simple definition. Sometimes referred to as a “site assessment,” such testing implies either an in-depth assessment of security practices and controls by a team of outside experts or a self-assessment by a company's own staff. Such control assessment will likely not include attempts to break into the network.
Preventing vulnerabilities, covered in Requirement 6, addresses vulnerability management by ensuring that newly created software does not contain known flaws and problems. Requirements 5, 6, and 11 also mandate various protection technologies, such as antivirus software, Web application firewalls, intrusion detection and prevention, and others.
The core of PCI Requirement 11, covered in this chapter, spans all of the above types of testing and more, including some of the practices that help mitigate the impact of problems, such as the use of intrusion prevention tools. Such practices fall into the broad domains of vulnerability management and threat management. Although there are common definitions of vulnerability management (covered below), threat management is typically defined ad hoc as “dealing with threats to information assets.”

PCI DSS Requirements Covered

Vulnerability management controls are present in PCI DSS Requirements 5, 6, and 11.
■ PCI Requirement 5 “Use and regularly update anti-virus software or programs” covers antimalware measures; these are only tangentially related to what is commonly seen as vulnerability management, but they help deal with the impact of vulnerabilities.
■ PCI Requirement 6 “Develop and maintain secure systems and applications” covers a broad range of application security subjects, application vulnerability scanning, secure software development, etc.
■ PCI Requirement 11 “Regularly test security systems and processes” covers a broad range of security testing, including network vulnerability scanning by approved scanning vendors (ASVs), internal scanning, and other requirements. We will focus on Requirements 11.2 and 11.3 in this chapter.

Vulnerability Management in PCI

Before we start our discussion of the role of vulnerability management for PCI compliance, we need to briefly discuss what is commonly covered under vulnerability management in the domain of information security. Some industry pundits have proclaimed that vulnerability management is simple: just patch all those pesky software problems and you are done. Others struggle with it because the scope of platforms and applications to patch, and of other weaknesses to rectify, is out of control in most large organizations with complex networks and large numbers of different products. The problems move from intractable to downright scary when you consider all the Web applications being developed in the world of Web 2.0, including all the in-house development projects, outsourced development efforts, partner development, etc. Such applications may never get that much-needed patch from the vendor because you are simultaneously a user and a vendor; a code change by your own engineers might be the only way to solve the issue.
Thus, vulnerability management is not the same as just keeping your systems patched; it expands into software security and application security, secure development practices, and other adjacent domains. If you are busy every first Tuesday when Microsoft releases its batch of patches, but not doing anything to eliminate a broad range of application vulnerabilities during the other 29 days in a month, you are not managing your vulnerabilities efficiently, if at all. Vulnerability management was a mix of technical and nontechnical processes even at the time when patching was most of what organizations needed to do to stay secure; nowadays, it is even more of a process that touches an even bigger part of your organization: not only the network, system, and desktop groups, but also your developers and development partners, and possibly even individual business units deploying their own, possibly “in the cloud,” applications (it is not unheard of for such applications to handle or contain payment card data).
Clearly, vulnerability management is not only about technology and “patching the holes.” As everybody in the security industry knows, technology for discovering vulnerabilities is getting better every day; for instance, Qualys vulnerability scanning today claims 99.997 percent accuracy [1]. Moreover, the mission of vulnerability scanning is broadening; the same technology is also used to detect configuration errors and nonvulnerability security issues. For instance, a fully patched system is still highly vulnerable if it has a blank administrator or root password, even though no patch is missing. The other benefit derived from vulnerability management is the detection of “rogue” hosts, which are sometimes deployed by business units, sit outside the control of your IT department, and thus might not be deployed in adherence with PCI DSS requirements. One of the basic tenets of adhering to the PCI standard is to limit the scope of PCI by strictly segmenting and controlling the cardholder data environment (CDE). Proper implementation of internal and external vulnerability scanning can assist in maintaining a pristine CDE.
As a result, it would be useful to define vulnerability management as managing the lifecycle of processes and technologies needed to discover and then reduce (or, ideally, remove) the vulnerabilities in software required for the business to operate, and thus, bring the risk to business information to the acceptable threshold.
Network vulnerability scanners can detect vulnerabilities from the network side with high accuracy, and from the host side with even better accuracy. Host-side detection is typically accomplished via internal scans that use credentials to log into the systems (so-called “authenticated” or “trusted” scanning), so configuration files, registry entries, file versions, etc. can be read, thus increasing the accuracy of results. Such scanning is performed only from inside the network, not from the Internet.

Note
Some vulnerability scanning tools are capable of running trusted or authenticated scans, where the scanner actually logs into the system, just like a regular user would, and then performs the search for vulnerabilities. If you can successfully run a trusted or authenticated scan from the Internet and discover configuration issues and vulnerabilities, you have a serious problem, because no one should be able to log into hosts or network devices directly from the Internet side. Believe it or not, this has happened in real production environments subject to PCI DSS! Also, PCI ASV scanning procedures prohibit authenticated scanning when performing ASV scan validation. Authenticated or trusted scans are extremely useful for PCI DSS compliance and security, but they should always be performed from inside the network perimeter.
However, many organizations that implemented periodic vulnerability scanning have discovered that the volumes of data far exceed their expectations and abilities. A quick scan-then-fix approach turns into an endless wheel of pain; this is obviously more dramatic for PCI DSS external scanning, because you have no choice but to fix the vulnerabilities that lead to validation failure (we will review the exact criteria for failure below). Many free and low-cost commercial vulnerability scanners suffer from this more than their more advanced brethren, thus exacerbating the problem for price-sensitive organizations such as smaller merchants. Using vulnerability scanners efficiently presents other challenges, including having network visibility of the critical systems, perceived or real impact on network bandwidth, as well as system stability. Overall, it is clear that vulnerability management involves more process than technology and should be based on the overall risk, not simply on the volume of incoming scanner data.

Stages of Vulnerability Management Process

Let's outline some critical stages of the vulnerability management process. Even though Gartner analysts have defined [2] the vulnerability management process as including the steps below, vulnerability management really starts at software creation, when vulnerabilities are actually introduced. Thus, investing in secure coding practices (prescribed in Requirement 6) helps make the vulnerability management life cycle much less painful; it is the only choice for applications created within your organization. The following steps are commonly viewed as composing the vulnerability management process [2]:
1. Policy definition is the first step and includes defining the desired state for device configurations, user identity, and resource access.
2. Baseline your environment to identify vulnerabilities and policy compliance.
3. Prioritize mitigation activities based on external threat information, internal security posture, and asset classification.
4. Shield the environment, prior to eliminating the vulnerability, by using desktop and network security tools.
5. Mitigate the vulnerability and eliminate the root causes.
6. Maintain and continually monitor the environment for deviations from policy and to identify new vulnerabilities [2].

Policy Definition

Indeed, the vulnerability management process starts with policy definition, covering the organization's assets, such as systems and applications and their users, as well as partners, customers, and whoever else touches those resources. Such documents and the accompanying detailed security procedures define the scope of the vulnerability management effort and postulate a “known good” state of those IT resources. Policy creation should involve business and technology teams, as well as the senior management who will be responsible for overall compliance. PCI DSS requirements directly affect such policy documents and mandate their creation (see Requirement 12, which states that one needs to “maintain a policy that addresses information security”). For example, marking the assets that are in scope for PCI compliance is also part of this step.

Data Acquisition

The data acquisition process comes next. A network vulnerability scanner or an agent-based host scanner is a common choice. Both excellent freeware and commercial solutions are available. In addition, emerging standards for vulnerability information collection, such as OVAL (http://oval.mitre.org), established standards for vulnerability naming, such as CVE (http://cve.mitre.org), and vulnerability scoring, such as the Common Vulnerability Scoring System ([CVSS], www.first.org/cvss), can help provide a consistent way to encode vulnerabilities, weaknesses, and organization-specific policy violations across the popular computing platforms. Moreover, an effort by the US National Institute of Standards and Technology called the Security Content Automation Protocol (SCAP) is underway to combine the above standards into a joint standard bundle to enable more automation of vulnerability management. See http://scap.nist.gov for more details on SCAP.
Scanning for compliance purposes is somewhat different from scanning for remediation. Namely, the PCI DSS reports that a Qualified Security Assessor (QSA) will ask for to validate your compliance should show the list of systems that were scanned for all PCI-relevant vulnerabilities, as well as an indication that no system has any of the PCI-relevant vulnerabilities. Scanning tools also provide support for Requirements 1 and 2 (secure system configurations) and many other requirements described below. This shows that while “scanning for remediation” only requires a list of vulnerable systems with their vulnerabilities, “scanning for compliance” also calls for having a list of systems found not to be vulnerable.
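The distinction between the two report types can be shown with a short, hypothetical sketch (the function and field names are our own, not from any PCI document): a remediation report needs only the vulnerable hosts, while a compliance report must also enumerate the scanned hosts found clean.

```python
def compliance_report(scanned_hosts, findings):
    """Split scanned hosts into vulnerable and clean.

    scanned_hosts: list of host identifiers that were actually scanned.
    findings: dict mapping host -> list of PCI-relevant vulnerability IDs.
    'Scanning for compliance' must also list systems found NOT vulnerable,
    which is the 'clean' list below.
    """
    vulnerable = {h: findings[h] for h in scanned_hosts if findings.get(h)}
    clean = [h for h in scanned_hosts if not findings.get(h)]
    return {"vulnerable": vulnerable, "clean": clean}
```

A remediation workflow would consume only the "vulnerable" portion; a QSA-facing report needs both halves to demonstrate coverage.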

Prioritization

The next phase, prioritization, is a key phase in the entire process. It is highly likely that, even with a well-defined specific scan policy (derived from PCI requirements, of course) and a quality vulnerability scanner, the amount of data on various vulnerabilities from a large organization will be enormous. Even looking only at the in-scope systems might lead to such a data deluge; it is not uncommon for an organization with a flat network to have thousands of systems in scope for PCI DSS. No organization will likely “fix” all the problems, especially if their remediation is not mandated by explicit rules; some kind of prioritization will occur. Various estimates indicate that even applying a periodic batch of Windows patches (“black” Tuesday) often takes longer than the period between patch releases (longer than 1 month). Accordingly, there is a chance that the organization will not finish the previous patching round before the next one rushes in. To intelligently prioritize vulnerabilities for remediation, you need to take into account various factors about your own IT environment as well as the outside world. Ideally, such prioritization should be based not only on PCI DSS but also on the organization's view of and approach to information risk. Also, even when working within the PCI DSS scope, it makes sense to fix vulnerabilities with a higher risk to card data first, even if this is not mandated by the PCI DSS standard. A recent document from the PCI Council, “Prioritized Approach for PCI DSS 1.2” [3], mandates that a passing ASV scan be obtained in Phase 2 of the six phases of PCI DSS implementation, next only to removing the storage of prohibited data.
These factors include the following:
■ Specific regulatory requirement: fix all medium- and high-severity vulnerabilities as indicated by the scanning vendor; fix all vulnerabilities that can lead to Structured Query Language (SQL) injection, cross-site scripting attacks, etc.
■ Vulnerability severity for the environment: fix all other vulnerabilities on publicly exposed and then on other in-scope systems.
■ Related threat information and threat relevance: fix all vulnerabilities on the frequently attacked systems.
■ Business value and role information about the target system: address vulnerabilities on high-value critical servers.
To formalize such prioritization, one can use the CVSS (www.first.org/cvss), which takes into account various vulnerability properties, such as priority, exploitability, and impact, as well as multiple local, site-specific properties. The CVSS scheme offers a uniform way of scoring vulnerabilities on a scale from 0 to 10. PCI DSS mandates the use of CVSS by ASVs; moreover, PCI validation scanning prescribes that all vulnerabilities with a score equal to or higher than 4.0 must be fixed to pass the scan. The National Vulnerability Database (NVD), located at http://nvd.nist.gov, provides CVSS scores for many publicly disclosed vulnerabilities (see Fig. 8.1 below).
[Figure 8.1: National Vulnerability Database (image not available)]
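The CVSS 4.0 pass/fail threshold described above can be expressed as a small filter. This is an illustrative sketch, not an official ASV implementation; real ASV scans also apply exceptions and special-case rules beyond the raw score:

```python
PCI_FAIL_THRESHOLD = 4.0  # CVSS base score at or above which an ASV scan fails

def asv_scan_passes(findings):
    """Apply the CVSS-based pass/fail rule described in the text.

    findings: list of (host, vuln_id, cvss_score) tuples.
    Returns (passed, failing_findings): the scan passes only if no
    finding scores 4.0 or higher.
    """
    failing = [f for f in findings if f[2] >= PCI_FAIL_THRESHOLD]
    return (len(failing) == 0, failing)
```

Sorting the failing list by score descending then gives a simple remediation priority order within the in-scope systems.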

Mitigation

The next phase, mitigation, is important in many environments where immediate patching or reconfiguration is impossible, such as on a critical server running unusual or custom-built applications. Still, in some cases, when a worm is out or a novel attack is being seen in similar environments, protecting such a system becomes unavoidable, and one immediately needs to do something to mitigate the vulnerability temporarily. This step might be performed by a host or network intrusion prevention system (IPS); sometimes even a firewall blocking a network port will do. The important question here is choosing the best mitigation strategy, one that does not create additional risk by blocking legitimate business transactions. In the case of Web application vulnerabilities, a separate dedicated device, a Web application firewall, needs to be deployed in addition to traditional network security safeguards such as firewalls, filtering routers, IPSs, etc.
In this context, using antivirus and intrusion prevention technologies might be seen as part of vulnerability mitigation because these technologies help protect companies from the vulnerability exploitation (either by malware or human attackers).
Ideally, all vulnerabilities that impact card data need to be fixed, that is, patched or remediated in some other way, in the order prescribed by the above prioritization procedure and taking into account the steps taken to temporarily mitigate the vulnerability. In a large environment, it is not simply a question of “let's go patch the server.” Often, a complicated workflow with multiple approval points and regression testing on different systems is required.
To make sure that vulnerability management becomes an ongoing process, an organization should monitor the vulnerability management process on an ongoing basis. This involves looking at the implemented technical and process controls aimed at decreasing risk. Such monitoring goes beyond vulnerability management into other security management areas. It is also important to be able to report to senior management about the progress.
It should be added that vulnerability management is not a panacea even after all the “known” vulnerabilities are remediated. “Zero-day” attacks, which use vulnerabilities with no resolution publicly available, will still be able to cause damage. Such cases need to be addressed by using the principle of “defense in-depth” during the security infrastructure design.
Now, we will walk through all the requirements in PCI DSS guidance that are related to vulnerability management. We should again note that vulnerability management guidance is spread across Requirements 5, 6, and 11.

Requirement 5 Walk-Through

While antivirus solutions have little to do with finding and fixing vulnerabilities, in PCI DSS they are covered under the broad umbrella definition of vulnerability management. One might argue that antivirus solutions help when a vulnerability is present and is being exploited by malicious software such as a computer virus, worm, Trojan horse, or spyware. Thus, antivirus tools help mitigate the consequences of exploited vulnerabilities in some scenarios.
Requirement 5.1 mandates the organization to “use and regularly update antivirus software or programs.” Indeed, many antivirus vendors moved to daily (and some to hourly) updates of their virus definitions. Needless to say, virus protection software is next to useless without an up-to-date malware definition set.
PCI creators wisely chose to avoid the trap of saying “antivirus must be on all systems,” and instead chose to state that one needs to “Deploy antivirus software on all systems commonly affected by malicious software (particularly personal computers and servers).” This ends up causing a ton of confusion, and in many cases, companies fight deploying antivirus even when the operating system manufacturer recommends it (for example, Apple's OS X). A good rule of thumb is to deploy it on all Microsoft Windows machines, and on any desktop machine whose users regularly access untrusted networks (like the Internet), wherever an antimalware solution is available for the platform.
Subsection 5.1.1 states that one needs to “Ensure that all antivirus programs are capable of detecting, removing, and protecting against all known types of malicious software.” It spells out the detection, protection, and removal of various types of malicious software, knowing full well that such protection is highly desirable but not really achievable, given the current state of malware research. In fact, recent evidence suggests that guaranteeing an antivirus product will protect you from all malware is becoming less certain every day, as more backdoors, Trojans, rootkits, and other forms of malware enter the scene where viruses and worms once reigned supreme.
Finally, Section 5.2 drives the point home: “ensure that all antivirus mechanisms are current, actively running, and capable of generating audit logs.” This combines three different requirements, which are sometimes overlooked by organizations that deployed antivirus products. First, they need to be current – updated as frequently as their vendor is able to push updates. Daily, not weekly or monthly, is a standard now. Second, if you deploy a virus-protection tool and then the virus or even an “innocent” system reconfiguration killed or disabled the security tool, no protection is present. Thus, the running status of security tools needs to be monitored. Third, as mentioned in Chapter 9, “Logging Events and Monitoring the Cardholder Data Environment,” audit logs are critical for PCI compliance. This section reminds PCI implementers that antivirus tools also need to generate logs, and such logs need to be reviewed in accordance with Requirement 10. Expect your Assessor to ask for logs from your antimalware solution to substantiate this requirement.
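The three conditions of Section 5.2 (current, actively running, capable of generating audit logs) can be captured in a simple, vendor-neutral check. The function and its inputs are illustrative assumptions: a real check would query a specific AV product's management API or agent status, which varies by vendor.

```python
import datetime as dt

def check_av_posture(definitions_updated, process_running, log_entries,
                     now=None, max_age_days=1):
    """Evaluate the three Requirement 5.2 conditions described above.

    definitions_updated: datetime of the last signature update.
    process_running: whether the AV engine is currently active.
    log_entries: recent AV audit log records (any list-like).
    Returns a list of issues; an empty list means all three checks pass.
    max_age_days=1 reflects the 'daily, not weekly' standard in the text.
    """
    now = now or dt.datetime.now()
    issues = []
    if (now - definitions_updated).days > max_age_days:
        issues.append("definitions stale")
    if not process_running:
        issues.append("AV engine not running")
    if not log_entries:
        issues.append("no audit logs generated")
    return issues
```

Running such a check from a monitoring system, rather than trusting a one-time deployment, addresses the scenario mentioned above where a virus or an “innocent” reconfiguration silently disables the protection.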

What to Do to Be Secure and Compliant?

Requirement 5 offers simple and obvious action items:
1. Deploy antivirus software on in-scope systems, wherever such software is available and wherever the system can suffer from malware. Free antivirus products can be downloaded from several vendors such as AVG (go to http://free.avg.com to get the software) or Avast (go to www.avast.com to get it). It is reported that the next version of Windows will include its own antimalware defenses. This will help with Requirement 5.1.
2. Configure the obtained antimalware software to update at least daily. Please forget the security advice from the 1990s when weekly updates were seen as sufficient. Daily is the minimum acceptable update frequency. This will deal with Requirement 5.1.1.
3. Verify that your antivirus software can generate audit logs. This will take care of Requirement 5.2. Please refer to Chapter 9, “Logging Events and Monitoring the Cardholder Data Environment,” to learn how to deal with all the logs, including antivirus logs.

Note
For example, Symantec AntiVirus will log all detections by default; there is no need to “enable logging.” To preserve the logs, please make sure that the setting shown in Fig. 8.2 allows your centralized log collection system to collect them before they are deleted.
[Figure 8.2: Antivirus Log Setting (image not available)]

Requirement 6 Walk-Through

Another requirement of PCI covered under the vulnerability management umbrella is Requirement 6, which covers the need to “develop and maintain secure systems and applications.” Thus, it touches vulnerability management from another side: making sure that those pesky flaws and holes never appear in software in the first place. At the same time, this requirement covers the need to plan and execute a patch-management program to assure that, once discovered, the flaws are corrected via software vendor patches or other means. In addition, it deals with a requirement to scan applications, especially Web applications, for vulnerabilities.
Thus, one finds three types of requirements in Requirement 6: those that help you patch the holes in commercial applications, those that help you prevent holes in in-house developed applications, and those that deal with verifying the security of Web applications (Requirement 6.6). The first type states that one must “Ensure that all system components and software have the latest vendor-supplied security patches installed” [4]. The prescribed way of dealing with vulnerabilities in custom, homegrown applications lies in the careful application of secure coding techniques and in incorporating them into a standard software-development lifecycle. Specifically, the document says “for in-house developed applications, numerous vulnerabilities can be avoided by using standard system-development processes and secure coding techniques.” Finally, it addresses the need to test the security of publicly exposed Web applications by mandating that “for public-facing Web applications, address new threats, and vulnerabilities on an ongoing basis.”
Apart from requiring that organizations “ensure that all system components and software have the latest vendor-supplied security patches installed,” Requirement 6.1 attempts to settle a long-running debate in the security industry between the need for prompt patching in the face of an imminent threat and the need for careful patch testing. It takes the simplistic approach of saying that one must “install relevant security patches within one month of release.” Such an approach, while obviously “PCI-compliant,” might sometimes be problematic: one month is way too long in the case of a worm outbreak (all vulnerable systems will be firmly in the hands of the attackers), and on the other hand, too short in the case of complicated mission-critical systems and overworked IT staff. Therefore, after PCI DSS was released, a later clarification was added that explicitly mentions a risk-based approach. Specifically, “An organization may consider applying a risk-based approach to prioritize their patch installations”; this can allow an organization to get an extension to the above one-month deadline “to ensure high-priority systems and devices are addressed within one month, and less-critical devices and systems are addressed within 3 months.”
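The risk-based timing just quoted can be sketched as a small deadline calculator. The 30- and 90-day figures are our approximation of the standard's “one month” and “3 months” language, and the high/low priority flag stands in for whatever asset classification your organization actually uses:

```python
import datetime as dt

def patch_deadline(release_date, high_priority):
    """Compute a patch-installation deadline per the risk-based clarification.

    High-priority systems get roughly one month (30 days) from patch
    release; less-critical systems get roughly three months (90 days).
    """
    days = 30 if high_priority else 90
    return release_date + dt.timedelta(days=days)
```

Feeding every in-scope asset through such a rule, with its classification from the prioritization step, turns the clarification into a concrete, trackable schedule rather than a vague intention.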
Further, Requirement 6.2 prescribes “establishing a process to identify newly discovered security vulnerabilities.” Note that this doesn't mean “scanning for vulnerabilities” in your environment, but rather watching for newly discovered vulnerabilities via vulnerability alert services. Some of these are free, such as the one from Secunia (see www.secunia.com), whereas others, such as VeriSign's iDefense Threat Intelligence Service, can be highly customized to only send alerts applicable to your environment and may also cover vulnerabilities that are not yet public; such services are not free, but may well be worth the money paid for them. One can also monitor the public mailing lists for vulnerability information (BugTraq is a primary example: www.securityfocus.com/archive/1), which usually requires a significant time commitment.

Note
If you decide to use public mailing lists, you need to have a list of all operating systems and commercial software that is in-scope. You may want to set up a specific mailbox that multiple team members have access to, so new vulnerabilities are not “missed” when someone is out of the office. Checking these lists as part of your normal Security Operation Center (SOC) analyst duties can help ensure this activity regularly takes place. In fact, this is even explicitly mandated in PCI DSS, Requirement 12.2: “Develop daily operational security procedures that are consistent with requirements in this specification.” Even if your organization is small and does not have a SOC, checking the lists and services frequently will help satisfy this requirement.
Other aspects of your vulnerability management program apply to securing the software developed in-house. Section 6.3 states that one needs to “develop software applications based on industry best practices and incorporate information security throughout the software-development life cycle.” The unfortunate truth, however, is that there is no single authoritative source for such security “best practices” and, at the same time, current software “industry best practices” rarely include “information security throughout the software-development life cycle.” Here are some recent examples of projects that aim at standardizing security programming best practices, which are freely available for download and contain detailed technical guidance:
■ BSIMM “The Building Security In Maturity Model”; see www.bsi-mm.com/
■ OWASP “Secure Coding Principles”; see www.owasp.org/index.php/Secure_Coding_Principles
■ SANS and MITRE “CWE/SANS TOP 25 Most Dangerous Programming Errors”; see www.sans.org/top25errors/ or http://cwe.mitre.org/top25/
■ SAFECode “Fundamental Practices for Secure Software Development”; see www.safecode.org/
Detailed coverage of secure programming topic goes far beyond the scope of this book.
In detail, Section 6.3 goes over software development and maintenance practices. Requirement 6.3 mandates that for PCI compliance, an organization must “develop software applications in accordance with PCI DSS (for example, secure authentication and logging) and based on industry best practices, and incorporate information security throughout the software development lifecycle.” This guidance is obviously quite unclear and should be clarified in future versions of the PCI DSS standard; as of today, the burden of making the judgment call is on the QSAs, who are not always experts in the secure application development lifecycle.
Let's review some of the subrequirements of 6.3, which are clear and specific, and which leave the overall theme of “following industry best practices” to your particular QSA.
Specifically, 6.3.1 covers the known risky areas in its “Testing of all security patches, and system and software configuration changes before deployment” guidance, namely, input validation (6.3.1.1), error handling (6.3.1.2), storage of encryption keys (6.3.1.3), and others.
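Two of the coding practices touched on here, whitelist input validation and parameterized queries (which prevent the SQL injection attacks mentioned earlier in the chapter), can be illustrated with a short sketch using Python's standard sqlite3 module. The regular expression, length limit, and table schema are made-up examples, not PCI-mandated values:

```python
import re
import sqlite3

# Whitelist validation: accept only characters we expect in a name field.
NAME_RE = re.compile(r"^[A-Za-z][A-Za-z .'-]{0,49}$")

def validate_name(value):
    """Reject any input containing characters outside the whitelist."""
    return bool(NAME_RE.fullmatch(value))

def find_customer(conn, name):
    """Parameterized query: the driver handles escaping of `name`,
    so attacker-supplied quotes cannot alter the SQL statement."""
    return conn.execute(
        "SELECT id FROM customers WHERE name = ?", (name,)
    ).fetchall()
```

Note that the two defenses are complementary: validation rejects malformed input at the boundary, while parameterization ensures that even input that slips through cannot change the meaning of the query.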
The next one is simply security common sense (Requirement 6.3.2): “Separate development/test and production environments.” This is very important because some recent attacks have penetrated publicly accessible development, testing, or staging sites.
A key requirement is 6.3.4, which states that “production data (live primary account numbers [PANs]) are not used for testing or development.” It is among the most critical and also the most commonly violated, with the most disastrous consequences. Many companies have found their data stolen because developers moved it from the more secure production environment to a much less-protected test environment, as well as onto mobile devices (laptops), to remote offices, and so on.
Conversely, contaminating the production environment with test code, utilities, and accounts is also critical (and has been known to lead to just as disastrous compromises of production data); it is covered in Sections 6.3.5 and 6.3.6, which regulate the removal of “test data and accounts” and of prerelease “custom application accounts” from “custom code.” Indeed, recent attackers have focused on looking for leftover admin logins, test code, hard-coded passwords, and the like.
The next requirement is absolutely key to application security (6.3.7): “Review of custom code prior to release to production or customers to identify any potential coding vulnerability.” Please also pay attention to the clarification to this requirement: “This requirement for code reviews applies to all custom code (both internal and public-facing), as part of the system development life cycle required by PCI DSS Requirement 6.3.” This mandates application security code review for both internal and public-facing in-scope applications.
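Part of such a code review can be automated. The sketch below is a hypothetical Python fragment (the patterns and function name are ours, not from any PCI document) showing the kind of check a review pipeline might run to flag hard-coded credentials in custom code; it complements, not replaces, manual review:

```python
import re

# Patterns that often indicate hard-coded credentials; a real code review
# combines simple automated checks like this with manual inspection.
SUSPICIOUS = [
    re.compile(r'password\s*=\s*["\'][^"\']+["\']', re.IGNORECASE),
    re.compile(r'(api[_-]?key|secret)\s*=\s*["\'][^"\']+["\']', re.IGNORECASE),
]

def flag_hardcoded_secrets(source: str):
    """Return (line_number, line) pairs that look like embedded credentials."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in SUSPICIOUS):
            findings.append((lineno, line.strip()))
    return findings

sample = 'db_user = "app"\npassword = "s3cret!"\nprint("hello")'
print(flag_hardcoded_secrets(sample))
```

A check like this catches exactly the “leftover hard-coded password” class of problem that attackers hunt for in production code.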
Further, Section 6.4 covers a critical area of IT governance: change control. Change control can be considered a vulnerability management measure because unpredicted, unauthorized changes often lead to opening vulnerabilities in both custom and off-the-shelf software and systems. It states that one must “follow change control procedures for all system and software configuration changes,” and even helps the organization define what the proper procedures must include:
■ 6.4.1: Documentation of impact
■ 6.4.2: Management sign-off by appropriate parties
■ 6.4.3: Testing of operational functionality
■ 6.4.4: Back-out procedures
The simple way to remember is: if you change something somewhere in your IT environment, document it. Whether it is a bound notebook (small company) or a change control system (large company) is secondary, leaving a record is primary.
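To illustrate, the four elements above fit into even a minimal record structure. The following Python sketch is hypothetical (the field names and helper are ours), showing one way a small shop might leave that record before graduating to a full change control system:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ChangeRecord:
    """Minimal change-control record covering PCI DSS 6.4.1 through 6.4.4."""
    description: str
    impact: str            # 6.4.1: documentation of impact
    approved_by: str       # 6.4.2: management sign-off
    test_result: str       # 6.4.3: testing of operational functionality
    backout_plan: str      # 6.4.4: back-out procedures
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def record_change(log: list, **fields) -> ChangeRecord:
    rec = ChangeRecord(**fields)
    log.append(asdict(rec))  # in practice, persist to a change-control system
    return rec

changelog = []
record_change(changelog,
              description="Open TCP/443 on DMZ firewall",
              impact="Exposes payment app to Internet",
              approved_by="J. Smith (IT manager)",
              test_result="Verified in staging",
              backout_plan="Remove rule, reload previous config")
print(len(changelog))
```

Whatever the storage medium, the point is that each of the four mandated elements is captured for every change.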
To put this into context, most other IT governance frameworks, such as COBIT (www.isaca.org/cobit) or ITIL (www.itil.co.uk/), cover change control as one of the most significant areas that directly affect system security. Indeed, having documentation and sign-off for changes and an ability to “undo” things will help achieve both security and operational goals by reducing risk and striving toward operational excellence. Please refer to Chapter 14, “PCI and Other Laws, Mandates, and Frameworks,” to learn how to combine multiple compliance efforts.
Another critical area of PCI DSS covers Web application security; it is contained in Sections 6.5 and 6.6 that go together when implementing compliance controls.

Web-Application Security and Web Vulnerabilities

Section 6.5 covers Web applications because they are the type of application most likely to be developed in-house and, at the same time, most likely to be exposed to the hostile Internet (a killer combo: likely less-skilled programmers facing a larger number of malicious attackers!). Fewer organizations choose to write their own Windows or Unix software from scratch compared to those creating or customizing Web application frameworks.
Requirement 6.5 points toward the Open Web Application Security Project (OWASP) as the main source of secure Web application programming guidance. The OWASP “Secure Coding Principles” document mentioned above covers the issues leading to critical Web application vulnerabilities. Such vulnerabilities are covered in another OWASP document called Top Ten Web Application Security Issues (www.owasp.org/index.php/OWASP_Top_Ten_Project). In addition, the requirement calls for organizations to “review custom application code to identify coding vulnerabilities.” Detailed coverage of secure Web application programming, Web application security, and methods for discovering Web site vulnerabilities goes well beyond the scope of this book; see Hacking Exposed Web Applications, Second Edition, and HackNotes™ Web Security Portable Reference for more details. OWASP has also launched a project to provide additional guidance on satisfying Web application security requirements for PCI (see “OWASP PCI” online).
PCI DSS goes into a great level of detail here, covering common types of coding-related weaknesses in Web applications. These are as follows:
■ 6.5.1: Cross-site scripting (XSS)
■ 6.5.2: Injection flaws, particularly SQL injection; also consider LDAP and Xpath injection flaws and other injection flaws
■ 6.5.3: Malicious file execution
■ 6.5.4: Insecure direct object references
■ 6.5.5: Cross-site request forgery (CSRF)
■ 6.5.6: Information leakage and improper error handling
■ 6.5.7: Broken authentication and session management
■ 6.5.8: Insecure cryptographic storage
■ 6.5.9: Insecure communications
■ 6.5.10: Failure to restrict URL access
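To make the first item on this list concrete, the hedged Python sketch below (the function names are ours) contrasts a 6.5.1-style cross-site scripting flaw with its standard fix, output encoding:

```python
import html

def render_comment_unsafe(comment: str) -> str:
    # Vulnerable: user input is placed into HTML verbatim (XSS, 6.5.1).
    return "<p>" + comment + "</p>"

def render_comment_safe(comment: str) -> str:
    # Output encoding neutralizes any injected markup.
    return "<p>" + html.escape(comment) + "</p>"

payload = '<script>alert("xss")</script>'
print(render_comment_unsafe(payload))  # script tag survives, executes in the browser
print(render_comment_safe(payload))    # escaped entities render as harmless text
```

The same pattern, never letting untrusted input reach an interpreter (HTML, SQL, LDAP, XPath) without encoding or parameterization, underlies the fixes for most of the flaws in the list above.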
In addition to secure coding to prevent vulnerabilities, organizations might need to take care of existing deployed applications by looking into Web application firewalls and Web application scanning (WAS). An interesting part of Requirement 6.6 is that PCI DSS recommends either a vulnerability scan followed by remediation (“Reviewing public-facing Web applications via manual or automated application vulnerability security assessment tools”) or a Web application firewall (“Installing a Web application firewall in front of public-facing Web applications”), completely ignoring the principle of layered defense, or defense-in-depth. In reality, deploying both is highly recommended for effective protection of Web applications.

WAS

Before progressing with the discussion of WAS, we need to remind our readers that cross-site scripting and SQL injection Web site vulnerabilities account for a massive percentage of card data loss. These same types of Web vulnerabilities are also very frequently discovered during PCI DSS scans. For example, Qualys vulnerability research indicates that cross-site scripting is one of the most commonly discovered vulnerabilities seen during PCI scanning; it was also specifically called out by name in one PCI Council document [4] as a vulnerability that leads to PCI validation failure.
You need to ensure that whichever solution you use covers the current OWASP Top Ten list. This list changes over time, so you also need to ensure that the Web application security scanner (WAS) you are using keeps up with the changes. Some WAS products will need to add or modify detections to continue to meet this requirement; if you are using a more full-featured WAS, you may need to modify the scan options from time to time as the Top Ten list changes.
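One way to track this is a simple coverage comparison. The sketch below is purely illustrative (the category labels are ours; take the real list from owasp.org and your scanner's own documentation):

```python
# Hypothetical shorthand labels for the OWASP Top Ten categories listed above;
# refresh this set whenever OWASP publishes a new edition of the list.
OWASP_TOP_TEN = {
    "xss", "injection", "malicious_file_execution",
    "insecure_direct_object_reference", "csrf", "information_leakage",
    "broken_authentication", "insecure_crypto_storage",
    "insecure_communications", "url_access",
}

def coverage_gaps(scanner_checks: set) -> set:
    """Top Ten categories the scanner does not (yet) claim to detect."""
    return OWASP_TOP_TEN - scanner_checks

# Feed in the categories your WAS vendor documents as covered.
print(coverage_gaps({"xss", "injection", "csrf"}))
```

Any nonempty result is a conversation to have with your scanner vendor, or a reason to adjust your scan options.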
There are many commercial and even some free WAS solutions available. Common examples of free or open-source tools are as follows:
■ Nessus, a free vulnerability scanner, now has some detection of Web application security issues; see www.nessus.org.
■ WebScarab, direct from the OWASP Project (see www.owasp.org/index.php/Category:OWASP_WebScarab_Project), is also a must for your assessment efforts. A new version, WebScarab NG, is being created as well (see www.owasp.org/index.php/OWASP_WebScarab_NG_Project).
■ w3af is a Web Application Attack and Audit Framework (see http://w3af.sourceforge.net/), which can be used as well.
■ Wikto, even if a bit dated, is still useful (see www.sensepost.com/research/wikto/).
■ Ratproxy is not a scanner, but a passive discovery and assessment tool (see http://code.google.com/p/ratproxy/).
■ Another classic passive assessment tool is Paros proxy (http://sourceforge.net/projects/paros).
■ Samurai Web Testing Framework, a whole CD of Web testing tools from InGuardians, can be found at http://samurai.inguardians.com/
Commercial tool vendors include Qualys, IBM, HP, and others. Apart from procuring the above tools and starting to use them on your public Web applications, it is worthwhile to learn a few things about their effective use. We will present those below and highlight their use for card data security.
First, if you are familiar with ASV network scanning (covered below in the section on Requirement 11), you need to know that Web application security scanning is often more intrusive than the network-level vulnerability scanning performed by an ASV for external scan validation. For example, testing for SQL injection or cross-site scripting often requires actual attempts to inject SQL code into a database or a script into a Web site. Doing so may cause some databases or applications to hang, or cause spurious entries to appear in your Web application.
Just as with network scanning, application scanners may need to perform authentication to get access to more of your application. Many flaws that allow a regular user to become an application administrator can only be discovered via a trusted scan. If the main page on your Web application has a login form, you will need to perform a trusted scan, i.e., one that logs in. In addition to finding flaws where a user can become an admin, you also need to ensure that customers cannot intentionally, or inadvertently, traverse within the Web application to see other customers' data. This has happened many times in real Web applications. See Fig. 8.3, which shows an example of such a vulnerability.
Figure 8.3
User Privilege Violation Vulnerability in NVD
Also, depending on how your Web site handles authentication, you may need to log in manually first with the account you will use for scanning, grab the cookie by using a tool like Paros or WebScarab, and then load it into the scanner. Depending on session time-outs, you may need to perform this activity just before the scan. In this case, do not plan on being able to schedule scans and have them run automatically.
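The cookie-grabbing step can be partially scripted. The following Python sketch is an assumption-laden illustration (the helper name and session cookie names are ours; where the resulting value goes depends entirely on your scanner's configuration options): it extracts the session cookie from a captured login response's Set-Cookie header so it can be pasted into the scanner.

```python
from http.cookies import SimpleCookie

def session_cookie_header(set_cookie: str,
                          wanted=("JSESSIONID", "PHPSESSID")) -> str:
    """Extract the session cookie from a login response's Set-Cookie header
    and format it for a scanner's cookie/custom-header option."""
    jar = SimpleCookie()
    jar.load(set_cookie)  # parses cookie attributes (Path, HttpOnly, ...)
    parts = [f"{name}={morsel.value}"
             for name, morsel in jar.items() if name in wanted]
    return "; ".join(parts)

# After logging in manually (or capturing the exchange with Paros/WebScarab),
# feed the observed Set-Cookie value in:
print(session_cookie_header("JSESSIONID=abc123; Path=/; HttpOnly"))
```

Because the session still expires on the server's schedule, scripting this step does not remove the need to run it just before the scan.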
In addition, WAS requires a more detailed knowledge of software vulnerabilities and attack methodologies to allow for the correct interpretation of results than network-based or “traditional” vulnerability scanning does. Remember to always do research in a laboratory environment, not connected to your corporate environment!

Warning
It is perfectly reasonable to use an advanced Web application security scanner to scan applications deployed in a production environment, but only after you have tried it more than a few times in the laboratory.
When Web farms and Web portals are in the mix, scoping can become somewhat cloudy. Sometimes, you may have a Web portal that will send all transactions involving the transmission or processing of credit-card data to different systems. It is likely that the entire cluster will be in-scope for PCI in this case.
Finally, WAS (mandated in Requirement 6.6) is not a substitute for a Web application penetration test (mandated in Requirement 11.3). Modern Web application scanners can do a lot of poking and probing, but they cannot completely perform the tasks performed by a human attacking a Web application. For example, fully automated discovery of cross-site request forgery flaws is not possible using automated scanners today.

Warning
Network vulnerability scanning (mandated in Requirement 11.2) and Web application security testing (mandated in Requirement 6.6) have nothing to do with each other. Please don't confuse them! Network vulnerability scanning is mostly about looking for security issues in operating systems and off-the-shelf applications, such as Microsoft Office or Apache Web server, while Web application security testing typically looks for security issues in custom and customized Web applications. Simply scanning your Web site with a network vulnerability scanner does not satisfy Requirement 6.6 at all.

Please Remember
Scanning your Web site with a network vulnerability scanner is not Web application security assessment!
Let's go through a complete example of performing a PCI DSS Web application scan using Qualys as an example.

PCI Web Application Scan

Let's go through a complete scan from its initiation to report analysis.
First, we need to define the Web application we are planning to scan; see Fig. 8.4.
Figure 8.4
Defining a New Web Application to Scan
The definition shown in Fig. 8.4 includes the server where the Web application is deployed, such as www.example.com or 10.10.10.10, as well as a starting Web address (URL), such as /blog or /payapp or even simply “/.” Next, we define the scan that will run; see Fig. 8.5.
Figure 8.5
Defining a New Web Application Scan
This step includes choosing from a few options, such as whether to probe the Web application as an outsider would (with no authentication) or to log in first as a user and then look for security issues (with authentication). Additional common options that might be presented to you by a Web application scanner are the HTTP methods to try (GET, POST, etc.) and specific vulnerabilities to test for.
Next (Fig. 8.6), we observe the results when the scan completes.
Figure 8.6
Observing the Results of the Scan
Fig. 8.6 presents some general information about the Web application security scan we just ran; the vulnerability results follow below (Fig. 8.7).
Figure 8.7
Web Application Security Scan Results: Vulnerability Description
Specifically, the above view shows the so-called blind SQL injection, which is a specialized type of SQL injection. For more information, refer to the corresponding OWASP Top 10 entry: www.owasp.org/index.php/Blind_SQL_Injection. Such an issue will enable an attacker to modify the syntax of a SQL query to retrieve, corrupt, or delete data. The typical causes of this vulnerability are lack of input validation and insecure construction of the SQL query. In other words, this vulnerability may enable the attacker to either retrieve or corrupt your database!
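To see why this class of flaw is so dangerous, consider the minimal, self-contained Python sketch below (using sqlite3 purely as a stand-in for a production database; the table and data are invented). The query is assembled by string concatenation, exactly the “insecure construction of the SQL query” just described:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cards (holder TEXT, pan_last4 TEXT)")
conn.executemany("INSERT INTO cards VALUES (?, ?)",
                 [("Alice", "1234"), ("Bob", "5678")])

def lookup(holder: str):
    # Vulnerable: user input is concatenated straight into the SQL text.
    sql = "SELECT pan_last4 FROM cards WHERE holder = '" + holder + "'"
    return conn.execute(sql).fetchall()

print(lookup("Alice"))          # [('1234',)] -- the intended behavior
print(lookup("x' OR '1'='1"))   # every row comes back -- data disclosure
```

The injected quote turns the attacker's input into SQL syntax, so the WHERE clause becomes always true and the whole table is returned.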
How do you fix it? The scanner tells you what you need to do; see Fig. 8.8.
Figure 8.8
Web Application Security Scan Results: How to Fix It?
SQL injection vulnerabilities can be addressed in three areas: input validation, query creation, and database security. All input received from the Web client should be validated for correct content. If a value's type or content range is known beforehand, then stricter filters should be applied. For example, an e-mail address should be in a specific format and only contain characters that make it a valid address, and numeric fields such as a USA zip code should be limited to five-digit values [5].
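The first two remediation areas can be sketched together. The hypothetical Python fragment below (the names and schema are ours, again with sqlite3 standing in for a production database) validates the input against its known format and then builds the query with parameters, so input can never alter the SQL syntax:

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cards (holder TEXT, zip TEXT)")
conn.execute("INSERT INTO cards VALUES ('Alice', '02134')")

ZIP_RE = re.compile(r"^\d{5}$")  # USA zip code: exactly five digits

def lookup_by_zip(zip_code: str):
    # 1. Input validation: reject anything that is not a five-digit value.
    if not ZIP_RE.fullmatch(zip_code):
        raise ValueError("invalid ZIP code")
    # 2. Query creation: a parameterized query; the driver handles quoting.
    return conn.execute(
        "SELECT holder FROM cards WHERE zip = ?", (zip_code,)).fetchall()

print(lookup_by_zip("02134"))   # [('Alice',)]
```

Either measure alone defeats the injection shown earlier; together, with database-level least privilege, they cover all three remediation areas the scanner points to.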
Again, just a reminder: whether network or application, the act of scanning is not sufficient by itself because it will only tell you about the issues but will not make you secure; you need to either fix the issues in code or deploy a Web application firewall to block possible exploitation of the issues.
That is what we are going to discuss next.

Warning
When talking about network or application scanning, you rarely if ever scan to “just know what is out there.” The work does not end when the scan completes; it only begins. The risk reduction from vulnerability scanning comes when you actually remediate the vulnerabilities or at least mitigate the possible loss. Even when scanning for PCI DSS compliance, your goal is ultimately risk reduction, which only comes when the scan results come back clean.

Web Application Firewalls

Let's briefly address Web application firewalls. Before discussing this technology, remember that a network firewall deployed in front of a Web site does not a Web firewall make. Web application firewalls got their unfortunate name (that of a firewall) from the hands of marketers. These fine folks didn't consider the fact that a network firewall serves to block or allow network traffic based on network protocol and port as well as source and destination, whereas a Web application firewall has to analyze the application behavior before blocking or allowing the interaction of a browser with the Web application framework.
A Web application firewall (WAF), like a network intrusion detection system (IDS) or intrusion prevention system (IPS), needs to be tuned to be effective. To tune it, run reports that give total counts per violation type and use them to adjust the WAF rules. Often, tuning out a few message types that you determine to be acceptable traffic for your environment and applications will clean up 80 percent of the clutter in the alert window. This sounds like an obvious statement, but you would be amazed how many people try to tune WAF technologies in blocking mode, thereby causing application availability issues in their environment.
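The “total counts per violation type” report can be approximated even from a raw alert export. The Python sketch below is illustrative (the rule IDs and messages are invented, loosely modeled on typical WAF rule sets):

```python
from collections import Counter

# Hypothetical alert log entries: (rule_id, message) pairs exported from a WAF.
alerts = [
    ("950001", "SQL injection attempt"),
    ("960015", "Missing Accept header"),   # common false positive for API clients
    ("960015", "Missing Accept header"),
    ("960015", "Missing Accept header"),
    ("981176", "Inbound anomaly score exceeded"),
]

def top_violations(alerts, n=3):
    """Total counts per violation type -- the report used to drive tuning."""
    return Counter(rule for rule, _ in alerts).most_common(n)

print(top_violations(alerts))
# Rules that dominate the counts yet match legitimate traffic are candidates
# for tuning exceptions *before* the WAF is switched to blocking mode.
```

Ranking by count makes the few high-volume, low-value rule hits, the 80 percent of the clutter, immediately obvious.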
If you have a development or quality assurance (QA) environment, placing a WAF (the same type you use in production) in front of one or more of these environments (even in just read/passive mode) can assist you, to some extent, in discovering flaws in Web applications. This then allows for a more planned code fix. Sometimes, you may need to deploy to production with blocking rules until the code can be remediated. In addition, place the WAF so that it can block attacks from every direction they might come from (yes, including the dreaded insider attacks).
Finally, unlike the early versions, Web application firewalls are now actually usable and need to be used to protect the Web site from exploitation of the vulnerabilities you discover while scanning.

What to Do to Be Secure and Compliant?

Requirement 6 asks for more than a few simple things; you might need to invest time to learn about application security before you can bring your organization into compliance.
■ Read up on software security (pointers to OWASP, SANS, NIST, MITRE, BSIMM are given above).
■ In particular, read up on Web application security.
■ If you develop software internally or use other custom code, start building your software security program. Such a program must focus both on secure programming to secure the code written within your organization and on code review to secure custom code written by other people for you. No, it is not easy, and it will likely take some time.
■ Invest in a Web application security scanner; both free open-source and quality commercial offerings that cover most of OWASP Top 10 (as mandated by PCI DSS) are available.
■ Also, consider investing in a Web application firewall to block attacks against the issues discovered while scanning.

Requirement 11 Walk-Through

Let's walk through the entire Requirement 11 to see what is being asked. First, the requirement name itself asks users to “Regularly test security systems and processes,” which indicates that the focus of this requirement goes beyond technical flaws such as buffer overflows and format string vulnerabilities and also includes process weaknesses. A simple example of a process weakness is using default passwords or easily guessable passwords (such as the infamous “password” password). Such process weaknesses can be checked from the technical side, for example, during a network scan by a scanner that can perform authenticated policy and configuration audits, such as password strength checks. However, another policy weakness, requiring overly complicated passwords and frequent changes, which in almost all cases leads to users writing the password on the infamous yellow sticky notes, cannot be “scanned for” and will only be revealed during an annual penetration test. Thus, technical controls can be checked automatically, whereas most policy and awareness controls cannot be.
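The password strength checks such an authenticated scan performs can be sketched as follows. This hypothetical Python fragment uses illustrative thresholds (a real audit would read the configured policy and account data from the scanned system):

```python
import re

def password_findings(password: str) -> list:
    """Findings a password-strength audit might raise for a single credential.

    The guessable-password list and the 7-character / alphanumeric rules are
    illustrative stand-ins for what a scanner's policy checks verify.
    """
    findings = []
    if password.lower() in {"password", "admin", "changeme", "123456"}:
        findings.append("default or easily guessable password")
    if len(password) < 7:
        findings.append("shorter than 7 characters")
    if not (re.search(r"[A-Za-z]", password) and re.search(r"\d", password)):
        findings.append("not both numeric and alphabetic")
    return findings

print(password_findings("password"))
print(password_findings("Tr1cky-pass7"))  # []
```

Note what such a check cannot see: whether the strong password ended up on a yellow sticky note, which is exactly the kind of weakness only a human assessment reveals.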
The requirement text goes into a brief description of vulnerabilities in a somewhat illogical manner: “Vulnerabilities are being discovered continually by hackers and researchers, and being introduced by new software.” In reality, vulnerabilities are introduced first and then discovered by researchers (sometimes called “white hats”) and attackers (“black hats”).
The requirement then calls for frequent testing of software for vulnerabilities: “Systems, processes, and custom software should be tested frequently to ensure security is maintained over time and with any changes in software.” An interesting thing to notice in this section is that it explicitly calls for testing of systems (such as operating systems software or embedded operating systems), processes (such as the password-management process examples referenced above), and custom software, but doesn't mention commercial off-the-shelf (COTS) software applications. The reason is that COTS software is included in the definition of “a system”: it is not only operating system code but also vendor application code that contains vulnerabilities. Today, most actively exploited vulnerabilities are found in applications, even desktop applications such as MS Office, while there has been a relative decrease in weaknesses found in core Windows system services.
The detailed requirements start with Requirement 11.1, which mandates that the organization “test security controls, limitations, network connections, and restrictions annually to assure the ability to adequately identify and to stop any unauthorized access attempts.” This requirement is the one that calls for an in-depth annual security assessment. Note that this assessment of controls is not the same as either a vulnerability scan or a penetration test. Obviously, if your organization is already doing more rigorous security testing, there is no need to relax it down to the PCI DSS minimum of once per year. Also notice the list of controls, limitations, network connections, and restrictions, which again covers technical and nontechnical issues. The term controls is broad enough to cover both technical safeguards and policy measures.
In addition, the wireless network testing requirement states: “use a wireless analyzer at least quarterly to identify all wireless devices in use.” Indeed, the retail environment of 2007 makes heavy use of wireless networks, and there have been several well-known cases where POS wireless network traffic was compromised by attackers. Please refer to Chapter 7, “Using Wireless Networking,” for wireless guidance.
Further, Section 11.2 requires one to “run internal and external network vulnerability scans at least quarterly and after any significant change in the network (such as new system component installations, changes in network topology, firewall rule modifications, product upgrades).” Even though many grumble that “after any changes” is not stated clearly enough (after all, one would not scan the entire enterprise network after changing a single rule on a router somewhere deep in the test environment), this requirement does capture both needs: to assess the vulnerability posture periodically and after a change, to make sure that new vulnerabilities and weaknesses are not introduced.
This requirement has an interesting twist, however. Quarterly external vulnerability scans must be performed by a scan vendor qualified by the payment card industry. Thus, just using any scanner won't do; one needs to pick it from the list of ASVs, which we mentioned in Chapter 3, “Why Is PCI Here?” Specifically, the site says: “The PCI Security Standards Council has assumed responsibility for the Approved Scanning Vendor (ASV) program previously operated separately by MasterCard Worldwide.” At the same time, the requirements for scans performed after changes are more relaxed: “Scans conducted after network changes may be performed by the company's internal staff.” This is not surprising given that such changes occur much more frequently in most networks.
The next section covers the specifics of ASV scanning and the section after covers the internal scanning.

External Vulnerability Scanning with ASV

We will look into the operational issues of using an ASV, cover some tips about picking one, and then discuss what to expect from an ASV.

What Is an ASV?

As we mentioned in Chapter 3, “Why Is PCI Here?,” PCI DSS validation also includes network vulnerability scanning by an ASV. To become an ASV, companies must undergo a process similar to QSA qualification. The difference is that in the case of QSAs, the individual assessors attend classroom training on an annual basis, whereas ASVs submit a scan conducted against a test network perimeter. An organization can choose to become both a QSA and an ASV, which allows merchants and service providers to select a single vendor for PCI compliance validation.
So, as a reminder, ASVs are security companies that help you satisfy one of the two third-party validation requirements in PCI. ASVs go through a rigorous laboratory test process to confirm that their scanning technology is sufficient for PCI validation.
ASV existence and operation is governed by PCI DSS Requirement 11.2, which states: “Quarterly external vulnerability scans must be performed by an Approved Scanning Vendor (ASV) qualified by Payment Card Industry Security Standards Council (PCI SSC).”
In addition, the particulars of becoming an ASV as well as the specifics of scanning that ASV must perform and other details of ASV operation are governed by two other documents:
1. Technical and Operational Requirements for Approved Scanning Vendors (ASVs) [6]
2. Validation Requirements for Approved Scanning Vendors (ASV) [7]
Also, it is worthwhile to mention that validation via an external ASV scan only applies to those merchants that are required to validate Requirement 11. In particular, those who don't have to validate Requirement 11 include those that outsource payment processing, those who don't process any data on their premises, and those with dial-up (non-Internet) terminals. This is important, so it bears repeating: if you have no system to scan because you don't process in-house, you don't have to scan. Of course, it goes without saying that deploying a vulnerability management system to reduce your information risk is appropriate even if PCI DSS didn't exist at all.

Considerations when Picking an ASV

First, your acquiring bank might have picked an ASV for you. In this case, you might or might not have to use its choice. Note, however, that such a prepicked ASV might be neither the best nor the cheapest.
While looking at the whole list of ASVs and then picking the one that “sounds nice” is one way to choose, it is likely not the one that will ensure trouble-free PCI validation, increased card data security, and reduced risk of data theft. At the time of this writing, the ASV list boasted more than 90 different companies, from small one- or two-person consulting outlets to the IBMs and VeriSigns of the world, located on all the continents (save Antarctica). How do you pick?
One strategy that needs to be unearthed and explained right away is as simple as it is harmful to your card data security and PCI DSS compliance status. Namely, organizations that blindly assume that “all ASVs are the same,” based on the fact that all are certified by the PCI Council to satisfy PCI DSS scan validation requirements, sometimes just pick on price. The same assumption sometimes applies to QSAs, and as many security industry insiders (including both authors) have pointed out, they are not all created equal!
As a result, passing the scan validation requirement and submitting a report that indicates “Pass” will definitely confirm your PCI validation (as long as your ASV remains in good standing with the Council). Sadly, it will not do nearly enough for your cardholder data security. Even if certified, ASVs' coverage of vulnerabilities varies greatly; all of them do the mandatory minimum, but more than a few cut corners and stay at that minimum (which, by the way, they are perfectly allowed to do), while others help you uncover other holes and flaws that would allow malicious hackers to get to that juicy card data.
Thus, your strategy might follow these steps.
First, realize that all ASVs are not created equal; at the very least, prices for their services will be different, which should give you a hint that the value they provide will also be different.
Second, realize that all ASVs roughly fall into two groups: those that do the minimum necessary according to the above guidance documents (focus on compliance) and those that intelligently interpret the standard and help you with your data security and not just with PCI DSS compliance (focus on security). Typically, the way to tell the two groups apart is to look at the price. In addition, nearly 60 percent of all currently registered ASVs use the scanning technology from Qualys (www.qualys.com), while many of the rest use Nessus (www.nessus.org) to perform PCI validation.
In addition, the pricing models for ASV services vary; they roughly fall into two groups: in one model, you can scan your systems many times (unlimited scanning), while the other focuses on providing you the mandatory quarterly scans (i.e., four a year). In the latter case, if your initial scan shows vulnerabilities and you need to fix them and rescan to arrive at a passing scan, you will be paying extra. Overall, it is extremely unlikely that you can get away with only scanning your network from the outside four times a year.
Third, even though an ASV does not have to be used for internal scanning, it is more logical to pick the same scanning provider for external (must be done by an ASV) and internal (must be done by somebody skilled in using vulnerability management tools) scans. Using the same technology provider will allow you to have the same familiar report format and the same presentation of vulnerability findings. Similarly, and perhaps more importantly, even though PCI DSS–ASV scanning does not allow for authenticated or trusted scanning, picking an ASV that can run authenticated scans on your internal network is useful, since such scanning can be used to automate the checking for the presence of other DSS controls, such as password length, account security settings, use of encryption, availability of antimalware defenses, etc. For example, an authenticated scan of a Windows server can help verify password and account security settings that an unauthenticated network scan cannot see.
Table 8.1 shows a sample list of PCI DSS controls that may possibly be performed using automated scanning tools that perform authenticated or trusted scanning.
Table 8.1 Automatic Validation of PCI DSS Controls
Requirement 1.4: Install personal firewall software on any mobile and/or employee-owned computers with direct connectivity to the Internet (for example, laptops used by employees), which are used to access the organization's network.
Validation: Automated tools are able to check remotely for the presence of personal firewalls deployed on servers, desktops, and laptops.

Requirement 2.1: Always change vendor-supplied defaults before installing a system on the network – for example, passwords, simple network management protocol (SNMP) community strings, and elimination of unnecessary accounts.
Validation: Automated tools can be used to verify that vendor defaults are not used, by checking for default and system accounts on servers, desktops, and network devices.

Requirement 2.1.1: For wireless environments connected to the CDE or transmitting cardholder data, change wireless vendor defaults, including but not limited to default wireless encryption keys, passwords, and SNMP community strings. Ensure wireless device security settings are enabled for strong encryption technology for authentication and transmission.
Validation: Automated tools can be used to verify that default settings and default passwords are not used across wireless devices connected to the wired network.

Requirement 2.2: Develop configuration standards for all system components. Assure that these standards address all known security vulnerabilities and are consistent with industry-accepted system hardening standards.
Validation: Automated tools can validate the compliance of deployed systems with the configuration standards mandated by the PCI DSS.

Requirement 2.2.2: Disable all unnecessary and insecure services and protocols (services and protocols not directly needed to perform the device's specified function).
Validation: Automated tools can help discover systems on the network as well as detect the network-exposed services running on them, and thus significantly reduce the effort needed to bring the environment into compliance.

Requirement 2.2.4: Remove all unnecessary functionality, such as scripts, drivers, features, subsystems, file systems, and unnecessary Web servers.
Validation: Automated tools can help discover some of the insecure and unnecessary functionality exposed to the network, and thus significantly reduce the effort needed to bring the environment into compliance.

Requirement 2.3: Encrypt all nonconsole administrative access. Use technologies such as Secure Shell (SSH), virtual private network (VPN), or Secure Sockets Layer/Transport Layer Security (SSL/TLS) for Web-based management and other nonconsole administrative access.
Validation: Automated tools can help validate that encrypted protocols are in use across the systems and that unencrypted communication is not enabled on servers and workstations (SSH, not Telnet; SSL, not unencrypted HTTP; etc.).

Requirement 3.4: Render PAN, at minimum, unreadable anywhere it is stored (including on portable digital media, backup media, and in logs).
Validation: Automated tools can confirm that encryption is in use across the PCI in-scope systems by checking system configuration settings relevant to encryption.

Requirement 3.5: Protect cryptographic keys used for encryption of cardholder data against both disclosure and misuse.
Validation: Automated tools can be used to validate security settings relevant to the protection of system encryption keys.

Requirement 4.1: Use strong cryptography and security protocols such as SSL/TLS or Internet Protocol Security (IPsec) to safeguard sensitive cardholder data during transmission over open, public networks.
Validation: Automated tools can be used to validate the use of strong cryptographic protocols by checking relevant system configuration settings, and to detect instances of insecure cipher use across the in-scope systems.

Requirement 4.1.1: Ensure wireless networks transmitting cardholder data or connected to the CDE use industry best practices (for example, IEEE 802.11i) to implement strong encryption for authentication and transmission.
Validation: Automated tools can attempt to detect wireless access points from the network side and validate the use of proper encryption across those access points.

Requirement 5.1: Deploy antivirus software on all systems commonly affected by malicious software (particularly, personal computers and servers).
Validation: Automated tools can validate whether antivirus software is installed on in-scope systems.

Requirement 5.2: Ensure that all antivirus mechanisms are current, actively running, and capable of generating audit logs.
Validation: Automated tools can be used to check the running status of antivirus tools.

Requirement 6.1: Ensure that all system components and software have the latest vendor-supplied security patches installed. Install critical security patches within one month of release.
Validation: Automated tools can be used to detect missing OS and application patches and security updates.

Requirement 6.2: Establish a process to identify newly discovered security vulnerabilities (for example, subscribe to alert services freely available on the Internet).
Validation: Automated tools are constantly updated with new vulnerability information and can be used to track newly discovered vulnerabilities.

Requirement 6.6: For public-facing Web applications, address new threats and vulnerabilities on an ongoing basis and ensure these applications are protected against known attacks by either of the following methods: reviewing public-facing Web applications via manual or automated application vulnerability security assessment tools or methods, at least annually and after any changes, or installing a Web-application firewall in front of public-facing Web applications.
Validation: Automated tools can be used to assess Web application security in support of PCI Requirement 6.6.

Requirement 7.1: Limit access to system components and cardholder data to only those individuals whose job requires such access.
Validation: Automated tools can analyze database user rights and permissions, looking for broad and insecure permissions.

Requirement 8.1: Assign all users a unique ID before allowing them to access system components or cardholder data.
Validation: In partial support of this requirement, automated tools can look for active default and generic accounts (root, system, etc.), which indicate that account sharing takes place.

Requirement 8.2: In addition to assigning a unique ID, use at least one of the following methods to authenticate all users: password or passphrase.
Validation: Automated tools can be used to look for user accounts with improper authentication settings, such as accounts with no passwords or with blank passwords.

Requirement 8.4: Render all passwords unreadable during transmission and storage on all system components using strong cryptography.
Validation: Automated tools can be used to detect system configuration settings permitting unencrypted or inadequately encrypted passwords across systems.

Requirement 8.5: Ensure proper user authentication and password management for nonconsumer users and administrators on all system components.
Validation: Automated tools can be used to validate an extensive set of user account security settings and password security parameters across systems in support of this PCI requirement.

Requirement 11.1: Test for the presence of wireless access points by using a wireless analyzer at least quarterly or deploying a wireless IDS/IPS to identify all wireless devices in use.
Validation: Automated tools can attempt to detect wireless access points from the network side, thus helping to detect rogue access points.

Requirement 11.2: Run internal and external network vulnerability scans at least quarterly and after any significant change in the network (such as new system component installations, changes in network topology, firewall rule modifications, and product upgrades).
Validation: Automated tools can be used to scan for vulnerabilities both from inside and from outside the network.
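As a simple illustration of the kind of automated, network-based check behind entries such as Requirement 2.3 in Table 8.1, a script can probe in-scope hosts for unencrypted management services. This is a minimal sketch using only the Python standard library; the host names and specific ports checked are illustrative, and a real scanner does far more (banner grabbing, service fingerprinting, etc.):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_nonconsole_encryption(host):
    """Flag a host exposing Telnet (23) instead of SSH (22), per Requirement 2.3.

    Returns a list of finding strings; an empty list means nothing was flagged.
    """
    findings = []
    if port_open(host, 23):
        findings.append("Telnet open: unencrypted administrative access")
    if not port_open(host, 22):
        findings.append("SSH not available for encrypted administration")
    return findings
```

Running `check_nonconsole_encryption` across an inventory of in-scope hosts gives a quick first cut at validating the "SSH, not Telnet" control before a full authenticated scan.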
Fourth, look at how the ASV's workflow matches your experience and expectations. Are many manual tasks required to perform a vulnerability scan and create a report, or is everything automated? Fully automated ASV services are available, in which launching a scan and presenting a compliance report to your acquirer can be done from the same interface. Still, if you need help with fixing the issues before you can rescan and validate your compliance, hiring an ASV that offers help with remediation is advisable. It goes without saying that picking an ASV that requires you to purchase any hardware or software is not advisable; all external scan requirements can be satisfied by scanning from the Internet.
Finally, even though this strategy focuses on picking an ASV, you and your organization have a role to play as well, namely, in fixing the vulnerabilities that the scan discovered to arrive at a compliant status – clean scan with no failures. We discuss the criteria that ASVs use for pass/fail below.

How ASV Scanning Works

ASVs use standard vulnerability scanning technology to detect vulnerabilities that are deemed by the PCI Council to be relevant for PCI DSS compliance. This information will help you understand what exactly you are dealing with when you retain the scanning services of an ASV. It will also help you learn how to pass or fail the PCI scan criteria and how to prepare your environment for an ASV scan.
Specifically, the ASV procedures mandate that an ASV cover the following in its scan (the list below is heavily abridged; please refer to “Technical and Operational Requirements for Approved Scanning Vendors (ASVs)”[4] for more details):
■ Identify issues in “operating systems, Web servers, Web application servers, common Web scripts, database servers, mail servers, firewalls, routers, wireless access points, common (such as DNS, FTP, SNMP, etc.) services, custom Web applications.”
■ “ASV must use the CVSS base score for the severity level” (please see the main site for CVSS at www.first.org/cvss for more information).
After the above resources are scanned, the following criteria are used to pass/fail the PCI validation (also [4]):
■ “Generally, to be considered compliant, a component must not contain any vulnerability that has been assigned a CVSS base score equal to or higher than 4.0.” Any curious reader can look up a CVSS score for many publicly disclosed vulnerabilities by going to NVD at http://nvd.nist.gov.
■ “If a CVSS base score is not available for a given vulnerability identified in the component, then the compliance criteria to be used by the ASV depend on the identified vulnerability leading to a data compromise.” This criterion makes sure that ASV security personnel can use their own internal scoring methodology when CVSS scores cannot be produced.
There are additional exceptions to the above rules. Some vulnerability types are included in pass/fail criteria, no matter what their scores are, while others are excluded. Here are the inclusions:
■ “A component must be considered noncompliant if the installed SSL version is limited to Version 2.0, or older.”
■ “The presence of application vulnerabilities on a component that may lead to SQL injection attacks and cross-site scripting flaws must result in a noncompliant status”[4].
The exclusions are as follows:
■ “Vulnerabilities or misconfigurations that may lead to DoS should not be taken into consideration”[4].
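Putting the quoted criteria together, the per-component pass/fail decision can be sketched in code. This is a simplified illustration only; the finding fields (`cvss`, `category`, `asv_fail`) are hypothetical names, not an actual ASV data format:

```python
CVSS_FAIL_THRESHOLD = 4.0  # per the ASV criteria quoted above

def component_compliant(findings):
    """Apply the ASV pass/fail criteria to a list of findings for one component.

    Each finding is a dict with keys:
      cvss     - CVSS base score (float), or None if no score is available
      category - e.g., 'sql_injection', 'xss', 'ssl_v2', 'dos', 'other'
      asv_fail - the ASV's own judgment when no CVSS score exists (bool)
    """
    for f in findings:
        if f["category"] == "dos":
            continue  # DoS issues are excluded from PCI pass/fail
        if f["category"] in ("sql_injection", "xss", "ssl_v2"):
            return False  # automatic failures regardless of score
        if f["cvss"] is not None and f["cvss"] >= CVSS_FAIL_THRESHOLD:
            return False
        if f["cvss"] is None and f.get("asv_fail", False):
            return False  # ASV's own methodology applies when CVSS is absent
    return True
```

Note how a high-scoring DoS finding passes while a low-scoring SQL injection finding fails; the special inclusions and exclusions override the raw CVSS threshold.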
The above criteria highlight the fact that PCI DSS external scanning is not everything you need to do for security. After all, PCI DSS focuses on card data loss, not on the availability of your key IT resources for your organization and not on their resistance to malicious hackers.
Each ASV will interpret the requirements a little differently; Table 8.2 shows an example from Qualys.
Table 8.2 QualysGuard PCI Pass/Fail Status Criteria
■ Vulnerabilities with a NIST CVSS v2.0 base score of 4.0 or higher will cause PCI compliance to fail on the scanned IPs.
■ Vulnerabilities that do not have a NIST CVSS score, or have a NIST CVSS score of 0, will be rated using the QualysGuard severity ranking. A severity of three or above will cause PCI compliance to fail on the corresponding IP.
■ An IP will be considered noncompliant if the SSL version installed on it is limited to 2.0 or older.
■ Vulnerabilities that may lead to SQL injection attacks and cross-site scripting will result in a noncompliant status on the corresponding IP.
■ Vulnerabilities or misconfigurations that may lead to denial of service are not taken into consideration for PCI compliance.
■ The PCI Technical Report will include a list of all vulnerabilities discovered; however, the PCI vulnerabilities that drive the pass/fail criteria will be indicated as such.
However, a quality ASV identifies many types of vulnerabilities beyond those required for PCI DSS. For example, Qualys uses the scan policy shown in Fig. 8.9 to run its PCI DSS scanning.
Figure 8.9
PCI DSS ASV Scan Options
As you can see, it scans for all possible vulnerabilities, not just the PCI-relevant ones, which allows you to reduce the risk of data exposure rather than merely achieve PCI DSS compliance validation.
When the scan completes, a report is generated, which can then be used to substantiate your PCI validation via vulnerability scanning.

PCI DSS Scan Validation Walk-through

Let's analyze a complete report and learn how to go from its current status (PCI FAILED) to successful PCI validation (PCI PASSED). Your PCI scan validation endeavor starts from a view presented by your ASV, which is similar to this (Fig. 8.10).
Figure 8.10
PCI DSS Scan Failure
The scan report shown in Fig. 8.10 indicates that several of your scanned systems have failed the scan for various reasons. Let's walk through one of the systems that failed. For example, a vulnerability such as the following will be grounds for a failed scan validation (Fig. 8.11).
Figure 8.11
PCI DSS Scan Failure Vulnerability
This particular vulnerability has a CVSS score of 4.3, which exceeds the 4.0 threshold in the PCI criteria; thus, the PCI validation for this machine and, consequently, for the entire scanned environment fails. As a side note, in addition to being severe, this particular vulnerability can enable data theft via phishing, because it allows an attacker to run a cache poisoning attack.
How do we get out of this conundrum and back to security and a passing PCI DSS scan? We need to fix the issue. Specifically, upgrade your Domain Name System (DNS) server software (BIND in this case) to a version that does not have these issues, such as one newer than 9.2.8. To do that, see the BIND Web site at www.isc.org/products/BIND/ for patches and updates, or contact your OS vendor for the same.
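Deciding whether an installed package version clears the fixed-version threshold is a small but frequent remediation step. Here is a minimal sketch comparing dotted version strings; the 9.2.8 threshold mirrors the BIND example above, and real package versions with nonnumeric parts (e.g., "9.2.8-P1") would need additional parsing this sketch omits:

```python
def version_tuple(v):
    """Convert a dotted version string like '9.2.8' to a comparable tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def needs_upgrade(installed, fixed_in):
    """True if the installed version is strictly older than the fixed version."""
    return version_tuple(installed) < version_tuple(fixed_in)
```

Comparing tuples rather than raw strings avoids the classic trap where "9.10.0" sorts before "9.2.8" lexicographically.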
As a result of those efforts, you will be looking at a very different picture (Fig. 8.12).
Figure 8.12
PCI DSS Scan Pass
To summarize, quality ASV scanning will detect all possible external vulnerabilities and highlight those that are grounds for PCI DSS validation failure. The same process needs to be repeated for quarterly scans – usually toward the end of the quarter, but not on the last day, because remediation activities need to happen before a final rescan takes place. In fact, let's talk about operationalizing ASV scanning.

Operationalizing ASV Scanning

To recap, PCI DSS Requirement 11.2 calls for quarterly scanning. In addition, every scan may lead to remediation activities, and those aren't limited to patching. Moreover, validation procedures mention that a QSA will ask for four passing reports during an assessment.
The above calls for an operation process for dealing with this requirement. Let's build this process together now.
First, it is a very good idea to scan monthly or even weekly if possible. Why would you do this to satisfy a quarterly scanning requirement? Well, consider the following scenario: on the last day of March, you perform an external vulnerability scan and discover a critical vulnerability. The discovered vulnerability is present on 20 percent of your systems, which totals 200 systems. Now, you have exactly 1 day to fix the vulnerability on all of those systems and perform a passing vulnerability scan, which will be retained for your records. Is this realistic? The scenario can happen and, in fact, has happened in many companies that postponed their quarterly vulnerability scan until the very last day and did not perform any ongoing vulnerability scanning. Considering that many acquiring institutions are becoming more stringent with PCI validation requirements, do not count on being granted an exception. Over the following month, the scenario will certainly inflict unnecessary pain and suffering on your company and your IT staff. What is the way to avoid it? Perform external scans every month or even every week. It is also a good idea to perform an external scan after you apply a patch to external systems.

Note
Most companies run their external scans monthly, even though those are called “quarterly scans.” That way, issues can be resolved in time to have a clean quarterly report, since there are no surprises. There are known cases where organizations have been burned by waiting until the last month of a quarter to run an external scan. This can cause a serious amount of last-minute, emergency code and system configuration changes, and an overall sense of panic, which is not conducive to good security management.
After you run the scan, carefully review the results in the reports. Are those passing or failing reports? If a report indicates that you do not pass the PCI validation requirement, note which systems and vulnerabilities fail the criteria. Next, distribute the report to those in your IT organization who are responsible for the systems that failed. Offer them some guidance on how to fix the vulnerabilities and bring those systems back to PCI compliance. These intermediate reports are not shared with your acquiring institution.
When you receive the indication that those vulnerabilities have been successfully fixed, please rescan to obtain a clean report. Repeat the process every month or week.
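The review-and-distribute cycle above can be sketched as a simple grouping of failing findings by responsible owner. The finding fields and the `owner_of` mapping are hypothetical, not any ASV's actual report format:

```python
from collections import defaultdict

def distribute_findings(findings, owner_of):
    """Group PCI-failing findings by the IT owner responsible for each host.

    findings - list of dicts with 'host' and 'pci_fail' keys
    owner_of - mapping from host name to the responsible team or person
    Returns a dict mapping each owner to their list of failing findings.
    """
    per_owner = defaultdict(list)
    for f in findings:
        if f["pci_fail"]:
            per_owner[owner_of[f["host"]]].append(f)
    return dict(per_owner)
```

Each owner then receives only the findings they can act on, which avoids the "10,000-page report" problem discussed later in this chapter.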
Finally, run a final round of scans before the end of the quarter and preserve the reports for the assessor. Thus, your shields will be up at all times. If you are only checking them four times a year, you're suffering from two problems. First, you are most likely not PCI compliant throughout most of the year. Second, you burden yourself with a massive emergency effort right at the end of the quarter, when other people at your organization expect IT systems to operate at their peak. Don't be the one telling finance that they cannot run that quarterly report!

What Do You Expect from an ASV?

Discussing the expectations while dealing with an ASV and working toward PCI DSS scan validation is a valuable exercise. The critical considerations are described below.
First, an ASV can scan you and present the data (report) to you. It is your job to then bring the environment into compliance. After that, the ASV can again be used to validate your compliant status and produce a clean report. Remember, ASV scanning does not make you compliant; you do, by making sure that no PCI-failing vulnerabilities are present in your network.
Second, you don't have to hire expensive consultants just to run an ASV scan for you every quarter. Some ASVs will automatically perform the scan with the correct settings and parameters, without your having to learn the esoteric details of a particular vulnerability scanner. In fact, you can sometimes even pay for it online and get the scan right away – yes, you guessed right – using a credit card. 1
1 How do you think the companies that provide PCI DSS and security services process their card payments? They outsource it, of course; no data – no risk. We will likely repeat this advice many more times in this book.
Third, you should expect that a quality ASV will discover more vulnerabilities than is required for PCI DSS compliance. You'd need to make your own judgment call on whether to fix them. One common case where you might want to address the issue is vulnerabilities that allow hackers to crash your systems (denial of service [DoS] vulnerabilities). Such flaws are out of scope for PCI because they cannot directly cause the theft of card data; however, by not fixing them, you are allowing the attackers to disrupt your online business operation.
Finally, let us offer some common tips on ASV scanning.
First comes the question: what systems must you scan for PCI DSS compliance? The answer splits into two parts, covering external and internal scanning.
Specifically, for external systems or those visible from the Internet, the guidance from the PCI Council is clear: “all Internet-facing Internet Protocol (IP) addresses and/or ranges, including all network components and devices that are involved in e-commerce transactions or retail transactions that use IP to transmit data over the Internet” (source: “Technical and Operational Requirements for Approved Scanning Vendors (ASVs)”[4] by PCI Council). The obvious answer can be “none” if your business has no connection to the Internet.
For internal systems, the answer covers all systems that are considered in-scope for PCI, which is either those involved with card processing or directly connected to them. The answer can also be “none” if you have no systems inside your perimeter, which are in-scope for PCI DSS.
Second, the question about pass/fail criteria for internal scans often arises as well. While the external ASV scans have the clear criteria discussed above, internal scans from within your network have no set pass/fail criteria. The decision is thus based on your idea of risk; there is nothing wrong with using the same criteria as above, of course, if you think they match your view of risks to card data in your environment.
Another common question is how to pass the PCI DSS scan validation. Just as above, the answer is very clear for external scans: you satisfy the above criteria. If you don't, you need to fix the vulnerabilities that are causing you to fail and then rescan. Then, you pass and get a “passing report” that you can submit to your acquiring bank.
For internal scans, the pass/fail criteria are not expressly written in the PCI Council documentation. Expect your assessor to ask for clean internal and external scans as part of Requirement 11.2. Typically, QSAs will define “clean” internal scans as those with no high-severity vulnerabilities across the in-scope systems. For the very common vulnerability ranking scale from 1 to 5 (with 5 being the most severe), vulnerabilities of severity 3 to 5 are usually not acceptable. If CVSS scoring is used, 4.0 becomes the cutoff point; vulnerabilities with scores of 4.0 or higher are not accepted unless compensating controls are present and are accepted by the QSA. It has been reported that in some situations, a QSA will accept a workable plan that aims to remove these vulnerabilities from the internal environment.
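The internal-scan cutoffs described above (severity 3 to 5 on a 1-to-5 scale, or a CVSS score of 4.0 and higher) can be expressed as a simple acceptance check. This is an illustrative sketch of typical QSA expectations, not a codified rule, and the field names are hypothetical:

```python
def internally_acceptable(finding, has_compensating_control=False):
    """Judge an internal-scan finding against typical QSA expectations.

    finding - dict carrying either 'severity' (1-5 scale, 5 most severe)
              or 'cvss' (0.0-10.0 base score)
    """
    if has_compensating_control:
        return True  # only holds if the QSA actually accepts the control
    if "severity" in finding:
        return finding["severity"] < 3  # severities 3-5 are usually rejected
    if "cvss" in finding:
        return finding["cvss"] < 4.0  # 4.0 is the customary cutoff
    raise ValueError("finding carries no recognized severity rating")
```

In practice, an organization may tighten these thresholds; the point is that the internal cutoff is a risk decision you document, not a rule the PCI Council wrote down.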
Also, people often ask whether they become “PCI compliant” if they get a passing scan for their external systems. The answer is certainly a “no.” You only satisfied one of the PCI DSS requirements; namely, PCI DSS validation via an external ASV scan. This is not the end. Likely, this is the beginning.

Internal Vulnerability Scanning

As we mentioned in the section “Vulnerability Management in PCI,” internal vulnerability scanning must be performed every quarter and after every material system change, and no high-severity vulnerabilities must be present in the final scan preserved for presentation to your QSA. Internal scanning is governed by the PCI Council document called Security Scanning Procedures [8].
First, using the same template your ASV uses for external scanning is a really good idea, but you can also use the more reliable trusted or authenticated scanning, which will reveal key application security issues on your in-scope systems that regular, unauthenticated scanning may sometimes miss.
Remediation may take the form of hardening, patching, architecture changes, technology/tool implementations, or a combination thereof. Remediation efforts after internal scanning are prioritized based on risk and can be managed more flexibly than external ASV scans. Keep in mind that for PCI environments, the remediation of critical vulnerabilities on all in-scope systems is mandatory. This makes all critical and high-severity vulnerabilities found on in-scope PCI systems a high priority. Follow the same process we covered in the “Operationalizing ASV Scanning” section and work toward removing the high-severity vulnerabilities from the environment before presenting the clean report to the QSA.
Reports that show the discovery and remediation of vulnerabilities on in-scope systems over time become artifacts needed to satisfy assessment requirements. Having a place to keep archives of all internal and external scan reports (summary, detailed, and remediation) for a 12-month period is a good idea. Your ASV may offer to keep them for you as an added service; however, retaining them is ultimately your responsibility.
As with other PCI compliance efforts, this is a continuous process: PCI compliance is an effort that takes place 24/7, 365 days a year.
For internal scanning, you can create different reports: detailed reports for the technicians who will fix the issues found, and summary reports for management. However, overdoing it is bad as well: handing a 10,000-page report to a technician will typically not result in remediation taking place. We are not even talking about the possibility of showing such a report to senior management. Working with the team responsible for remediation to ensure the reports give them actionable data, without overwhelming them, is very much worth the time spent.
Servers that are in scope are usually scanned off-hours. Be sure your scan windows do not overlap maintenance windows, or the target hosts may be offline for maintenance when the scan runs. If you have workstations in scope, scans may need to run during business hours. For systems that must be scanned during business hours, you may need to run the scans at a lower intensity.
Until you have thoroughly defined processes (documentation again – many efforts in PCI DSS require both “doing” and “recording”) for all scanning, remediation, and reporting functions tied to your PCI needs, you do not truly “own” the tool you are using.
Finally, issues will undoubtedly occur as you begin your scanning efforts. Here, having a well-defined root cause analysis helps a lot. The sidebar covers how to handle such issues.

Tools
Here is a sample PCI DSS scan issue tracking process in four steps:
Step 1: Gather inputs from the issue.
■ Gather host/application information: application name, version, patch level, port usage information, etc.
■ Was the application disrupted, a system service, or the entire operating system?
■ What had to be done to recover from the outage? Service restart or host reboot?
Step 2: Verify that the issue was caused by the scan.
Check system logs and try to match the time of the incident to the time of the scan.
Step 3: Place a support call with the application vendor or development team.
Verify that all patches have been applied to the application for “denial of service” and “buffer overflow” problems.
Step 4: If the issue is not resolved by the application vendor, engage support from your scanning vendor.
*Thanks to Derek Milroy for providing the sample process.
Let's also address the issue of system changes. PCI DSS Requirement 11.2 gives the following examples of system changes:
■ New system component installations, which covers new systems, new system components, and new applications.
■ Changes in network topology, such as new network paths between the in-scope systems and the outside world, especially the Internet.
■ Firewall rule modifications, especially additional rules allowing traffic to or from the cardholder environment.
■ Product upgrades – for example, changes to payment applications, servers, network devices, etc.
It is your responsibility to perform a scan after these events have taken place.
Finally, remember that internal scanning is as mandatory for in-scope systems as the ASV scanning is mandatory for external systems.

Penetration Testing

Requirement 11.3 covers penetration-testing requirements. It says: “Perform penetration testing at least once a year and after any significant infrastructure or application upgrade or modification (such as an operating system upgrade, a subnetwork added to the environment, or a Web server added to the environment).” The logic here is again similar: periodic (annual) and after major changes. It appears that in this case, the changes that trigger a penetration test should be of a much larger scale because penetration test services aren't exactly cheap.
By the way, multiple books have been written on the art and science of penetration testing, and we cannot cover it fully in this book. However, it makes sense to remind people that a penetration test will always involve a skilled, human attacker, not just an automated tool.
Every penetration test begins with one concept – communication. A penetration test should be viewed by a security team as a hostile act – provided they are not asleep at the wheel. After all, the point is to break through the active and passive defenses erected around an information system. Communication is important because somebody is about to break your security. During the penetration test, alarm bells should ring and processes should be put into motion; if communication has not occurred, and appropriate permissions to perform these tests have not been obtained, law enforcement authorities may be contacted to investigate. Now wouldn't it be an embarrassment if your PCI-driven penetration test, planned for months, had not been approved by your chief information officer (CIO)?
Moreover, PCI DSS dives deeper into penetration testing details. These penetration tests must include the following:
■ 11.3.1: Network-layer penetration tests
■ 11.3.2: Application-layer penetration tests
Indeed, limiting testing to the network layer is shortsighted, but this list still leaves a gap: nontechnical penetration testing via social engineering. Admittedly, most skilled penetration testing teams will perform such nontechnical testing as well, but not mentioning it explicitly in the official PCI documents seems like a minor oversight.

Common PCI Vulnerability Management Mistakes

It is worthwhile to point out a few common mistakes that organizations make while working toward satisfying the vulnerability management requirements.
We hinted at the first mistake when we described the password example. It is focusing only on technical assessment means (which are indeed easier and more automatic) while omitting process-based mistakes and issues. In particular, for PCI DSS, this means testing only the technology controls but not checking for policy controls such as security awareness, the presence of plans and procedures, etc. Thus, people often focus on the technical vulnerabilities and forget all the human vulnerabilities, such as the susceptibility of many enterprise IT users to social engineering, and other lapses of corporate controls. The way to avoid this mistake is to keep in mind that even though you use a scanning vendor, your credit-card data might still be pilfered, and addressing the “softer” part of security is just as critical.
Another commonly “lost and forgotten” item is application-level vulnerabilities; security is not only about open ports and buffer overflows in network-exposed server code. It is also about all the Web applications – from the now-common cross-site scripting and SQL injection flaws, to cross-site request forgery, to more esoteric flaws in Flash code and other browser-side technologies.
Similarly, weaknesses in client-side applications – including all the recent Internet Explorer versions (and, frequently, Firefox versions as well), MS Office, and Adobe products – have led to many a government agency falling victim to malicious hackers. What do those “newer” vulnerabilities have in common? Scanning for them is not as easy to automate as finding open Telnet ports and overflows in Internet Information Services (IIS), which was the staple of vulnerability scanning in the late 1990s and early 2000s. PCI requirements refer to such weaknesses, but more attention still seems to be paid to network-level issues exposed to the Internet. The way to avoid this mistake is to keep in mind that a lot of hacking happens at the application layer and to use internal authenticated scanning to look for such issues inside your in-scope network. Such scanning does not have to be performed by the ASV, but if you follow our guidance above, you hopefully picked an ASV that offers internal and authenticated scanning and not just the external, mandatory ASV scanning.
Recent Qualys research into the Laws of Vulnerabilities [9] shows that insufficient attention is paid to client-side issues. If you limit the scope of analysis to core OS vulnerabilities, the half-life drops to 15 days (which means that people patch those quickly!). On the other hand, if you limit it to Adobe and MS Office flaws, the half-life rises sharply to 60 days (which means people just don't care – and the current dramatic compromise rampage will continue). The data supporting this is shown in Figs 8.13 and 8.14.
Figure 8.13
Half-Life of Core Operating System Vulnerability. Source: Qualys Laws of Vulnerabilities research
Figure 8.14
Half-Life of Client Application Vulnerability. Source: Qualys Laws of Vulnerabilities research
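The half-life metric itself is easy to state: the number of days by which at least half of the initially vulnerable hosts have been remediated. A rough sketch, using invented remediation records rather than actual Qualys data:

```python
def half_life(days_to_fix):
    """Days until at least half of the vulnerable hosts were patched.

    days_to_fix holds one entry per initially vulnerable host: the
    number of days it took to patch it, or None if it was still
    unpatched when observation ended. Returns None if fewer than
    half the hosts were ever fixed.
    """
    total = len(days_to_fix)
    fixed = sorted(d for d in days_to_fix if d is not None)
    for count, day in enumerate(fixed, start=1):
        if count * 2 >= total:
            return day
    return None

# Invented example data echoing the pattern in the Qualys research:
core_os = [3, 5, 8, 12, 15, 20, None]             # patched quickly
client_apps = [30, 45, 60, 75, None, None, None]  # patched slowly

print(half_life(core_os))      # 12
print(half_life(client_apps))  # 75
```

The same skew as in the figures appears: the client-application population reaches its halfway point far later than the core OS population.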
Even when application-layer vulnerabilities are not forgotten, and patching and other remediation happen on an aggressive schedule (nowadays, patching all servers within a single day is considered aggressive and is what “security leaders” do), there is something else that can be missed: vulnerabilities in the applications that were written in-house. Indeed, no vulnerability scanner vendor will have knowledge of your custom-written systems, and even if your penetration-testing consultant or an internal “red team” is able to discover some of them during an annual penetration test, a lot of application code can be written in a year (and thus a lot more vulnerabilities introduced). The way to avoid this mistake is to train your software engineering staff in secure programming practices to minimize the occurrence of such flaws, as we discussed in the previous section on Requirement 6 (detailed coverage goes well beyond the scope of this book). While most organizations are unlikely to have a good application security tester on staff, assessing the security of homegrown applications needs to be undertaken more frequently than once a year. Obviously, an initial focus on Web-based and Internet-exposed applications is a must.
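As one concrete instance of the secure programming practices mentioned above, the SQL injection flaws called out earlier in this chapter are largely prevented by parameterized queries. A minimal sketch, using a hypothetical table and Python’s built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cards (holder TEXT, last4 TEXT)")
conn.execute("INSERT INTO cards VALUES ('Alice', '4242')")

user_input = "nobody' OR '1'='1"  # a classic injection attempt

# VULNERABLE: string concatenation lets the attacker's quote break
# out of the literal, turning data into SQL:
#   "SELECT last4 FROM cards WHERE holder = '" + user_input + "'"
# That query would return every row in the table.

# SAFE: a parameterized query treats the input strictly as data.
rows = conn.execute(
    "SELECT last4 FROM cards WHERE holder = ?", (user_input,)
).fetchall()
print(rows)  # [] - no cardholder is literally named "nobody' OR '1'='1"
```

The same placeholder discipline applies in any language and database driver; it is one of the cheapest controls to teach and enforce in code review.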
The last mistake we will mention is misjudging the list of in-scope systems, or “scoping errors.” Indeed, modern, large-scale payment processing systems are complicated and have many dependencies. Avoiding this mistake is not easy: the only way to find all the systems that might need to be scanned and protected is to have your internal staff (who know the systems best) work with an external PCI consultant or QSA (who knows the regulation best) to determine what should be in scope for your particular environment. Primarily, avoid these mistakes by knowing, and being able to describe, all the business processes that touch card data. This takes care of the known, authorized locations of card data (which is very important!) and can give you more ideas on scope reduction and on reducing card data storage and processing. In addition, even though data discovery technologies are not mandated by PCI, it is advisable to use them to discover other locations of card data that are not authorized and not known to be part of a legitimate business process. The latter can be either eliminated or documented and added to PCI DSS scope: these are the only two choices, and “ignored” is not one of them.
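Simple card data discovery of the kind described above can be approximated by combining a digit-run pattern with the Luhn checksum that all valid PANs satisfy. Commercial discovery tools add far more (file formats, encodings, BIN ranges), but a minimal sketch with an invented sample string conveys the idea:

```python
import re

def luhn_valid(number: str) -> bool:
    """Luhn checksum used by payment card numbers."""
    checksum = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

# 13-16 consecutive digits: the common PAN lengths.
PAN_RE = re.compile(r"\b\d{13,16}\b")

def find_candidate_pans(text: str):
    """Digit runs that look like PANs and pass the Luhn check."""
    return [m for m in PAN_RE.findall(text) if luhn_valid(m)]

sample = "order=1234, pan=4111111111111111, phone=5551234567"
print(find_candidate_pans(sample))  # ['4111111111111111']
```

Note how the 10-digit phone number and the short order number are rejected by length, while a random 16-digit string would usually fail the Luhn check; this keeps the false-positive rate manageable when sweeping file shares and databases for unauthorized card data.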
Keeping these mistakes in mind should make your PCI compliance experience a lot less painful.

Case Study

The case studies below illustrate how vulnerability management for PCI is implemented in a few real-world organizations.

PCI at a Retail Chain

This case study covers how PCI Requirement 11 was dealt with at a large retail chain in the US Midwest. The Unnamed Retailer Inc. did not perform any periodic network vulnerability scanning and did not use the services of a penetration-testing firm, which put them in clear violation of PCI DSS rules. Their IT security staff sometimes used freeware tools to scan a specific system for open ports or vulnerabilities, but all such efforts were ad hoc and not tied to any program.
As the PCI DSS compliance deadline approached, the company had to start scanning every quarter using a PCI ASV. They chose to deploy service-based vulnerability scanning from a major vendor; the choice of vendor was determined after a brief proof-of-concept study.
Initially, they went from having no knowledge of their vulnerability posture to having too much information, since they decided to scan all the Internet-facing systems. Later, however, they reduced the scope to what they considered to be “in-scope” systems, such as those processing payments (few of those systems are ever visible from the Internet, however) and those connected to such systems.
Later, their scanning vendor introduced a method to scan the internal systems, which was immediately used by the retailer. However, it turned out that finding the internal systems that are in-scope is even more complicated since many systems have legitimate reasons to connect to those that process credit-card transactions. For example, even their internal patch management system was deemed to be in-scope since it frequently connected to the transaction processing servers.
As a result, their route to PCI vulnerability management nirvana took a few months following a phased approach. Implementation proceeded through these phases:
1. All Internet-facing systems that can be scanned
2. A smaller set of Internet-facing systems that were deemed to be “in-scope”
3. A set of internal systems that either process payments or connect to those that do
4. From there, the company will probably move to scanning select important systems that are not connected to payment processing but are still critical to its business.
Even though the organization had chosen not to implement intrusion detection earlier, their QSA strongly suggested that they look at some options in this area. The company chose to upgrade their firewalls to Unified Threat Management (UTM) devices that combine the capabilities of a firewall and a network IPS. An external consultant suggested their initial intrusion prevention rule set, which the company deployed.
Overall, the project ended up as a successful, if longish, implementation of PCI Requirement 11 using a scanning service as well as UTM devices in place of their firewalls. The organization did pass the PCI assessment, even though they were told to also look at deploying file integrity monitoring software, which is offered by a few commercial vendors.

PCI at an E-Commerce Site

This case study is based on a major e-commerce company’s implementation of a commercial scanning service, penetration testing by a security consultancy, and a host IPS and file integrity monitoring on critical servers.
Upon encountering PCI compliance requirements, Buy.Web Inc. assessed its current security efforts, which included the use of a host IPS on its demilitarized zone (DMZ) servers as well as periodic vulnerability scanning. The team realized that it additionally needed to satisfy the penetration-testing and file integrity-checking requirements to be truly compliant. The IT staff performed extensive research on file integrity monitoring vendors and chose the one with the most advanced centralized management system (to ease the management of all the integrity-checking results). They also contracted a small IT security consultancy to perform the penetration testing.
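Whatever product implements it, the core of file integrity checking is straightforward: hash the monitored files once to create a trusted baseline, then periodically re-hash them and report differences. A minimal sketch (real FIM products add centralized management, real-time detection, and tamper-protected baselines):

```python
import hashlib
import os

def hash_file(path: str) -> str:
    """SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths):
    """Record a trusted hash for each monitored file."""
    return {p: hash_file(p) for p in paths}

def check_integrity(baseline):
    """Return the monitored files that changed or disappeared."""
    changed = []
    for path, expected in baseline.items():
        current = hash_file(path) if os.path.exists(path) else None
        if current != expected:
            changed.append(path)
    return changed
```

In practice, the baseline itself must be stored where an attacker who alters a monitored file cannot also alter its recorded hash; this is one reason centralized management of integrity-checking results matters.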
In addition, the team used its previously acquired log-management solution to aggregate the host IPS and file integrity-checking data, creating a single data presentation and reporting interface for their PCI assessors. Overall, this project was a successful illustration of a mature security program that needed only to “fill the gaps” to be PCI compliant.

Summary

To conclude, the PCI DSS document covers a lot of activities related to software vulnerabilities. Since such guidance is spread over multiple requirements, even belonging to multiple sections, let us summarize the areas it covers. Table 8.3 lists the vulnerability management activities that we covered in this chapter.
Table 8.3 Vulnerability Management Activities in PCI DSS
Vulnerability-Related Activity Prescribed by PCI DSS        Requirement
Secure coding guidance in regular and Web applications      6
Secure software deployment                                  6
Code review for vulnerabilities                             6
Vulnerability scanning                                      11
Patching and remediation                                    6
Technologies that protect from vulnerability exploitation   5, 6, and 11
Site assessment and penetration testing                     11
As a result, PCI offers a fairly comprehensive, if a bit jumbled, look at the entire vulnerability landscape, from coding to remediation and mitigation. Thus, you need to make sure that you look for all vulnerability-related guidance while planning your PCI-driven vulnerability management program. While focusing on vulnerability management, don’t reduce it to patch management – do not forget custom applications written in-house or by partners. You need to have an ongoing program to deal with discovered vulnerabilities. Wherever you can, automate the remediation of discovered vulnerabilities, and focus your attention on what you cannot automate. Finally, make sure that you recheck for vulnerabilities after they are reported to be fixed.
References
[1] Qualys website, www.qualys.com; [accessed 12.07.09].
[2] Williams, A.T.; Nicolett, M. Improve IT security with vulnerability management. Gartner research note, May 2, 2005, www.gartner.com/DisplayDocument?doc_cd=127481; [accessed 08.08.09].
[3] Prioritized Approach for PCI DSS 1.2, www.pcisecuritystandards.org/education/prioritized.shtml; [accessed 12.07.09].
[4] Validation Requirements for Approved Scanning Vendors (ASVs). PCI Council, www.pcisecuritystandards.org/…/pci_dss_validation_requirements_for_approved_scanning_vendors_ASVs_v1-1.pdf; [accessed 08.08.09].
[5]
[6] Technical and Operational Requirements for Approved Scanning Vendors (ASVs). PCI Council, www.pcisecuritystandards.org/…/pci_dss_technical_and_operational_requirements_for_approved_scanning_vendors_ASVs_v1-1.pdf; [accessed 08.08.09].
[7] Qualys Criteria for PCI Pass/Fail Status, www.qualys.com/products/pci/qgpci/pass_fail_criteria/; [accessed 12.07.09].
[8] Security Scanning Procedures, Version 1.1. PCI Council (2006).
[9] Qualys Laws of Vulnerabilities Research, http://laws.qualys.com; [accessed 12.07.09].