© Marvin Waschke 2017

Marvin Waschke, Personal Cybersecurity, 10.1007/978-1-4842-2430-4_8

8. What Has the Industry Done?

Have They Made Any Progress?

Marvin Waschke

Bellingham, Washington, USA

When we read about half a billion passwords stolen from a sophisticated Internet service, it might be hard to believe, but software is more secure today than it has ever been. The industry has come a long way.

The Security Legacy

The first commercial software project I worked on did not have a quality assurance program. When sales sold a copy of the system, the development manager went around to the developers’ desks gathering a handful of eight-inch floppy disks containing each developer’s current best work. The handful of disks went to the new customer. Often, a developer went out to the site with the floppies to put the code together and get it running.

When a customer had an issue, a developer was likely to go back on site to diagnose the problem and write code to fix it on the spot. Consequently, no two customers were certain to have the same code. A defect fixed for one customer was not always fixed for other customers. Learning to use the software often amounted to learning to dodge the pitfalls and landmines of the implementation.

After I had worked there for about a year, management asked one of the technical writers to take on product testing after she complained that the software was difficult to document because it often did not work. I remember developers grumbled about fixing things the tester found: little nit-picky things like a miskey that would erase critical data files without warning or the system inexplicably dying and forcing the user to start over. Today, defects like these would be called show stoppers and would be instantly escalated to the highest level. In those days, users were just grateful to be able to process their accounts receivable in a few hours instead of working weekends to get the bills out on time. A few bugs were nothing compared to the days of drudgery that the computer eliminated.

I cringed while writing the paragraphs above because the practices at my old employer were so far from current software development management practice. Yet, this was a successful software business that still flourishes where others have failed. At that time, they produced well-respected products, which delighted their customers, despite deficiencies in development methodology and what we would today call dismal product quality. Good customer service compensated for the many flaws and defects in the products, but more than anything else, expectations were much different. Customers were willing to put up with almost anything to get the productivity jumps they craved, and a few bugs were an acceptable price for the productivity gain.

Of course, the practices of my former employer have evolved with the industry. Today, a product built in the old cowboy style would be considered amateurish and impossibly expensive to support, falling far below customer and investor expectations.

Software development and customer expectations of reliability and consistency have changed. Practices that worked when expectations were limited and customer bases were small are not even close to acceptable now.

The same has happened to computer security. In the days of isolated personal desktops and limited networks, security did not have to be built into software. Locks on the doors and windows were enough. And when the locks were broken, the intruder probably did not know what to do with the computing gear anyway. As devices became connected and computing became more exposed to outsiders, security became more important, but old habits change slowly and security was slow to receive proper attention in many mainstream products.

As the importance of security began to be acknowledged, engineers still tended to think of it as an add-on. When a project began, security would be given a prominent place in project plans. But as the project progressed, strange things would happen. Security is often a hindrance to rapid development. Much of development consists of making a small addition, testing it, observing the effect, correcting it, and making more additions. This cycle is repeated many times, often many times an hour, in developing a product. Little things like entering a password or updating a security token are irritating time sinks in the cycle. Developers are always impatient and canny. They find ways to avoid these annoyances. They might add a backdoor to bypass security, or a switch that turns off security. Or, most dangerous of all, they decide to put off writing security checks until the module is working properly without them.

Of course, typical software projects all begin to slide, that is, get behind schedule. Usually, to bring the project back on schedule, unnecessary features are trimmed and the project is streamlined. At this point, the temptation to scale back security plans and leave in temporary security holes is strong. Often security features are relegated to the next release, which could be a long time coming. After all, the customers are clamoring for functionality. Only a minority care about security. Until a breach occurs, security is an annoyance, not a feature. Given a choice, most product managers will choose a feature that might capture customer attention and garner sales, and pass on boring, annoying security checks. Consequently, products, especially consumer products, were often released with weak or no security built in.

These products, utilities, and operating systems became sitting ducks for criminal hackers. They are the legacy we have today.

The Turning Point

As you saw in the last chapter, prosecution of cybercrime is becoming more effective, but the challenges are still tough. Too many clever cybercriminals get away with their crimes. On the other hand, today’s hardware and software have been made more resistant to intrusion and more resilient to attack, so the cybercriminal’s job has become more difficult.

It’s hard to pinpoint a date or name an event, but, around 2000, the tide began to turn. The industry began to realize that cybersecurity was not being taken seriously enough to keep up with the increasing penetration and criticality of the role of computers in almost every aspect of culture and society. At the same time, cybercriminals were becoming more active and visible. The computing industry became aware that cybercrime and lack of security could be a significant deterrent to current and future business.

It is no surprise that security consciousness grew as the Internet began to be a necessity in homes and businesses. Some of the contradictions between a free and open Internet and safe and reliable computing had become evident. Networked computers had become the norm, and criminal hackers were building steam. And it was becoming evident that enforcing cybercrime laws is demanding and requires training and resources that are not easy for law enforcement to obtain.

The industry has stepped up. Like traditional crime, cybercrime can be controlled in two ways. You can catch more housebreakers when the police have more personnel, better 911 service, faster patrol cruisers, and more effective crime scene investigation tools. Alternatively, you can make housebreaking harder to commit and easier to prevent by designing better locks, more secure windows, and motion detectors.

The industry has invested heavily in tools and techniques that make computing more secure and private. Cybercrimes have become harder to commit and easier to stop.

The Microsoft Security Development Lifecycle

Microsoft is a representative of the changing attitudes in the software industry. It was certainly a prominent software vendor at the turn of the millennium, and it was also quite typical. Software and hardware companies realized that old practices that ignored or downplayed vulnerabilities would simply not do. On January 15, 2002, Bill Gates sent an email to Microsoft employees that would be called the “Trustworthy Computing Memo.”1 In the memo, Gates stated that security, availability, and privacy were the new priorities for Windows development. Issues in these areas would be fixed first, before anything else, and would be the first considerations in all designs. Shortly thereafter, in an astonishing assertion of the seriousness of the directives in the memo, the Windows development division shut down while developers received training in secure and reliable coding and design practices.

Later, Microsoft developed a set of security guidelines, in the form of a documented development process, which it published as “The Microsoft Security Development Lifecycle (SDL) Process Guidance.”2 It also made publicly available many of the tools it uses for testing. With few exceptions, all Microsoft development teams are required to follow the guidance. In addition, other companies in the industry, such as Cisco and Adobe, have chosen to adopt the Microsoft SDL.

The choice of a process rather than a set of security features or rules was important and wise. Security practices and mechanisms change as the industry progresses. Processes change also, but their most important property is the ability to adapt to evolving technology while preserving core goals.

The SDL is a classic feedback cycle (see Figure 8-1) that begins with security training for the development team. All team members are required to participate in continuing security training and maintain adequate levels of certification for their roles in the project. Next, security requirements are established, including plans for testing and assessment. The design stage includes analysis of attack surfaces and models of potential threats. During implementation, approved tools and components are used and best coding practices are followed. Code is reviewed by peers. The completed code is verified through various forms of testing and analysis, including tiger team or white hat testing in which professionals play the part of hackers and attempt to invade the product. After the product is released, its behavior and user issues are recorded and classified in preparation for the next round of the cycle.

Figure 8-1. The Microsoft Security Development Lifecycle is a classic feedback loop

The original SDL was formulated as a waterfall model, in which each stage is separated from the next as the development flows over an irreversible waterfall from one stage to the next, often over two or three years. This development methodology is not used as much today as it once was. Now, developers prefer to use agile development. They break projects down into smaller efforts called sprints that are completed and released in a few weeks. Each sprint results in a working product with more features than the previous sprint. As features are added, the design is adapted to the requirements revealed by the results of the sprint. The SDL has been adapted to this more rapid development by performing the entire SDL for each sprint. Tailoring the SDL to the precise features added in the sprint reduces the overhead of repeating the SDL many times.

The details of the Microsoft SDL are important to software developers because they define a set of best practices for developing secure software. SDL is not the only such guideline. Most established software companies have similar processes. Often, the Microsoft SDL is the model for security processes, but it is not the only model. Groups and agencies such as the Software Engineering Institute, the Federal Aviation Administration, and the Department of Homeland Security have all published guidelines for secure software development.3 These guidelines differ in emphasis and detail, but they are all similar.

For personal computer users, the guidelines themselves are dry stuff and not of much interest. However, the guidelines are crucially important to computer users in one way: SDL and similar guidelines have made the life of hackers and cybercriminals much harder. The level of hacking, both in quantity and quality, at the turn of the millennium was insubstantial compared to today. Without processes like SDL in place for the last decade, computing would be in a sorry state under today’s onslaught. Everyone deplores the weaknesses that remain in software today, but the truth is that if the guidelines had not been formulated and followed, the bar for a successful hacking exploit today would be much lower.

There is another aspect of security guidelines that is important to personal computer users: the guidelines work. Software that is produced under a strict security design and implementation regimen is safer than software that is not. Large companies like Microsoft, Google, Oracle, IBM, Apple, and so on follow SDL or similar processes. Good practices do not guarantee that software and hardware from these companies have no security defects; we all know that flaws occasionally appear in all vendors’ products, but software that is not built following a security-conscious process is much more likely to have hackable flaws. In addition, flaws in software developed under security guidelines are usually easier and quicker to diagnose and repair because the guidelines assume that flaws will be found and the software is designed to be patched. The guidelines include planning and budgeting for regular development and delivery of security patches.

Lone developers, or developers in a small shop or start-up, often do follow secure practices, but caution is needed. In a startup, a week of development time can mean life or death for the nascent business. This kind of pressure is not conducive to good security decisions. Organizations that are ignorant of or indifferent to good security practices do exist. Designers and engineers who connect conventional products to the Internet without being familiar with network security may inadvertently produce insecure designs. Apps from sources without known reputations for security must be treated with caution. The major vendors have public track records and reputations they are loath to lose. Does that mean you should never use an app from a small developer whom you have never heard of? No. There’s great and safe software out there from small developers, often at higher quality and better price than the offerings of the big guys. But you should realize that there is some risk. Chapter 9 discusses some practices to follow when installing software from a less well-known source.

Patch and Update Processes

Perhaps the most important consequence of secure development processes is the development and proliferation of automated patch and update processes. One clue that the development team paid attention to security in a product is the presence of an automatic update process. We now have over 50 years of experience writing software, and we should know by now that practically useful computer applications are always vulnerable to security breaches. No computer can ever be expected to be totally secure and error-free. Vendors can eliminate many issues by following development best practices, but these practices must include an active and ongoing search for security flaws after the product is released. In addition, when defects are found, mechanisms must be in place to develop patches for the flaws rapidly and disseminate the patches expeditiously.

Why are automated patches and updates so important? When processes are automated, computer users no longer have to worry about applying the latest patches and updates. No one likes to find out that they have been stung by a virus that would have been stopped if they had remembered to apply the latest patch.

At many large corporations, patches must be tested with customized and bespoke applications4 to make sure the patches won’t cause problems with critical production processes. The IT departments in these organizations frequently have entire teams who do nothing but test and manage the application of patches and updates. Without automation, individuals would have to do something similar. Although individuals usually don’t have customized and bespoke software to contend with, the amount of time and skill required for sound patch and update management is not trivial. For most of us, the time required would drive us to neglect this critical activity, and it would not be long before we landed in the soup. The security landscape moves so fast and our software is so complex that an individual user cannot keep up with everything, even on the phones they hold in their hands. Automation takes the onus off the individual to choose when to apply patches.
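
To make the idea concrete, here is a minimal sketch, in Python, of the version check at the heart of an automatic updater. The update server URL and the layout of its version file are hypothetical, invented for illustration; a real updater would also verify a digital signature on the downloaded package before installing anything.

# Minimal sketch of an automatic update check.
# The server URL and version file layout are hypothetical.
import json
import urllib.request

INSTALLED_VERSION = (2, 4, 1)  # version of the software on this machine
UPDATE_URL = "https://updates.example.com/myapp/latest.json"  # hypothetical endpoint

def fetch_latest_version(url):
    """Ask the vendor's update server which version is current."""
    with urllib.request.urlopen(url, timeout=10) as response:
        info = json.load(response)  # e.g. {"version": "2.4.3", "package": "..."}
    return tuple(int(part) for part in info["version"].split("."))

def update_needed(installed, latest):
    """Tuple comparison orders versions component by component."""
    return latest > installed

if __name__ == "__main__":
    latest = fetch_latest_version(UPDATE_URL)
    if update_needed(INSTALLED_VERSION, latest):
        print("Patch available:", latest)
    else:
        print("Software is up to date.")

The point of automation is that a check like this runs on a schedule without the user ever seeing it; the decision of when to patch is taken out of the individual’s hands.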

Zero-Day Exploits

Zero-day exploits, attacks that take advantage of a flaw that was previously unknown and therefore unstoppable, are rare. Government agencies such as the U.S. National Security Agency, law enforcement such as the Federal Bureau of Investigation, and military cyberwarfare groups are willing to pay millions of dollars for powerful zero-day exploits. Software and hardware vendors are also willing to pay for them. And it almost goes without saying that there is a ready black market on the criminal dark web.

Zero-day exploits are scarce and becoming scarcer for two reasons. The first is that the industry, using better development methodologies and architectures, is producing more secure products with fewer exploits to be found. Second, the industry has built a robust system for discovering security flaws and patching them.

In a perverse way, the value of these zero-day exploits is increased by more secure architectures and powerful automated patching and updating services that are now available for all major operating systems and software. Today, zero-day exploits are hard to find and they don’t stay zero-day for long. During their short life span, these exploits can be extremely valuable to an organization that knows how to use them to further their legitimate or illegitimate goals. High value, scarcity, and short life-span combine to induce competition that drives high prices.

A large network of researchers continuously searches both for zero-day exploits and for the effects of zero-day exploits. Many companies offer bounties to these researchers when they find a new flaw in the company’s products. When exploits are found, they are typically fixed and communicated to the security community.

Public and Private Vulnerability Management

Zero-day exploits are just one kind of vulnerability that must be managed to make computing safe. Vulnerabilities are flaws or weaknesses in software that can be used to violate the security policies of a system. For example, a flaw that causes a program to produce incorrect output is a defect, but it is not a vulnerability unless the bogus output could cause or permit a security violation. Frequently, a vulnerability will shift an application into a state in which it yields control to an unauthorized invader or hacker.

In the United States, a set of government and non-government institutions has grown up to coordinate responses to situations where computer security is compromised. These institutions work in concert with private enterprise, academics, and the public to discover and rectify vulnerabilities, flaws that may lead to security breaches. Unlike the FBI and other law enforcement agencies, these institutions do not exist to catch criminals and bring them to justice, although they frequently help law enforcement catch criminals. These groups’ main concern is making cybercrimes impossible.

Their approach is two-pronged: when a security breach occurs that is not obviously due to a previously known vulnerability or technique, these groups examine the breach and determine whether a new vulnerability is involved. Then they work with the vendor of the system to develop a remedy. These groups also work to discover vulnerabilities that have not been used in an actual breach, or present theoretical flaws that could cause a vulnerability in some future situation. These too are documented and made available to systems vendors, who may develop fixes or use the theoretical deficiencies to strengthen future designs.

National Vulnerability Database (NVD)

The heart of the system is the National Vulnerability Database. The NVD is maintained by the Department of Homeland Security and the National Institute of Standards and Technology (NIST). In the NVD, a vulnerability is assigned an overall severity plus rankings for exploitability (how technically difficult an exploit would be), the kind of damage that could be done, and so on. The entry also contains references to advisories, solutions, and tools that may be used to detect, research, or remediate the vulnerability. The NVD depends on the Common Vulnerabilities and Exposures (CVE) dictionary to unambiguously identify vulnerabilities. Each vulnerability in the NVD has a corresponding entry in the CVE. See Figure 8-2. For more information on the CVE, see below.

Figure 8-2. The National Vulnerability Database is a central repository for computer security vulnerability information that catalogs vulnerability issues and solutions
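
The NVD is published in machine-readable form as well as on the web, so security tools can look entries up automatically. Below is a minimal sketch of such a lookup in Python; the endpoint is the NVD’s public JSON service, and the exact field names in the response should be treated as assumptions to check against the current NVD documentation.

# Sketch: look up one CVE entry in the National Vulnerability Database.
# The endpoint and response field names are assumptions; consult the NVD
# documentation for the current schema.
import json
import urllib.request

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def lookup_cve(cve_id):
    """Fetch the NVD record for a single CVE identifier."""
    with urllib.request.urlopen(NVD_API + "?cveId=" + cve_id, timeout=10) as response:
        return json.load(response)

if __name__ == "__main__":
    record = lookup_cve("CVE-2016-5672")  # the Intel Crosswalk flaw discussed later
    for item in record.get("vulnerabilities", []):
        cve = item["cve"]
        print(cve["id"])
        # Each entry carries descriptions, severity metrics, and references
        # to advisories and fixes, as described above.
        for description in cve.get("descriptions", []):
            if description.get("lang") == "en":
                print(description["value"])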

The NVD is a go-to public database open to anyone who wants to know about current software vulnerabilities. The team responsible for the NVD is the United States Computer Emergency Readiness Team (US-CERT). US-CERT was formed to help protect U.S. government agencies from cyberattacks, but it has grown beyond that limited role to act as a central clearinghouse for coordinating the engineering battle against all cybercrime.

Computer Emergency Response Team Coordination Center (CERT/CC)

CERT/CC is a division of the Software Engineering Institute (SEI). SEI is a federally funded research and development center based at Carnegie Mellon University. CERT/CC was formed in the late 1980s when cybersecurity began to emerge as an important topic. It is a research organization devoted to cybersecurity. CERT/CC predates US-CERT. Although the two teams work together, they are distinct and have different functions.

CERT/CC brings together government, academic, and private engineers and researchers who focus on computer security-related issues and practices. They produce research papers and participate in security conferences. They collect and analyze data on security threats and solutions, develop security tools, and perform security analysis. They support the private sector as well as US government agencies such as the Department of Defense and Homeland Security.

CERT/CC collects, analyzes, and coordinates responses to vulnerabilities. The process begins with a report submitted to the CERT/CC site. Vendors, academics, independent researchers, the public, and CERT/CC staff all submit reports, which are examined, cross-referenced, and prioritized by severity. Usually, vendors are informed and given a chance to remediate the vulnerability before the reports are made public. When reports are made public, they are published in the CERT/CC Vulnerability Notes database.

Common Vulnerabilities and Exposures Dictionary (CVE)

MITRE Corporation is a non-profit research organization. MITRE maintains CVE, which is a list of publicly known computer security vulnerabilities. CVE assigns standard identifiers for vulnerabilities. Standard identifiers facilitate cooperation among investigators, researchers, engineers, and vendors in addressing vulnerabilities.5

The community working to reduce and eliminate vulnerabilities is large. MITRE Corporation is a US organization that works extensively with the US government. Major software and computing hardware vendors, such as Microsoft, Apple, Google, Intel, and Cisco, all participate in various capacities. However, the CVE community is international, with contributors from many countries. The International Telecommunication Union (ITU) has adopted CVE as part of its standards.6

Without some form of unique identifier, descriptions of vulnerabilities can easily be ambiguous. If vulnerabilities are not clearly and unambiguously identified, fixes can be misapplied, work duplicated, and issues overlooked. Obtaining a CVE identifier is the first step taken after a vulnerability is discovered. Often, the identifier will be assigned by the vendor of the vulnerable software. For example, Microsoft is the numbering authority for Microsoft issues and Hewlett Packard is the numbering authority for HP issues.

CVE-ID documents contain a brief description of the vulnerability and references to other documents that help identify it. The CVE entry also references the NVD. The CERT/CC Vulnerability Notes also rely on CVE identifiers.
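
The identifiers themselves follow a simple, fixed pattern: the prefix CVE, the year in which the identifier was assigned, and a sequence number of four or more digits. Here is a short sketch of the kind of format check a tool might run before cross-referencing the databases:

# Sketch: validate the standard CVE identifier format, CVE-YYYY-NNNN,
# where the sequence number has four or more digits.
import re

CVE_PATTERN = re.compile(r"^CVE-\d{4}-\d{4,}$")

def is_valid_cve_id(identifier):
    """Return True if the string is a well-formed CVE identifier."""
    return bool(CVE_PATTERN.match(identifier))

assert is_valid_cve_id("CVE-2016-5672")    # the Crosswalk flaw discussed below
assert not is_valid_cve_id("CVE-16-5672")  # the year must be four digits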

US-CERT, CERT/CC, and Private Enterprise Working Together

The current mechanism for identifying and remediating computer vulnerabilities relies on a community of both private and public groups that work together with the goal of discovering and remediating vulnerabilities, both in the US and internationally. The cooperation takes many forms and is quite flexible. Rather than try to explain the relationships in the abstract, here is an example that shows many parts working together.

The scenario begins with a small security consultancy, possibly only a principal or a few partners. In the last few years, many software companies have established bounty programs that pay researchers for discovering security flaws in their software. This practice has given rise to a swarm of bounty hunters who get some income from finding flaws and collecting the bounties.

Crosswalk is an Intel product for developing apps that will run on both Android and Apple iOS. A security researcher from a small consultancy discovered a flaw in Intel’s implementation of Crosswalk that could be exploited by a hacker to launch a man-in-the-middle attack that would allow the hacker to listen in and interfere with supposedly secure communications.

The researcher, or perhaps bounty hunter, reported the vulnerability to Intel. Later, the researcher reported the vulnerability to CERT/CC. CERT/CC became involved and mediated communication between Intel and the researcher. CERT/CC obtained a CVE identifier for the vulnerability so it could be unambiguously cross-referenced in the community. Intel fixed the problem and sent the fix to the researcher. The researcher tested and confirmed the fix, and a public disclosure date was set. Until the disclosure date, the security researcher, CERT/CC, the CVE, and Intel kept the vulnerability confidential to discourage hackers from using the vulnerability before a fix was available. On the disclosure date, Intel published an account of the vulnerability and fix on their website and CERT/CC issued a public Vulnerability Note. Two days later, US-CERT published a Vulnerability Summary for the assigned CVE-ID on the Homeland Security-NIST National Vulnerability Database. The Vulnerability Notes, the NVD, and the CVE all cross-reference each other.7 See Figure 8-3.

Figure 8-3. Example steps from discovery of a vulnerability to publication in the National Vulnerability Database

This vulnerability and its resolution, chosen almost at random, are an example of cooperation between private enterprise, government agencies, and non-profits addressing significant security vulnerabilities before they become issues.

The vulnerability was rated as medium severity with a high exploitability score. In other words, worse vulnerabilities are possible, but this one would have been easy to exploit if a hacker had discovered it. As far as anyone knows, the flaw discussed here was never exploited by a hacker, although it easily could have been.

In this case, the vulnerability was resolved by a patch from the vendor. Had the issue been a virus or other malware, antimalware tool developers might have responded by adding code to their tools to catch the malware in their scans, and IT departments might have been alerted to watch for certain symptoms on their systems.

This example shows the importance of the automatic updating and patching mechanisms that are included now in most operating systems and software products. A vulnerability like the one described is important to users when a hacker takes advantage of it to steal information or wreak other havoc, but until that occurs, the vulnerability is benign. Uninformed users might even complain about the fix because they might see warning messages that they had not seen before. The symptoms of a man-in-the-middle attack are usually subtle and users may unknowingly prefer suffering the attack to the annoyance of repeating a transaction after an attack is stopped.

Under circumstances like these, without knowing about the vulnerability and its potential, end users have little incentive to apply this Crosswalk patch. Waiting for the next release is the path of least resistance, but waiting also plays into the hands of hackers, giving them a long window in which to exploit the weakness. The CERT/CC Vulnerability Notes and the US-CERT National Vulnerability Database ordinarily do not release information on an issue until some fix is available, but fixes do not always get applied promptly. Hackers are in the habit of scanning the databases for new opportunities. Users who do not put fixes in place promptly are open to attack. Hackers do often find their own vulnerabilities to exploit, but they also take advantage of the vulnerability databases.

Automated patching and updating does a lot to shorten the susceptible period between publication of the vulnerability and fixes being applied to users’ systems. The user is usually the least informed player and the least capable of making informed decisions about what to patch or update and when to do it.

Security Theory

People have a natural tendency to think of security in a piecemeal fashion. When a criminal breaks through a fence, we rush to repair the hole in the fence, which is an obvious and expedient reaction, but it may be wiser or more cost-efficient to examine the problem at a higher level, as perimeter defense, rather than as a hole to be patched. The more abstract view could lead to replacing the fence with a different kind of barrier, adding a motion detector to the alarm system, or moving valuables to a different location. The conclusion may still be to fix the hole in the fence, but ignoring the alternatives could be a dangerous waste.

One way to look at computer security on a more general level is to examine where computing is vulnerable. The goal is to realistically predict where computing is vulnerable and what is at risk before code is written, and to take steps to preserve security in the face of unpredicted exploits. This is a complex subject that will only be touched on lightly here, but it is important because it helps sort out where to look for security issues and makes sense of the effort (or lack of effort) by developers to improve cybersecurity.

The Security Triad

At the foundation of most security theory is a triad of goals: confidentiality, integrity, and availability. See Figure 8-4.

Figure 8-4. Security is a triad of goals

When a system is confidential, data and processes are only available to actors who have a legitimate right to access. Only the owner of a bank account and bank officials should have access to the amount in the account. Other account holders and bank employees not specifically assigned to managing the account should be barred from access.

When a system has integrity, data and processes will only be affected by authorized actors in a regulated fashion. A $100 deposit should never disappear or change to $75 without a clearly defined and authorized audit trail and reason.

Finally, a system has availability when legitimate actors can get to data and processes in an orderly and predictable manner. If an authorized user is promised access to their account balance at 2 p.m. on Tuesdays, the user should, barring extraordinary events, always have access at the promised time.
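
The triad can be made concrete with a small sketch. The account class below is purely illustrative, not real banking code: the owner check on reads stands for confidentiality, the audit trail required for every change stands for integrity, and the promise that a legitimate read always succeeds stands (crudely) for availability.

# Illustrative sketch of the security triad applied to the bank account example.
# Not real banking code; the names and rules are invented for illustration.

class Account:
    def __init__(self, owner, balance):
        self._owner = owner
        self._balance = balance
        self._audit_log = []  # integrity: every change is recorded

    def get_balance(self, requester):
        # Confidentiality: only an actor with a legitimate right to the data
        # (here, just the owner) may see the balance.
        if requester != self._owner:
            raise PermissionError(requester + " is not authorized")
        return self._balance  # availability: legitimate reads always succeed

    def deposit(self, requester, amount, reason):
        # Integrity: a deposit never silently appears or changes; every change
        # carries an authorized actor and a reason in the audit trail.
        if requester != self._owner:
            raise PermissionError(requester + " is not authorized")
        self._balance += amount
        self._audit_log.append((requester, amount, reason))

account = Account("alice", 500.00)
account.deposit("alice", 100.00, "payroll")
print(account.get_balance("alice"))  # 600.0
# account.get_balance("mallory") would raise PermissionError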

As a computer user, you can use the security triad to systematically review the security features of a new laptop, tablet, or smartphone. The triad can help point out the weaknesses and strengths of applications you consider for installation. It can help you evaluate the safety of items from the Internet of Things you might place in your home or office and connect to your network.

When I consider installing a new application, I ask myself if the application exposes data in new ways, raising confidentiality issues. The sharing features of the newer versions of office tools like word processors are an example; they prompted me to look at the default access permissions and make sure information I do not intend to expose will not be exposed. I then go on to ask questions about integrity and availability. This is an effective way to avoid security surprises with new functionality.

A key problem for computer security theorists is to discover ways in which confidentiality, integrity, and availability can be built into computer software and hardware in such a way that building applications that violate these principles is difficult or impossible. Computer operating systems such as Microsoft Windows, Apple iOS, OS X, Android, or Linux are the point where software and hardware come together, so operating systems are the focus of much of this effort.

Threat Modeling

One of the techniques developers use to design and construct more secure systems is called threat modeling. A developer, or a group of developers, sits down to imagine all the threats that a system may be subject to and the consequences if the threats were carried out. The security triad provides a systematic pattern for thinking about threats.

Simply taking time to imagine the possible threats is a big step forward from the old days when security was an afterthought. Developers have taken threat modeling beyond a lightly structured “what if” exercise. The details of threat modeling techniques vary, but they all identify the data processed by the system, the users of the system, who the adversaries of the system might be, and what those adversaries might be after. They also identify how data moves through the system and where it is stored. The next step is to spot the points where the system is vulnerable. That is easier than it may appear because almost all vulnerabilities occur when data moves from one module to another. One of the key tools in threat analysis is a data flow diagram that delineates the flow of data from one module to another.

With the system assets and points of vulnerability all listed out, the threat modelers evaluate each threat, rating them by the amount and seriousness of damage they could cause. The results of this evaluation are fed back into the project plans and developers are assigned to mitigate the threats.
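
Here is a minimal sketch of what the output of such an exercise might look like. The modules, data flows, and scores are invented for illustration; real threat modeling templates and tools are far more detailed.

# Sketch: a tiny threat model. Data flows between modules are listed, and each
# identified threat is rated by damage and likelihood. All entries are invented
# examples for illustration.

data_flows = [
    # (source module, destination module, data carried)
    ("browser form", "web server", "password"),
    ("web server", "database", "account records"),
    ("database", "backup service", "account records"),
]

threats = [
    # (data flow index, threat description, damage 1-10, likelihood 1-10)
    (0, "password intercepted in transit", 9, 6),
    (1, "injected query alters account records", 10, 4),
    (2, "backup copied by an insider", 8, 3),
]

# Rank threats by a simple risk score so the worst feed into the project plan first.
ranked = sorted(threats, key=lambda t: t[2] * t[3], reverse=True)
for flow_index, description, damage, likelihood in ranked:
    source, destination, data = data_flows[flow_index]
    print(source, "->", destination, "(" + data + "):", description,
          "risk =", damage * likelihood)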

Threat modeling is usually an iterative process. Modeling is repeated during development, continually capturing new threats as they appear and testing the mitigation of old threats. This process replaces the old approach, in which a developer would be assigned to code a solution to a security defect whenever defects happened to show up in testing or in the field, but no one systematically looked for vulnerabilities and developed plans for eliminating them before they made it to testing.

Control Flow Integrity

One effort to build security directly into operating systems and applications is control flow integrity. Control flow integrity is important because it is an attempt to build resistance to one type of system attack into the operating system and application code, rather than addressing individual flaws.

Chapter 3 discussed how most computer hardware supports security by only allowing the most powerful and potentially destructive computer instructions to be executed when certain conditions are met.

Control flow integrity adds a layer of sophistication to the operating system software that determines the conditions when privileged instructions will be executed by looking at the flow of control from one software section to another. Researchers have identified patterns of shifts in control that indicate a program is doing something it was not intended to do.

For all the complexity of software and hardware, each core in a running computer is simply executing one instruction after another. The mechanisms that are used to determine which instruction will be executed next can be quite intricate, but they all answer a simple question: what next? If a hacker can insinuate a change into the control mechanism that will start the computer executing his sequence of instructions and abandon the legitimate sequence, the hacker has won and the computer is pwned.

There are many ways to trick your way into the control mechanism. One way is to throw a monkey wrench into the whole thing by stuffing more data into input than the program expects. Done with finesse, the excess data may be loaded into the control mechanism and will hand over control to the hacker .

If an application is programmed well, the excess data would be thrown out, either quietly or with an error message, but not all programs are written well. In fact, checking for buffer overruns (jargon for too much data) was not common until buffer overrun security breaches began to occur regularly. Now, checking for buffer overruns is a routine part of most quality assurance testing, and software engineers are trained to prevent them in their code.
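
At bottom, the defense is a length check before data is accepted. The sketch below shows the idea in Python, with a message format invented for illustration. In memory-unsafe languages such as C, omitting this kind of check is what lets excess bytes spill past the buffer and overwrite adjacent memory, including the data that decides which instruction runs next.

# Sketch: reject oversized input before processing it. The declared maximum of
# 256 bytes is invented for illustration. In memory-unsafe languages, skipping
# this check is how a classic buffer overrun begins.

MAX_MESSAGE_BYTES = 256

def handle_message(raw):
    if len(raw) > MAX_MESSAGE_BYTES:
        # Well-behaved code throws the excess out, quietly or with an error,
        # instead of letting it spill into memory it does not own.
        raise ValueError("message of " + str(len(raw)) + " bytes exceeds the limit")
    return raw.strip()

print(handle_message(b"hello"))
# handle_message(b"A" * 10000) raises ValueError instead of overrunning a buffer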

Buffer overrun vulnerabilities are becoming less common, but they have not been eliminated. Legacy code still exists with unpatched buffer overrun vulnerabilities, and buffer overruns can occur in subtle ways that good engineering and quality assurance practice can miss. Further, buffer overruns are by no means the only way control can be diverted to invaders.

Control flow integrity does not address how control is hijacked from its legitimate path. Instead, it detects when the control goes awry and raises a flag. No matter how the system was rigged by the hacker , if the program strays, control flow integrity detects the misadventure and guides it back to a safe path.
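
The principle can be sketched in a few lines. Real control flow integrity is enforced by the compiler and the operating system on machine code, not written into application code like this; the sketch only illustrates the idea that indirect transfers of control are checked against a list of targets the program is allowed to reach.

# Conceptual sketch of control flow integrity: before following an indirect
# call, check that the target is one the program legitimately contains. Real
# enforcement (for example, Control Flow Guard) happens at the machine-code
# level; this sketch only illustrates the principle.

def show_balance():
    print("balance: $600")

def log_out():
    print("logged out")

# Targets the legitimate program is allowed to reach through indirect calls.
VALID_TARGETS = {show_balance, log_out}

def guarded_call(target):
    if target not in VALID_TARGETS:
        # Control has gone awry: raise a flag instead of following the hijack.
        raise RuntimeError("control flow violation detected")
    target()

guarded_call(show_balance)  # legitimate: runs normally

def attacker_payload():
    print("pwned")

# guarded_call(attacker_payload) raises RuntimeError instead of running the payload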

Enforcing control flow is a way of approaching the problem at a higher level. Rather than eliminate the buffer overruns that cause control flow misadventures, control flow integrity measures detect deviations in flow control and stop the deviation. For example, Microsoft Windows 10 supports a feature called Control Flow Guard, which is an example of enforcing control flow integrity. Developers use features built into the operating system to write applications that detect when the flow control of their code has been diverted from its intended direction.

Although Control Flow Guard does not protect flow control from every attack, it does make code more resistant. Developers must compile Control Flow Guard into their code. When used properly, Control Flow Guard provides a layer of protection on top of the defensive coding practices that security-conscious developers have been following for a decade. Since it is a new feature, we do not know how effective it will be in preventing hackers from gaining control of applications, but if it is successful, there will be fewer exploits and security patches, which will mean that computing will be safer for everyone.

Control Flow Guard is an example of applying a high-level approach to attempt to eliminate entire classes of exploits. Most software and computer manufacturers are working hard to address security in this fashion. Microsoft is not unique in its diligence. Apple and Google are making similar efforts in the iOS, OS X, and Android operating systems.8

An ideal computing system would be impossible to subvert. That would mean that the system would only ever be run by authorized users, would always do exactly what it was designed to do, would only work with authorized input, and would always deliver output to authorized targets, never misdirecting it to unauthorized ones.

Footnotes

1 “Memo from Bill Gates,” January 15, 2002. https://news.microsoft.com/2012/01/11/memo-from-bill-gates/ . Accessed September 2016.

2 The latest version can be downloaded at www.microsoft.com/en-us/download/details.aspx?id=29884 . Accessed September 2016.

3 For an overview of published process guidelines see Noopur Davis, “Secure Software Development Life Cycle Processes,” Department of Homeland Security, Build Security In, Setting a Higher Standard For Software Assurance, July 13, 2013. https://buildsecurityin.us-cert.gov/articles/knowledge/sdlc-process/secure-software-development-life-cycle-processes#tsp . Accessed September 2016.

4 A bespoke application is written specifically for a given customer. Large enterprises often have bespoke applications that are written in house or by third parties to address the enterprise’s unique requirement. Sometimes, a bespoke application is a commercial off-the-shelf (COTS) product that has been modified to meet special requirements. Bespoke applications often cause extra expense and security issues because the issues are unique and not identified or mitigated in the industry-wide environment.

5 For more details about the CVE organization see Common Vulnerabilities and Exposures, “About CVE,” http://cve.mitre.org/about/ . Accessed September 2016.

6 See “ITU-T Recommendations, ITU-T X.1520 (04/2011),” April 20, 2011. www.itu.int/ITU-T/recommendations/rec.aspx?rec=11061 . Accessed September 2016.

7 The details are in the following: Vulnerability Notes Database, “Vulnerability Note VU#217871,” July 29, 2016. www.kb.cert.org/vuls/id/217871 . Accessed September 2016. Nightwatch Cybersecurity, “Advisory: Intel Crosswalk SSL Prompt Issue [CVE 2016-5672],” July 29, 2016. wwws.nightwatchcybersecurity.com/2016/07/29/advisory-intel-crosswalk-ssl-prompt-issue/ . Accessed September 2016.

National Vulnerability Database. “Vulnerability Summary for CVE-2016-5672,” July 31, 2016. https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2016-5672 , and “Crosswalk security vulnerability,”

https://blogs.intel.com/evangelists/2016/07/28/crosswalk-security-vulnerability/ . Accessed September 2016.

8 Don’t confuse Microsoft Control Flow Guard with network flow control, which addresses network congestion problems. The two are very different.
