© Raymond Pompon 2016

Raymond Pompon, IT Security Risk Control Management, 10.1007/978-1-4842-2140-2_2

2. Assume Breach

Raymond Pompon

(1)Seattle, Washington, USA

When intelligence folks smell roses, they look for the funeral.

—MI5 Director-General Jonathan Evans, Address at the Lord Mayor’s Annual Defence and Security Lecture, London, UK, June 25, 2012

A security professional should expect and plan for things to go wrong, especially when hostile parties are constantly attempting to break their engineering constructs. This concept is as old as the history of warfare and defensive engineering.

The Lesson of Fort Pulaski

Near the coast of the state of Georgia sits the picturesque city of Savannah and the Savannah River. The Savannah River stretches out into the Atlantic Ocean through an estuary of brackish waters dotted with small islands. The size and depth of the river have made Savannah a major seaport since before America was a nation. After the War of 1812 and the British rampage through Washington, DC, the US government realized that its major ports needed protection from sea attacks. President Madison ordered the construction of a fort on Cockspur Island—a strategically perfect location in the southern channel of the Savannah River—to stop invading ships bound for the port. Ringed with marshes and oyster beds, Cockspur Island was only nine square miles but big enough to build a military stronghold.

The Invincible

In 1829, the US Army Corps of Engineers constructed Fort Pulaski, an impregnable fortress named after a hero of the American Revolution (see Figure 2-1). In addition to the natural defenses of the Savannah River territory, which included swamps filled with native alligators, the US Army added an impressive series of defenses. At a cost of more than $26 million in today's dollars, they laid 25 million bricks to create walls that were seven feet thick—strong enough to repel any known bombardment. Inside, engineers braced the walls with thick ramparts to buffer artillery shells that made it through or over the walls. Outside, a moat that was seven feet deep and 32 feet wide surrounded the fort, with only one entrance via a drawbridge. Beyond the moat lay a flat open plain. Any soldiers landing on the beaches of the island would have to race across hundreds of yards of open plain while under constant fire from 48 smoothbore cannons covering the entire circumference of the fort. A land assault would come with high casualties. The cannons' effective range stretched nearly a thousand yards, far enough to hit any ships passing up the river toward the city of Savannah.

Figure 2-1. Fort Pulaski

Figure 2-2 shows the range of the Fort Pulaski guns. The nearby shore was out of range in both directions, so only ships on the river could fire on the fort and be fired upon. But the ships would not have seven-foot-thick walls to protect them. The fort was impenetrable.

Figure 2-2. Gun ranges of the fort

Ownership Changes Hands

Things never go as planned, and foreigners never invaded the US coast. Instead, the state of Georgia seceded from the Union in February of 1861 and joined the Confederate States of America. The Confederate Army took over the port of Savannah and stationed 385 troops in Fort Pulaski. With Savannah now a key port town in the US Civil War, the US Army found itself in the undesirable position of having to capture its own impregnable fort. By 1862, the Union Army had captured nearby Tybee Island to use as a staging ground for an assault. The Union sent several ships to test the defenses, but the scathing cannon fire quickly turned them back. The Union generals had two bad options: starve the defenders out, or storm the island with a massive strike force and accept huge casualties.

New Exploit Technology Is Introduced

Maybe there was another option. The history of engineering is tied to the history of warfare, as the problems of war inspire new technologies. Brigadier General Quincy Gilmore, a former engineer, had a crazy new idea. He knew there was new technology that the Union Army could use on Fort Pulaski: the newly developed rifled cannon, first tested only a few years earlier and never before tried in battle. Unlike the smoothbore cannon, the rifled cannon had a grooved barrel that forced the shell to spin through the air like a football. The spin acts like a gyroscope, stabilizing the shell's path and giving the cannon longer range and a straighter trajectory (a smoothbore round, on the other hand, flies more like a pitched baseball, with high pressure at the leading edge forcing a curved trajectory). Gilmore placed his guns on the shores of Tybee Island, nearly 1,700 yards from the walls of Fort Pulaski. He took his time setting up his artillery pieces in the marshy banks, safely out of range of the fort's smoothbore guns. Figure 2-3 shows the gun ranges of Gilmore's shore cannons.

Figure 2-3. Gun ranges of the Union rifled cannons on the shore

After a final, ignored plea to surrender the fort, Gilmore opened fire on the morning of April 10, 1862. With only three rifled cannons, he fired shell after shell into one particular spot on the wall of the fort. After only 30 hours, the shells had punched a neat hole through the wall. Fearing that more shells would ignite the fort's powder magazine, the Confederate garrison surrendered and the Union Army took the fort. Not a single life was lost.

This was the first true test of Fort Pulaski's defenses, and they failed catastrophically. If you visit Fort Pulaski today, you can see the hole. The world was stunned by this historic military victory. Armies everywhere had to rethink their defenses because of the rifled cannon. As shocking as this event was in 1862, it was neither the first nor the last defeat of a so-called perfect defense. Even in contemporary times, we often hear murmurs of fear regarding a "digital Pearl Harbor," tying worry about the weakness of current IT systems to a major military disaster of the past. What we may think of as invulnerable can easily become fatally weak in the face of technological advancement. In the IT profession, our advances come daily instead of over decades. It's a safe bet to assume that an attacker will eventually breach your network, despite all its best defenses. This is further complicated by the intricacy of the constantly shifting IT systems and defenses of typical organizations.

The Complexity of IT Systems

We know that the Internet is far more complex than just HTTP servers and browsers, but to non-IT folks, the Internet is the Web. Based on that worldview, let's look at only the Web. The mechanisms involved are so dizzyingly complex that the sum of all the components involved is beyond any single human's complete understanding.

The act of opening a single web page is an intricate collection of interwoven technologies, standards, and transactions. You type a site's name into your browser's URL bar, let's say www.apress.com, and press Enter. Your local machine needs to do a Domain Name System (DNS) query to match www.apress.com to an IP address. To do this, it sends a DNS query to whatever DNS server was assigned to your machine. That server likely does not know the answer, so it must pass the query upstream to another DNS server. The query goes upstream again to the authoritative ".com" DNS servers (wherever they are), which pass it to the authoritative "apress.com" DNS server run by Apress, which then returns the IP address that the "www" web server's name resolves to.

Networks on the Internet hand this IP address, server by server, back to your client. DNS client and server software governs all of these interactions, using standardized DNS queries and answers over specific ports and protocols between many different parties. DNS as a whole is a highly distributed database run by everyone with a domain name on the Internet. Right now, that number is in the hundreds of millions. Anyone with their own DNS server is part of that system, which means that DNS is an agreed-upon naming and lookup system run by millions of participants, none of whom you have influence or oversight over, yet all of whom must work together for the Internet to deliver a web page.
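The referral chain described above can be sketched in miniature. This is a toy model, not real DNS: the zone data and the address below are hypothetical (192.0.2.10 is a reserved documentation address), and real resolvers exchange binary messages over port 53 as defined in RFC 1035. But the walk from the root down to an authoritative answer follows the same shape:

```python
# Toy model of DNS delegation: each "server" either answers a query
# or refers the resolver to a more specific authority.
# All zone data here is hypothetical, for illustration only.
ZONES = {
    "root-servers":   {"com.": "refer:com-servers"},
    "com-servers":    {"apress.com.": "refer:apress-servers"},
    "apress-servers": {"www.apress.com.": "answer:192.0.2.10"},
}

def resolve(name, server="root-servers"):
    """Follow referrals from the root until an authoritative answer."""
    for suffix, record in ZONES[server].items():
        if name.endswith(suffix):
            kind, value = record.split(":", 1)
            if kind == "answer":
                return value             # authoritative answer: an IP address
            return resolve(name, value)  # referral: ask the next server down
    raise LookupError(f"no data for {name}")

print(resolve("www.apress.com."))  # root -> .com -> apress.com -> answer
```

Note how no single party holds the whole answer; each server only knows enough to hand the question to someone more specific.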

But wait, I was simplifying. I skipped an entire lower-layer set of transactions involving how Ethernet and IP addresses are tied together and routed locally and on the Internet. In this case, we are talking about two dozen or so top-tier global networks and ten thousand ISPs sending around two billion terabytes of information a day. All of this and we haven’t even gotten started on opening a web page yet.

Now that the browser has an IP address, it can request a web page. The browser establishes a TCP session with the web server by issuing a request to connect, which travels over the web ports and protocols, across the local Ethernet LAN, through the local router, up to the local ISP, over the Internet, and down to the Apress web server. The web server completes the TCP handshake, with more back and forth over the Internet, to eventually establish a connection. Once linked up over TCP, the browser issues an HTTP GET request to the web server. The web server's response is the text HTML code of the web page. The browser interprets that page and starts to paint the screen. The page itself may contain hundreds, if not thousands, of other elements, such as graphics and embedded web pieces, which the local browser interprets. This in turn triggers more HTTP GETs, sometimes to different web servers and different domains.
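Under the hood, that GET request is just structured text, as defined in RFC 7230. A minimal sketch of what a browser might put on the wire once the TCP session is up (the host and path here are placeholders, and real browsers send many more headers):

```python
# Build the plain-text HTTP/1.1 request a browser sends after the
# TCP handshake completes. The Host header is mandatory in HTTP/1.1.
def build_get(host: str, path: str = "/") -> bytes:
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Connection: close\r\n"
        f"\r\n"                # blank line marks the end of the headers
    ).encode("ascii")

print(build_get("www.apress.com").decode("ascii"))
```

The server's reply comes back over the same TCP connection as a status line, headers, and then the HTML body the browser parses.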

This isn’t even taking into account any active content like JavaScript or Flash, which run like mini-programs within the browser and trigger other kinds of Internet calls . Also, I’m not talking about a secure encrypted web page, because a discussion of HTTPS, certificates, and trust would take another chapter.

A Tangled Web of Code

All of this is compounded by the complexity of the code itself. One piece of software in this everyday scenario does most of the heavy lifting: the browser. A single version of Firefox has around 13 million lines of code, written by 3,600 different developers, with over 266,000 specific changes in that version alone. Let's not forget patching in new code changes, which happens about once every six weeks. So we have a single critical piece of the Web that is so complex that its software is already beyond any single person's understanding. And it's not the only piece of software involved, because we still need the operating system, the network stack, DNS servers, web servers, routers, and other networking software. All of these components need to cooperate through a gigantic set of standards. Each of these standards represents hundreds of hours of work by committee, detailed in inches of documentation that programmers must implement correctly in each of the respective software components. To the average user, this complexity is hidden by the interface and underlying infrastructure. Table 2-1 lists some of the major standards involved in a web transaction. Each of these standards is dozens of pages of exacting technical detail.

Table 2-1. Some of the Standards Involved in a Web Transaction

Standard     Defined by
HTTP         RFC 7230
HTTPS        RFC 2818
TLS          RFC 5246
DNS          RFC 1034-1035 and many more
IP           RFC 791
UDP          RFC 768
TCP          RFC 793
CSMA/CD      IEEE 802.3
HTML         RFC 1866 and many more
ASCII        RFC 20
JPG          ISO/IEC 10918
PNG          RFC 2083
CSS          RFC 2318 (text/css media type) and W3C recommendations

Complexity and Vulnerability

Given the size and intricacy of a single IT system, how well do they all work in concert? Many organizations slap together IT infrastructure at the last minute, barely meeting functional requirements. It is rare to see quality or security requirements defined ahead of time. IT infrastructure grows organically as an organization expands, as systems are interconnected, upgraded, patched, modified, and jury-rigged to keep up with relentless business needs. What is left is a convoluted and bewildering infrastructure interacting with its environment in unpredictable and untestable ways. Software bugs, incompatibilities, misconfigurations, design oversights, misread standards, and obsolescence are prevalent within many IT infrastructures of notable size. These unplanned problems can create numerous opportunities for catastrophic sequences of failures that in turn give rise to gaps in security. Given the immense scale of typical IT systems, pervasive security holes are the norm, not the exception.

Researchers on both sides of the law are discovering new vulnerabilities every day. As I write this, the National Vulnerability Database is tracking nearly 75,000 known vulnerabilities in software. A single common browser plug-in had over 300 newly discovered vulnerabilities in 2015. A very popular network device manufacturer had nearly 75 new security holes in the same period. A widespread smartphone operating system had nearly 200 that year. The huge Heartbleed vulnerability of 2014 affected more than half of the Internet's web services. Nearly two years later, people are still patching and cleaning up systems from Heartbleed. These are all disclosed vulnerabilities, which are a subset of all the other vulnerabilities yet to be discovered. The large numbers are not surprising. The more a technology is used, in terms of number of users and time in production, the more valuable a target it becomes. The largest, most popular IT systems are always going to be the ones leading the vulnerability statistics. Unfortunately, there is a reason why these systems are popular targets. These are also the same systems that your organization will be inclined to use the most because they are the most useful, the most familiar, and often the cheapest. It is like the old saying about thermodynamics: You can't win, you can't break even, and you can't quit the game.

Technical Vulnerabilities

How are IT systems vulnerable to attack? To a computer, a number can be data or instructions. Most vulnerabilities occur when instructions are injected into data channels. Common attacks like buffer overflow,1 SQL injection, ping of death, the Bash bug, and cross-site scripting are all examples of this. The system expects user-input data in a safe format, but an attacker takes advantage of neglected input verification and inserts new commands. Since computers are blind to the real world, attackers can easily fool them into executing new orders.
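SQL injection makes the data-versus-instructions confusion concrete. A minimal sketch using an in-memory SQLite database (the table, rows, and lookup functions are hypothetical, for illustration only):

```python
import sqlite3

# A hypothetical application database with one user record.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # Vulnerable: user input is pasted directly into the SQL text,
    # so input like ' OR '1'='1 becomes part of the instructions.
    return db.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name):
    # Parameterized query: the driver keeps the input in the data channel.
    return db.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

evil = "nobody' OR '1'='1"
print(lookup_unsafe(evil))  # the injected OR ran as code: every row returned
print(lookup_safe(evil))    # input treated purely as data: no rows returned
```

The fix is exactly the separation the paragraph describes: keep attacker-supplied bytes in the data channel and never let them rejoin the instruction stream.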

Subsystems that run their own instructions within the main system amplify the vulnerabilities present. Things like active web content (Java, Flash, and ActiveX) or operating system scripting languages (PowerShell, Bash, or batch files) can become new vectors for attack. Systems with complex parsers, like HTML or SQL, can have entire classes of vulnerabilities of their own on top of the software application itself. The more flexible a system becomes, especially when it comes to accepting input carelessly, the more security vulnerabilities it has.
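A small Python sketch of the same careless-input problem (the input string here is hypothetical attacker data): a program that evaluates input with eval() hands the caller a full scripting engine, while a literal-only parser keeps the input in the data lane.

```python
import ast

user_input = "__import__('os').getpid()"  # arrives as "data," runs as code

# Dangerous: eval() executes any Python expression handed to it,
# so this attacker-controlled string runs inside our process.
result = eval(user_input)
print("eval executed attacker input, returned:", result)

# Safer: ast.literal_eval accepts only plain literals (numbers, strings,
# lists, dicts...) and rejects anything containing executable constructs.
try:
    ast.literal_eval(user_input)
except ValueError:
    print("literal_eval rejected the input as non-literal")
```

The flexible interpreter is the feature, and the feature is the vulnerability; the narrower parser simply has less it can be tricked into doing.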

Security technology itself is not immune to these problems. Since complex systems require complex solutions, security controls can have vulnerabilities as well. Since these controls are often the primary gatekeepers to huge treasure troves of other systems, attackers scrutinize them very carefully. Given the complexity of software and the inevitability of software bugs, I find it a warning sign when a manufacturer boasts of impossible-to-hack devices. Another error in judgment is to assume that by making one part of a system secure, the entire system becomes secure. This is called the fallacy of composition; you see it when people expect a computer to be completely safe from attack merely because it has antivirus software or a firewall installed on it. Technology is complex and vulnerabilities are so numerous that you should always expect new attacks.

Attackers Are Motivated

A few years ago, I saw a job ad on a Reddit forum. It offered a “huge salary” with full health benefits, relocation bonuses, free lunches, and two free visits to security conferences a year. It was for a small startup in Saint Petersburg, Russia. They were looking for programmers with a strong C and assembly language background, good knowledge of Windows, and networking experience. The job title was “senior malware engineer.” This wasn’t the first solicitation I’d seen from underground cyber-criminals, but it was the most brazen.

Cyber-criminals can make a lot of money from breaking into systems. It's a safe bet to assume that they're better paid than you are. Because they are the profit center of their organizations, they have the complete support of their leadership. In fact, many hackers are entrepreneurs, working for themselves and totally self-motivated. These individuals do nothing but work all day (and night) finding new ways to break into systems. They are highly organized, with specialists in system penetration, planting malware, managing captured machines, laundering stolen data, and reselling captured systems. They have marketplaces that rival modern e-commerce platforms and use business tools to track "sales" and projects.

Unlike defenders, these attackers are not constrained by rules. They cheat. They steal source code and look for holes. They trick users into running their software or entering passwords on fake sites. They comb social media and build detailed dossiers on the people in your organization. Then they combine that information and come up with devious attack schemes. It's easy to assume that there is an inexhaustible supply of vulnerabilities that attackers can tap into, given enough time or motivation. The defenders are always playing catch-up, always responding to attacks. In fact, cyber-criminals may have already broken into your network as you read this. This is the essence of the assume breach principle.

The Assume Breach Mindset

Not assuming breach is naive; it amounts to assuming perfect security, perfect trust, and perfect information over a long period of time. Just like at Fort Pulaski, such thinking is dangerous. Arrogance is worse than ignorance. It is better to always be on guard, always expecting an attack, than to be complacent and hope for the best. Security professionals must continuously seek to improve their defenses and look for signs of breach. Just like the attackers, defenders should be thinking of new ways that their systems can be hacked.

This doesn't mean being pessimistic about the industry, though many people are. Over the years, I've heard many pundits say that we're losing the war against the hackers. The world hasn't ended yet. We're still using computers and the Internet and getting things done. We still get more value out of using computers and the Internet than we lose to attackers. I see that more systems than ever are now online, and we are still protecting the majority of them. Victories or defeats are really about how you define the winning conditions. At some point, you have to let go and accept some losses. No security system is perfect, and most of us are doing our very best to protect things. I know we'll be always outnumbered, always outgunned. It is a form of asymmetric warfare, where there is a vast landscape of possible targets that you cannot possibly defend all the time. That's what makes it a challenge.

Living in Assume Breach World

What does it mean to accept the eventual failure of security ? It is best captured in Norton’s Law: All data, over time, approaches deleted, or public.2

We know that technology will fail. We know that people will fail. Failure can be defined as a system not conforming to your expectations. So change your expectations. Prepare for failure. Even if you can’t fix the weak links, you can watch them, insure them, plan to replace/upgrade them, and prepare response scenarios for when they break down.

When you do your analysis, assume you won't know everything. Anything that you haven't thoroughly tested should be treated as insecure. Assume that there will be unknown systems on your networks, hidden critical data, data in the wrong places, unexpected connections, and excessive user privileges. Some of the systems most relevant to security will be inconspicuous or subtle. Follow the data flows, look for the systems holding value, and rethink what is critical. Plan for systems to change constantly, and think through what the repercussions of those changes will look like. Build your controls with all of this in mind.

Assume breach is about people as well. Remember that there will always be people in positions of trust who can hurt you. Rigid security policies will be bypassed. Listen to the complaints; they will tell you something.

Assume breach is knowing that you should never fight alone. Find peers to trust and then share information with them. Share intelligence about what you see the bad guys doing. Share data on what worked for you in keeping them out. Build allies.

Lastly, no matter what you do and how much work you put in, there will be risks left over: residual risk. Organizations exist to get things done, to make money, to solve problems—not to avoid trouble. Your job is to assume that everything will go wrong and still help your organization get through it. That is assuming the breach.
