write access, and availability is about ensuring access (the CIA model). Authentication and autho-
rization control access to systems and data, whereas audit controls record access to these elements
(the AAA model). The Trusted Computer System Evaluation Criteria (TCSEC) model is designed to prevent
unauthorized access, modification (write access), destruction (write/delete access), or denial of access to systems and data. Therefore, the same principles used to defend castles can be applied to in-house enclaves by leveraging advances in network bandwidth, firewall, and proxy technologies. The following discussion presents one possible scenario for implementing limited and controlled access points in a local (in-house) computing environment.
The local computing enclave, the other enclaves it connects to, and the associated infrastructure are areas that have a well-defined set of member entities and a set of access rules to define what
entities (people or processes) can reside in the enclave, what entities have access into the enclave,
what entities have access out of the enclave, and what accesses within the enclave are permitted. A
simple example is the Internet (although it is hard to imagine it as an enclave) where any IP entity
can be placed in the enclave, any entity can gain access into the enclave, any entity in the enclave
can gain access out, and connections within the enclave are generally not restricted. The Internet
is like the countryside surrounding the castle: Anyone can move into the area, and they are free to
move about as they please, visiting people and villages to conduct their business. By contrast, the
castle keep was a highly restricted area where a limited number of nobles resided and access to and
from the keep was limited to a handful of trusted individuals (members of the court).
IT resources are placed into enclaves based on their value to the corporation. Although there
can be any number of enclaves within the local computing environment, four are fairly common:
core, internal, extranet, and external. Each enclave has a specific set of security rules that govern
internal operations and accesses from other enclaves. As in the castle, the most valuable assets are
placed in the core enclave, which is protected by a well-defined security boundary, limited access
points (gateways), continuous monitoring, and highly restricted access. Resources in the core
enclave would include critical network and corporate services such as directory, time and name
services, messaging, network management, and backup systems, as well as major corporate data-
bases and other valuable data stores.
Enclaves are governed by a set of security rules that define five specific things:
1. What entities can be located in the enclave
2. How entities interact within the enclave (internal operations)
3. What external entities are allowed access into the enclave
4. What internal entities are allowed access outside the enclave
5. How these activities will be monitored
These rules limit and control the enclave’s boundary access points. For example, in the core enclave
the only entities allowed are critical systems, maintenance and support processes, and system
administrators. Interactions are limited to:
Authentication/authorization traffic between systems and the credential authorities (domain controller, directory services, certificate services, etc.)
Domain naming (DNS) and network time (NTP) traffic between systems and infrastructure servers
Monitoring traffic between systems and the system management stations (Microsoft Operations Manager, IBM Tivoli, HP OpenView, etc.)
Backup traffic between systems and backup services
Audit traffic between systems and audit collection services (Syslog, Audit Collection Service, etc.)
Operations and maintenance traffic between systems and their administrators
External entity access is limited to point-to-point proxy connections. All connections into the
core must originate on an authenticated system and connect to a specific core system using specific protocols. For example, the PeopleSoft front end is allowed to create an Open Database Connectivity (ODBC) connection to the backend SQL server located in the core. This connection must go through an application firewall that only permits this point-to-point connection using ODBC protocols. Or, as an alternative, the front end must use IPSec to connect to the backend through a firewall that
limits this point-to-point connection to the IPSec protocol.
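To make the idea concrete, the following is a minimal sketch of how such point-to-point rules might be represented and checked. The host names, core systems, and protocol labels are illustrative assumptions, not a prescribed configuration; in practice the application firewall itself enforces these rules.

```python
# Minimal sketch of point-to-point access rules for a core enclave.
# Host names, protocols, and rule entries are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src: str        # authenticated source system
    dst: str        # specific core system
    protocol: str   # only protocol permitted on this path

# Default deny: anything not explicitly listed is refused.
CORE_RULES = [
    Rule(src="peoplesoft-frontend", dst="core-sql-01", protocol="odbc"),
    Rule(src="hr-frontend",         dst="core-sql-02", protocol="ipsec"),
]

def is_permitted(src: str, dst: str, protocol: str) -> bool:
    """Return True only if an explicit point-to-point rule matches."""
    return any(r.src == src and r.dst == dst and r.protocol == protocol.lower()
               for r in CORE_RULES)

if __name__ == "__main__":
    print(is_permitted("peoplesoft-frontend", "core-sql-01", "ODBC"))  # True
    print(is_permitted("peoplesoft-frontend", "core-sql-01", "http"))  # False: wrong protocol
    print(is_permitted("random-desktop", "core-sql-01", "odbc"))       # False: no explicit rule
```

The essential design choice is default deny: a connection is refused unless a specific source, destination, and protocol have all been explicitly approved.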
All core system connections to external entities are denied unless explicitly permitted, and
these are limited to point-to-point proxy connections using specific protocols. For example, internal DNS servers forward name resolution requests to specific ISP or Internet-based servers through an application firewall that implements split DNS to hide internal addresses. System administra-
tors are allowed read access to external websites for support and informational purposes; these
connections must go through an HTTP proxy that authenticates the user, logs all accesses, and
prohibits any type of file or script transfer.
The final piece is the monitoring requirements. For the core, all systems are equipped with integrity checkers (such as Tripwire) and host-based intrusion detection/prevention systems (IDS/IPS) configured to automatically alert security/support personnel when security violations are detected.
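As an illustration of the integrity-checking idea, here is a minimal sketch that compares current SHA-256 hashes of monitored files against a stored baseline. The file paths and baseline location are assumptions for the example; a product such as Tripwire adds signed baselines, broader coverage, and alert routing.

```python
# Minimal sketch of a host file-integrity check: compare current SHA-256
# hashes of monitored files against a previously recorded baseline.
# Paths and the baseline file name are illustrative assumptions.

import hashlib, json, os

BASELINE = "baseline.json"
MONITORED = ["/etc/passwd", "/etc/ssh/sshd_config"]

def sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def record_baseline() -> None:
    """Capture the current hashes as the trusted baseline."""
    with open(BASELINE, "w") as f:
        json.dump({p: sha256(p) for p in MONITORED if os.path.exists(p)}, f, indent=2)

def check() -> list[str]:
    """Return violation messages; an empty list means no changes detected."""
    with open(BASELINE) as f:
        baseline = json.load(f)
    alerts = []
    for path, expected in baseline.items():
        if not os.path.exists(path):
            alerts.append(f"MISSING: {path}")
        elif sha256(path) != expected:
            alerts.append(f"MODIFIED: {path}")
    return alerts

if __name__ == "__main__":
    for alert in check():
        print(alert)   # in practice, forward to the alerting/ticketing system
```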
The internal network enclave would have less stringent rules. For example, within this enclave, connections are not limited to predefined point-to-point restrictions, but peer-to-peer connections
between desktop machines are prohibited and all server connections require IPSec authentication.
External connections into the enclave are restricted to point-to-point connections from known
entities (employees, partners, vendors, etc.) on specific protocols but do not require application firewalls. Outbound connections to the Internet permit read and download access to websites
through a proxy equipped with virus and malicious script scanning and detection. Monitoring
with automated alert generation is applied to external enclave connections, and centralized log-
ging is configured for all servers and hosts in the enclave.
This scenario provides a model that organizations can use to define defense-in-depth objectives for their particular computing requirements. This kind of limited and controlled access would have been difficult in the past because of bandwidth restrictions, but increases in network bandwidth and appliance processing capabilities make this scenario plausible today.
Effective Logging, Detection, and Alerting Capabilities
Monitoring is one of the five rules essential to good enclave governance; it is also a critical tactical principle. You can’t keep someone from attacking your systems any more than King Edward could keep people from attacking his castles. All you can do is limit the effectiveness of those attacks with early detection and targeted responses. Monitoring is the equivalent of the castle’s high tower. Effective monitoring makes it possible to detect and react to dangerous activities and attacks before they cause any significant damage.
What Constitutes Effective Monitoring?
Effective monitoring has three primary characteristics. First, it provides near real-time detection
and alerting; second, it is continuous; and third, it provides information with a high degree of
integrity. A monitoring system that tells you “you have been attacked” is worthless. It’s like the guard in the movie Rob Roy who runs to the shoreline and shouts threats at the attackers as they row away across the lake. The damage is already done. After-the-fact information may help you understand what went wrong and make corrections to ensure it doesn’t happen again, but that is little consolation to the business or the customers that suffered a data breach.
Castle towers provided continuous observation; soldiers were posted in them 24 hours a day.
Monitoring systems need to do the same. Hackers attack systems at night, on weekends, and holi-
days because those are the times when no one is actively monitoring those systems. A monitoring
system that does not provide continuous observation and detection is worthless. An attacker will
find and exploit the times when the “guards” are not on duty.
Quality of information is probably the biggest challenge to effective monitoring. There
are three aspects to consider: accuracy, reliability, and relevance. Inaccurate information is
probably more damaging than no information at all because it sends people off on “wild-goose chases” rather than directing resources to the real problem. Not only must the monitoring system accurately record and convey information, but that information must not be alterable. Information that can be tampered with is unreliable and requires the expenditure of resources for validation.
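One common way to make audit information tamper-evident is to chain each log record to its predecessor with a cryptographic hash, so any alteration or deletion breaks the chain when it is verified. The sketch below illustrates the idea; the record fields are assumptions, and in practice records would also be shipped to a separate, centralized log server.

```python
# Minimal sketch of tamper-evident logging: each entry carries a hash that
# chains it to the previous entry, so altering or removing a record breaks
# the chain on verification. Field names are illustrative assumptions.

import hashlib, json

def _digest(prev_hash: str, record: dict) -> str:
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(log: list[dict], record: dict) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    log.append({"record": record, "hash": _digest(prev_hash, record)})

def verify(log: list[dict]) -> bool:
    """Recompute the chain; False means some entry was altered or removed."""
    prev_hash = "0" * 64
    for entry in log:
        if entry["hash"] != _digest(prev_hash, entry["record"]):
            return False
        prev_hash = entry["hash"]
    return True

if __name__ == "__main__":
    log: list[dict] = []
    append(log, {"event": "login", "user": "jsmith", "result": "success"})
    append(log, {"event": "login", "user": "root", "result": "failure"})
    print(verify(log))                      # True
    log[0]["record"]["user"] = "attacker"   # tamper with an earlier entry
    print(verify(log))                      # False
```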
Information relevance is a challenge because monitoring systems can collect huge amounts of
information, much of which is of little value. Much like the tower guard hollering, “The villagers are dancing in the dell!”, it is interesting but hardly threatening. Enclave security rules regard-
ing monitoring must address relevance at two levels. First, what should be logged? For example,
core enclave monitors would include all failed and successful authentications, authorizations, and
accesses as well as all privileged activities. Second, what event or series of events will generate alerts
to security personnel? In other words, what activities constitute abuse, such as someone logging
in using a generic account (e.g., guest, administrator, root)? In the core enclave, both local
system event logging and centralized logging are used to maintain the integrity of the informa-
tion. These audit records are processed and reviewed daily. The internal network enclave would
have similar alerting requirements, but less stringent logging and log review requirements because
the criticality of these systems and the value of the data stored on them are substantially lower than
that of core systems.
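The generic-account rule described above lends itself to simple automation. The sketch below assumes a simplified record format; in practice the records would come from the centralized log store and the alerts would be routed to security personnel.

```python
# Minimal sketch of the alerting rule described above: scan authentication
# records and raise an alert whenever a generic account is used to log on.
# The record format and account names are illustrative assumptions.

GENERIC_ACCOUNTS = {"guest", "administrator", "root"}

def alerts_for(records: list[dict]) -> list[str]:
    """Return alert messages for any logon event that uses a generic account."""
    alerts = []
    for rec in records:
        if rec.get("event") == "logon" and rec.get("user", "").lower() in GENERIC_ACCOUNTS:
            alerts.append(
                f"ALERT: generic account '{rec['user']}' used from "
                f"{rec.get('source', 'unknown')} at {rec.get('time', 'unknown')}"
            )
    return alerts

if __name__ == "__main__":
    sample = [
        {"event": "logon", "user": "jsmith", "source": "10.1.4.20", "time": "02:14"},
        {"event": "logon", "user": "Administrator", "source": "10.1.9.77", "time": "02:15"},
    ]
    for line in alerts_for(sample):
        print(line)   # in practice, route to the on-call security team
```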
Operational Excellence for Security Controls
Alert processing and consistent periodic log reviews are part of operational excellence.
Operational excellence is a crucial component of defense in depth. More than enough good technology is available to secure our systems, but it is only as effective as our ability to properly configure, operate, monitor, and maintain it. In fact, the more capable (i.e., complex) a piece of technology is, the more likely it will fail if not managed properly. A great example is the firewall access control list (ACL). A company Bill worked with was trying to resolve a bottleneck issue with their firewalls. When first installed, the firewalls worked great, but as time went on data flows increased and performance decreased. The problem—14,000+ filter entries! It seems that the company had a reasonable process for adding ACL entries but no process for periodically validating or removing them. Consequently, the ACLs had grown until evaluating them took so much processing that it was impacting network performance. However, that’s only half the story; there were no permanent records of who requested the filter entries, so you couldn’t ask whether they were still needed. Bill’s task was simply to optimize the list so that it could be processed faster!
Apparently, the thousands of security holes the list created weren’t of concern. Poor operational
practices result in poor information security; excellent operations increase observation, attack
detection, and responsiveness.
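The remedy implied by this story is a periodic review process in which every ACL entry carries an owner, a justification, and evidence of recent use. The sketch below assumes those fields exist on each entry (which, in the story, they did not) and simply flags entries that are due for review; the field names and the 90-day threshold are illustrative assumptions.

```python
# Minimal sketch of a periodic ACL review: flag firewall entries that have no
# recorded owner/justification or that have not matched traffic recently.
# Entry fields and the review threshold are illustrative assumptions.

from datetime import datetime, timedelta

REVIEW_AGE = timedelta(days=90)

def entries_to_review(acl: list[dict], now: datetime) -> list[dict]:
    flagged = []
    for entry in acl:
        no_owner = not entry.get("owner") or not entry.get("justification")
        last_hit = entry.get("last_hit")
        stale = last_hit is None or (now - last_hit) > REVIEW_AGE
        if no_owner or stale:
            flagged.append(entry)
    return flagged

if __name__ == "__main__":
    now = datetime(2010, 8, 18)
    acl = [
        {"id": 1, "rule": "permit tcp any host 10.0.0.5 eq 1433",
         "owner": "dba-team", "justification": "PeopleSoft backend",
         "last_hit": now - timedelta(days=2)},
        {"id": 2, "rule": "permit tcp any host 10.0.3.9 eq 8080",
         "owner": None, "justification": None,
         "last_hit": now - timedelta(days=400)},
    ]
    for entry in entries_to_review(acl, now):
        print(f"Review entry {entry['id']}: {entry['rule']}")
```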
Superior Personnel Supervision, Training, and Skills Management
Coupled tightly with operational excellence are personnel supervision, training, and skills man-
agement. You MUST have proficient personnel configuring and operating your security controls. You MUST have sufficient personnel to respond to failures and attacks 24/7, and you MUST have a command structure that can effectively monitor and direct those resources. In recent years, the industry has seen an interesting shift in proficiency. The old “hackers” are retiring and are being replaced by a new generation of system and application operators. The difference between the two is significant; the hackers knew how to troubleshoot and resolve system and application problems,
whereas their replacements (with few exceptions) only know how to operate and maintain systems.
When something goes terribly wrong, external expertise is required to resolve the issue. While
this scenario may be acceptable for routine issues, it is completely unacceptable when the enclave
is under a sustained attack. You need people with the training and expertise to respond in a mea-
sured, proficient, and effective way. Having a well-managed training and skills tracking program
is the only way to ensure this level of expertise.
Supervision is another area that is seriously lacking in most IT organizations. Supervising
highly privileged IT personnel is more than giving directions; it is involvement in people’s lives
and the monitoring of their activities. That’s incredibly difficult to do when you have 40 people to supervise and half of them are on the other side of the country, which incidentally is a fairly
common scenario in today’s business environments. While distributed management might be a
sensible approach for sales and service personnel, it is utter insanity when you are talking about
highly privileged IT administrators. Supervisors need to be aware of how their administrators are
conducting themselves on the job and cognizant of circumstances that might adversely impact
job performance. Dr. Mike Gelles in his paper “Exploring the Mind of the Spy” talks about a
combination of behaviors exhibited by people who eventually go rogue. It’s a surprisingly accu-
rate description of some of the rogue IT people we’ve encountered over the years. Unfortunately,
it’s rare in today’s IT world to find any significant level of behavior monitoring, and there are
plenty of horror stories attesting to this lack. San Francisco network administrator Terry Childs
is a great example. He basically held the city’s data network hostage for over a week by refusing
to divulge the administrator passwords to his supervisors. Who was watching this guy? How on
earth did he get this much control over these resources without anyone noticing? Yes, his conduct
was completely unacceptable, but it was a lack of proper supervision and monitoring that made
it possible.
High Assurance Identity Management
An excellent operational capability must include high assurance identity management, especially
for remote/external connections. Data compromise begins with access, and access begins with
identity. The most effective attack against a system is to become a legitimate user of the system, the second most effective is to pose as a legitimate user, and the third is to exploit a system trust. All of these attacks give the attacker direct access to system data and resources. This is what makes
phishing and other social engineering attacks so popular, and this is why high assurance identity
management is so important.
What Is High Assurance Identity Management?
High assurance identity begins with the vetting process for identity requests—that is, obtain-
ing assurance that the requestors are who they claim to be and have been properly authorized to
receive an identity. The second aspect is identity authentication: the process of validating a presented identity. High assurance identity uses multiple factors such as third-party validations (e.g., Kerberos, RADIUS, PKI), tokens, and biometrics. The third aspect is the assignment of permis-
sions to data and computing resources (i.e., authorization). High assurance identity management
will enforce the principle of least privilege. An entity (person, system, or program) can only get
access to the data and resources required for the proper execution of its duties.
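In code, least privilege reduces to default deny with explicit grants. The sketch below uses hypothetical entity, resource, and permission names purely to illustrate the check; a real authorization service would draw these grants from the directory or identity management system.

```python
# Minimal sketch of least privilege in authorization: an entity is granted
# only the operations explicitly assigned to it; everything else is denied.
# Entity names, resources, and permissions are illustrative assumptions.

PERMISSIONS = {
    # (entity, resource):            allowed operations
    ("backup-svc",   "core-sql-01"): {"read"},
    ("dba-jsmith",   "core-sql-01"): {"read", "write"},
    ("web-frontend", "core-sql-01"): {"read"},
}

def authorize(entity: str, resource: str, operation: str) -> bool:
    """Default deny: permit only operations explicitly granted to the entity."""
    return operation in PERMISSIONS.get((entity, resource), set())

if __name__ == "__main__":
    print(authorize("backup-svc", "core-sql-01", "read"))    # True
    print(authorize("backup-svc", "core-sql-01", "write"))   # False: not required for its duties
    print(authorize("guest", "core-sql-01", "read"))         # False: no explicit grant
```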
Timely Incident Response and Resolution
Defense in depth is designed to absorb and progressively weaken an attack, providing the responder
with time—time to assemble and deploy the resources needed to repel an attack. Castles were designed to facilitate rapid response to attacks. The tops of the walls and the passageways inside
the walls were wide to facilitate the quick movement of troops and equipment, and a cache of
weapons was kept at each defensive position. Because the observation towers overhung the cor-
ners of the wall, commanders could easily observe what the attackers were doing (e.g., they could
see where attackers were placing ladders against the wall) and reposition troops to counter those
efforts. Enclaves need similar response capabilities.
The rate at which automated attacks can compromise systems and propagate themselves is amazing and disconcerting at the same time. The F variant of the Sobig worm spread worldwide in less than 24 hours. The Conficker worm compromised 1.1 million systems in a single day and more than 3.5 million in a week. As alarming as these propagation rates are, research shows that the same infections within a local (in-house) computing environment would propagate even faster. When you couple this with how quickly exploit code appears once a flaw is known, rapid response capabilities become paramount. There are a number of excellent resources on incident response and response planning, so there is little reason to go through them here. The main items
to focus on are the following:
Preparation—Stockpile the required tools, build the required procedures, and train your people in how to use them. Conduct drills to increase proficiency and eliminate bottlenecks. There’s no time for training when you’re in the middle of an attack. Make like a Boy Scout, be prepared! This also means staying aware of the latest attacks and devising methods for countering them.
Short response times—Get resources working on the problem as quickly as possible. An active worm like Conficker can compromise 12 systems a second! You cannot afford to delay
your response. An aerospace company Bill worked with held two days of talks, trying to
decide how to recover from a breach without killing production. By the time they decided
what to do, there wasn’t a system in the company that didn’t have exploit code on it!
Reliable communications—Ensure that all responders can be reached and have multiple methods for information dissemination. For example, an attack that generates high levels of network traffic makes network-based communications nearly worthless, so it is wise to have
a voice conferencing alternative.
Authority to act—Empower the response team to make the hard decisions. In the case of
the aerospace company, the security team had no authority to make decisions that might in