Domain 1

Access Control Systems & Methodology

The Access Control Systems and Methodology domain details the critical requirements to establish adequate and effective access control restrictions for an organization. Access control protects systems, data, physical infrastructure, and personnel in order to maintain their integrity, availability, and confidentiality.

Failure to design, develop, maintain, and enforce access control will leave an organization vulnerable to security breaches. This applies to all types of breaches, whether they are locally or remotely initiated. It is imperative that you, as a security professional, understand the types of controls available, current technologies, and the principles of access control.

TOPICS

•   Control access to systems and data through understanding

•   Applying access control concepts, methodology, and techniques

•   Control techniques and policies

•   Access control administration

•   Identification and authentication techniques

•   Credentialing architecture

•   Design validation

OBJECTIVES

The security architect is expected to:

•   Apply both the hard and the soft aspects of access control

•   Understand controls provided through:

    –   Physical controls

    –   Policy

    –   Organizational structure

    –   Technical means

•   Demonstrate an awareness of the principles of best practices in designing access controls

Introduction

The implementation of access control in most environments has not changed much over the years. Access decisions are often left to the discretion of the information owner. Access control lists are used to distinguish which users will be granted access to a directory and/or a file within the directory, and what level of permissions these users will have in relation to the data being accessed. The availability of files stored on the network is ensured through backups, which represent the last line of defense against maliciously altered or deleted files. Automation of these previously manual controls does not entirely protect data from tampering or loss. Access controls must be properly set; otherwise, information may be inappropriately copied or removed from a system, often without the knowledge of the information owner. Backups must be periodically checked to make sure that all target files were archived. Human interactions with these and other elements of a system can result in losses when activities are not conducted properly.

Automation of access control is also a double-edged sword. It does have benefits, such as allowing a single individual to control access to a multitude of resources. However, the access and control an individual has can also be misused. Malicious code executing in the context of users can seriously impact the confidentiality, availability, and integrity of their resources. The design and implementation of access controls within a system must consider that the controls ultimately must be usable by humans while also providing additional measures to detect, prevent, and correct undesirable resource access.

Access Control Concepts

Security architects should be interested in access control because it has desirable attributes that can preserve the critical information found in line-of-business systems. Systems with logical access controls watch over important information. Logical access controls can:

•   Protect resources from loss or exposure

•   Provide accountability for those accessing the information or system

Arguably, the primary purpose of access control is to protect information from loss and exposure. This is accomplished through techniques that mediate the actions of users on the information within a system. Access control occurs when rules are used to control the type of access a person has to a given resource. Essentially, access control is a way to discover the following:

Who is accessing the information? (the Subject doing the accessing)

What is being accessed? (the Object(s) being accessed)

How might the access occur? (the mechanism(s) used for access)

Being able to answer these questions creates the prerequisites for basic access control functionality. These questions are associated with a particular security policy governing actions within a system. This aids in the identification of a subject’s actions on an object with respect to the permissions and rights granted according to a particular security policy. Some definitions follow:

Subject - The person, entity, or process requesting access.

Object - A resource such as a file, device, or service.

Permissions - The type of access a subject is given. Common permissions include read, write, modify, delete, and execute.

Rights - Special abilities granted to a subject. For example, an administrator has the right to create accounts, while ordinary users do not. Rights influence, through policy, the who, what, and how questions that an access control mechanism answers.

The combined usage of subjects, objects, permissions, and rights forms the foundation of a system’s access control. A security architect must strive to implement access control that conforms to the security policy of a system. Figure 1.1 illustrates access control flow for subjects requesting specific objects. In the diagram, the access control system first determines if the subject has the appropriate rights to access the object. If the subject possesses the correct rights, then a subsequent check determines if the subject has sufficient permissions to view or manipulate the object.

Figure 1.1 - Interaction between subjects, rights, permissions, and objects

Access control coupled with auditing establishes the basis for accountability. Auditing is the process of recording access control actions and is the principal method used to achieve accountability. Reviewing the actions of subjects on particular objects helps managers to determine if inappropriate activity occurred. Figure 1.2 shows how an access control mechanism is used to audit successful and failed access. In the diagram, Alice makes a request to read Doc_A. The Access Control mechanism compares the request from Alice with the Access Control List (ACL) for Doc_A. The ACL indicates that Alice has the permission to read the document. The Access Control mechanism creates an entry in the Audit Log and allows Alice access to the document. Bob also requests access to Doc_A. The Access Control mechanism determines through the Doc_A ACL that Bob does not have permission for the document. The Access Control mechanism creates an audit log entry of Bob’s failed request and informs him that access is denied.

Figure 1.2 - Access control support for auditing
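
The flow in Figure 1.2 can be expressed as a minimal sketch. The following Python fragment (the names and ACL contents are illustrative assumptions) consults the object's ACL, records the outcome in the audit log, and then allows or denies the request:

    audit_log = []

    # ACL for Doc_A: Alice may read; Bob has no entry, so no access
    acl_doc_a = {"Alice": {"read"}}

    def request_access(subject, obj_name, acl, permission):
        # Compare the request against the object's ACL
        granted = permission in acl.get(subject, set())
        # Record successful and failed attempts alike for accountability
        audit_log.append((subject, obj_name, permission,
                          "granted" if granted else "denied"))
        return granted

    request_access("Alice", "Doc_A", acl_doc_a, "read")  # True, access logged
    request_access("Bob", "Doc_A", acl_doc_a, "read")    # False, failure logged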

Access control is the fundamental mechanism that provides for the confidentiality, integrity, and availability of information within an information technology (IT) system. Confidentiality is supported by only allowing access to a particular object for those entities that are explicitly authorized. Granular-level stipulations on how an entity can interact with a particular object can be used to ensure the object’s integrity. The combined services of confidentiality and integrity help ensure that an object is only available to those who are authorized to access it. Access control thus enables

•   Confidentiality: Through measures that protect objects from unauthorized disclosure

•   Integrity: By preventing unauthorized modifications when properly implemented

•   Availability: When integrity is properly enabled

The ability of an access control mechanism to provide the desired security services is predicated upon the correct implementation of a security policy. A system should be governed by a written standard that specifies the rules applicable to the system. These rules are derived from

•   Laws

•   Regulations

•   Industry standards

•   Organizational policies

The compilation of rules applicable to a particular IT system forms the security policy. The security policy addresses managerial, operational, and technical security requirements for a system. A system security policy can be viewed as a structure, such as the one depicted in Figure 1.3. Confidentiality, integrity, and availability establish the foundation of a security policy. Rules formed by laws, regulations, standards, and policy are the primary pillars supporting the policy. It is interesting to note that most rules exist for the purpose of establishing accountability for entry and activity within the system. The rules impart the managerial, organizational, and technical controls that make up the foundation of the system security policy. More often than not, access control and auditing represent the bulk of the technical security within an IT system's security policy. Interpreting the security policy into a correct access control implementation is the responsibility of the security architect, based on the direction of the information owner.

Interpreting security policy is not a trivial matter. If the policy is written in terms that are too general, it might have multiple interpretations or at least give rise to implementation issues. Suppose a security policy contains the statement “Users are not allowed to access information for which they are not authorized.” Does this mean that if someone shares sensitive information with them, they are now authorized? The sharing might contradict some other part of the policy. If the sharing violates another part of the policy, does this mean that the system is flawed? What if someone saves sensitive information into a directory accessible by users not authorized to view the information? If some of these users view the sensitive information, does this mean they violated the policy or that the system failed to properly enforce the policy? Indeed, these issues arise on a daily basis and are not easily defined or enforced. However difficult these issues may be, a security policy is still one of the best ways to ensure adoption and enforcement of an organization's security requirements.

Figure 1.3 - System security policy structure

The main issue with access control implementation is interpreting security policy. A security architect needs to understand some of the science behind access control in order to integrate it with the abstract requirements expressed in security policy. The art of implementing access controls involves merging a logical system, which is rigid in structure, with an organization, which is dynamic. Security architects must anticipate these issues and make interpretations based on their understanding of the organization, the intent of the security policy, and the capabilities of the system for which the access control is being considered.

The process of interpreting security policy is at its core an attempt to balance requirements against available controls. Consider Figure 1.4, which shows the balancing task that a security architect faces to achieve the ideal security policy. The correct balance between requirements and controls is neither too weak nor too restrictive. On the one side, interpretation of requirements must consider the intent of policy. A literal interpretation of policy could fail to provide sufficient coverage or be so pervasive as to be prohibitively expensive. The controls selected should not prevent the organization from accomplishing its mission. Rather, controls selected based on the requirements should enable the organization to continue routine operations. Finally, controls need to be usable; otherwise, they will likely be circumvented by system users.

Figure 1.4 - Balancing Access Control

The mechanisms of access control can be found in a variety of products and at multiple levels within an IT system. The more common products implementing access control include:

•   Network Devices - Routers, switches, and firewalls

•   Operating Systems - Linux, Unix, and Windows

•   Database Management Systems - Oracle, MySQL, and SQL Server

•   Applications - Certificate authorities, encryption software, and single sign-on products

Do certificate authorities implement access control? Well, sort of. It is more correct to say that certificate authorities support access control. What about encryption software? Actually, when an object is encrypted, unauthorized users are denied access to the cleartext information. This has the same effect as not being granted permission to read an object. In reality, single sign-on products do not implement access control either. However, they do facilitate the integration of disjointed access control systems. This reduces the user and management headaches often experienced when switching between systems or components with different access control mechanisms.

Access control at the network level tends to be more connection oriented, such as allowing or disallowing ports and protocols associated with given IP addresses. Most operating systems provide some form of access control by default; typically, the mechanism is limited to the box itself, although at this level an access control mechanism is sometimes shared among workstations and servers in a network. This is not always the case for every commercial operating system, however. Many databases provide capabilities to control access to the data they contain. In some cases, a database management system might also be able to distribute access control functionality among distributed systems, in much the same way as some operating systems.

Some applications contain their own access control mechanisms, which might be as simple as allowing or denying access based on the presentation of acceptable credentials, or as robust as the access control mechanisms found in an operating system or database. Unfortunately, these various access control mechanisms, more often than not, are proprietary and do not integrate easily with one another. This situation complicates the job of the security architect. Furthermore, the variety of access control mechanisms also compounds the problem of trying to determine at which layer or level in the network to enforce the security policy. For instance, if a security policy prohibits the use of unauthorized protocols, should this be enforced at the operating system only, or should it also include all routers and switches? The security architect will need to make judgments about the depth and breadth of access control implementations to meet the security policy. Figure 1.5 illustrates many different access control methods and techniques that can be applied to the various layers of the Open Systems Interconnection (OSI) model.

A variety of access control techniques and policy mechanisms exist. It is interesting to note that many of the access control techniques are designed to implement a particular type of policy. This means that a particular access control technique might not be the best choice in a given circumstance. Unfortunately, Commercial-Off-the-Shelf (COTS) products do not typically provide a method of selecting an access control technique that would be ideally suited for the system, or the particular environment in question. As a result, the security architect is often left to their own devices to make a particular type of access control mechanism fit as best it can when a more desirable mechanism is not available.

Figure 1.5 - Access control within the OSI model

Two important features of access control mechanisms are the Access Control List (ACL) and the ACL repository. The ACL identifies the security attributes of a particular system object. Typically, this will include information about the object owner and other entities having authorized access, associated with the rights granted to each entity. Each entry in an ACL, identifying a subject and the access granted to it, is known as an Access Control Entry (ACE). The ACL repository is used to manage each ACL in the system. Figure 1.6 illustrates these access control features.

Discretionary Access Control

Discretionary Access Control (DAC) is the predominant access control technique in use today. Most commodity systems implement some form of DAC. The underlying concept of DAC is to give an object owner the discretion to decide who is authorized access to an object and to what extent. In this regard, the policy is set or controlled by the owner of a particular object. At its most basic level, a DAC implementation includes the specific permissions to read, append, modify, delete, and execute. The read permission allows a designated entity the ability to load the contents of an object into memory. The permission to append allows an entity to attach new information to the end of an object. Permission to modify an object means an entity can change any and all of the contents of an object. Entities with the permission to delete can destroy an object and cause it to be removed from volatile or nonvolatile memory. The execute permission gives an entity the ability to cause the system to create a new process or thread of execution based on the binary nature of the object.

Figure 1.6 - ACL database
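
As a rough model of these structures (the class names and fields are assumptions for illustration, not any particular product's API), an ACE, an ACL, and a repository might be sketched in Python as follows:

    from dataclasses import dataclass, field

    @dataclass
    class ACE:
        subject: str      # the entity granted access
        permissions: set  # e.g., {"read", "write"}

    @dataclass
    class ACL:
        owner: str                                   # the object owner
        entries: list = field(default_factory=list)  # the object's ACEs

    # The ACL repository manages the ACL for every object in the system
    repository = {
        "Doc_A": ACL(owner="Alice",
                     entries=[ACE("Alice", {"read", "write"}),
                              ACE("Bob", {"read"})]),
    }

    # Resolve Bob's permissions on Doc_A through the repository
    acl = repository["Doc_A"]
    bob_perms = next((e.permissions for e in acl.entries
                      if e.subject == "Bob"), set())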

Implementing DAC can give rise to security problems if the mechanism is not well understood or if it is used contrary to its design. There are three important aspects of DAC that must be understood and questioned by the security architect:

•   Read - What does it mean?

•   Write - What are the implications?

•   Execute - What is running on the box?

Although these aspects are essential elements of DAC, they can reveal apparent flaws in the access control mechanism.

It is essential to understand that the read permission does not mean read-only. It really means read-and-copy. Any subject with the permission to read a file can also make a copy of the same file. This frequently occurs during the reading process. When an application reads a data file, it makes a copy of the contents in memory. Because the user of the application owns the memory space where it is copied, the application user essentially creates a new copy of the data file. Thus, the entity provided with the permission to read a particular data file is now the owner of a copy of the same file. This functionality of DAC potentially empowers an attacker bent on information theft. Thus, read permission may complicate insider threat mitigation efforts.

The difficulty of limiting access to information is shown in Figure 1.7. In this illustration, Bob has permission to read Doc_A. Bob's ACE only allows him to read the document from the disk device that is owned by the system. When Bob reads the document, a copy of it is created on his local system in an area of memory over which he has full control. Bob is now the owner of the new document copy. At this point, Bob can do what he wants with any portion of the document. He could make copies of any section and send them to any output for which he has the appropriate permission.

Figure 1.7 - Read permission challenge

The next important aspect that must be considered is permission to write. Giving an entity the ability to write to a file object allows it to write anything to that object. “Anything” could include a virus, appended to the end. Another problem with this permission is that an entity could also replace all of the data in a file with one byte of information. Suppose a document file includes charts, graphs, and over a megabyte of text. The malicious entity could simply perform an overwrite, causing a multimegabyte file to shrink to one byte in a microsecond. This is similar to having the ability to delete a file. The implication of the permission to write is that object integrity can be affected. Inappropriate granting of the write permission has far-reaching and potentially devastating consequences.

Yet another troubling issue with the ability to write is when the object in question is a directory. Writing to a directory essentially means a subject is allowed to create a new object. In this case, entities with the ability to write to a directory can create any type of object in that directory, with ownership permissions on the newly created object. Consider the situation in Figure 1.8. In this example, Bob can create an exact copy of Doc_A in the same directory by using a new unique name, in this case Doc_D. As the new owner of the object, Bob could grant others access to the document, which may violate organizational policy.

A malicious entity may choose to write objects that are binary executables to any permitted directory, according to the subject’s write permission. Even though an entity might not be given execute permissions for the directory object, this is not a hindrance because the object owner can change any inherited permissions on the new object from the directory to anything he or she likes.

Figure 1.8 - Write permission problem

The third significant DAC consideration is the execution permission. More specifically, it is essential to understand the concept of the context of an executing process. When an entity executes a process, that process typically has access to all objects available to the entity. A program executed by the user has access to all files, interfaces, and other programs running in their context. Typically, an operating system does provide some memory separation between running processes. However, interprocess communications as well as graphical interfaces open avenues for one process to affect the execution of another. This feature of DAC is what gives a Trojan horse the ability to steal information or damage a system. This problem with DAC is well known and studied. Although researchers have proposed a variety of solutions to this problem, their solutions have not seen widespread commercial adoption.

Although the aforementioned problems seem to be flaws in the design of DAC, they are more accurately issues that arise from implementation errors in the access control mechanism. The security architect should take proactive measures to overcome the shortcomings of DAC, which will reduce any risk that might be present. Some strategies for implementing DAC follow.

DAC Implementation Strategies

Overcoming the challenges of the read permission is a difficult task, but mitigations are possible. The efforts of a security architect will require technical and nontechnical techniques. The following are some approaches to consider:

•   Limit access to essential objects only. Ensure user access to resources is restricted to only that which is needed for the performance of their duties. When we think of the concept of least privilege and combine it with the concept of “need to know”, we have an effective mechanism for implementation from an architectural perspective.

•   Label sensitive data. Use standardized file- and folder-naming conventions, as well as headers and footers within documents that provide users with visual clues about the sensitivity of the information. Extended attributes of a file also provide an area to include label information. For instance, the Summary Properties of a file within Windows include editable values such as Title, Subject, Category, Keywords, and Comments that can be used to store label information.

•   Filter information where possible. Use filters to detect inappropriate transfers of sensitive information.

•   Promulgate guidance that prohibits unauthorized duplication of information. Policies and procedures should inform users about the types of information that require protection, limited distribution, or replication. Provide users with periodic training and guidance reminders.

•   Conduct monitoring for noncompliance. Use tools to search for files containing sensitive information or labels in repositories where they should not exist. The Windows Explorer Search tool and the grep utility in Unix are useful for finding particular words or phrases within a file; a minimal scanning sketch follows this list.
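
The search-based monitoring in the last item can be scripted. The following Python sketch (the label strings and the repository path are assumptions for illustration) walks a repository and flags files whose contents carry a sensitivity label in a location where labeled data should not exist:

    import os

    LABELS = ("COMPANY SENSITIVE", "PROPRIETARY")

    def find_labeled_files(root):
        hits = []
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, errors="ignore") as f:
                        text = f.read()
                except OSError:
                    continue  # unreadable file; skip rather than abort the sweep
                if any(label in text for label in LABELS):
                    hits.append(path)
        return hits

    for path in find_labeled_files("/shares/public"):
        print("possible misplaced sensitive file:", path)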

Controlling write actions is essential to protect information and system integrity. Preventing unauthorized modification of resources is the primary means of controlling system integrity. Why do viruses often gain complete access to a system? Excessive permissions on configuration settings and files allow the virus to write to or delete critical files. Implementing the most restrictive permissions, such as read or no access, would prevent a virus from writing to important objects. Consider the access control list displayed in Figure 1.9 for explorer.exe.

Figure 1.9 - Commonly recommended permissions for explorer.exe

This is a common security setting in many Windows systems. In fact, it is a recommended setting found in various security configuration publications. Is there a problem with this configuration? The trouble with this ACL is that the integrity of the file is not protected in the context of the System account or any user in the Administrators group. A malicious process executing in either context could attach a virus to explorer.exe or associate a backdoor with the program. Furthermore, the file could also be deleted altogether. The ACL should specify the Read permission for System and Administrators as well.

So why would many publications recommend an ACL such as this? It is believed that the reason is that it allows updates to binaries to be made on the fly. It supports the Windows Update process and simplifies life for administrators too. The weak ACL eases file-level administration. This is a prime example of a trade-off between security and simplicity. It is challenging to meet both of these goals simultaneously because there are so many executables in the average Windows-based system.

Does an alternative exist? Instead of living with the weak ACL, it is desirable that an automated process be instituted as a supportive measure. Assume that a file system is initially configured with a strong ACL (read permission for all subjects). Microsoft provides both the cacls.exe and icacls.exe tools, which can be scripted to handle ACL and ACE changes for any file. The cacls tool is officially listed as deprecated by Microsoft, but it is still available in Windows 7 and Windows 8, as well as Server 2008/R2 and Server 2012. Microsoft also provides the same functionality through PowerShell, although the knowledge required to use PowerShell for this task is an order of magnitude greater than for either cacls or icacls: one must rely on the Get-Acl and Set-Acl cmdlets to get, show, and set permissions on a folder. However, since there are no cmdlets to help with the actual manipulation of the permissions, the security professional will need to use a few .NET classes and methods to get the job done.

With cacls, just before conducting a system update, run the following from the command line:

        C:\>cacls *.exe /t /e /p administrators:f system:f

This command instructs cacls.exe to edit the ACL for each executable in the C: drive, including subdirectories, and change the ACE for the Administrators group and the System account to full control. This will allow changes to any executable file on the C: drive. When the system update is complete, execute the following at the command line:

        C:\>cacls *.exe /t /e /p administrators:r system:r

This edits the ACL for each program on the drive and reduces the Administrators group and the System account to read, which restores the most appropriate settings. Consider using these commands on other important binary files such as ActiveX controls, libraries, drivers, screensavers, and applets. The primary caveat is that ACL changes must be coordinated with updates, whether via Microsoft Update or some other tool. It is important to note that this example of cacls.exe use disregards inheritable directory permissions. However, this tool does give the capability to accommodate those permission settings too. As with any new security management technique, it is important that testing be conducted first to ensure that the most appropriate configuration is achieved. The equivalent solution using icacls is below:

Step 1: (This command instructs icacls.exe to edit the ACL for each executable in the C: drive, including subdirectories, and change the ACE for the Administrators group and the System account to full control.)

        C:\>icacls *.exe /t /grant administrators:F system:F

Step 2: (This edits the ACL for each program on the drive and replaces the Administrators group and System account entries with read, which restores the most appropriate settings.)

        C:\>icacls *.exe /t /grant:r administrators:R system:R

To illustrate how this would be done with PowerShell, an object, Testfolder, will be created. Permissions will then be assigned to the object, and finally, the permissions that have been assigned will be verified by listing the ACL. (The numbers denote separate lines of PowerShell commands; they are not part of the example and should not be entered when using PowerShell to create objects and assign permissions to them.)

  1. New-Item G:\Testfolder -Type Directory
  2. Get-Acl G:\Testfolder | Format-List
  3. $acl = Get-Acl G:\Testfolder
  4. $acl.SetAccessRuleProtection($True, $False)
  5. $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Administrators", "FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
  6. $acl.AddAccessRule($rule)
  7. $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Users", "Read", "ContainerInherit, ObjectInherit", "None", "Allow")
  8. $acl.AddAccessRule($rule)
  9. Set-Acl G:\Testfolder $acl
  10. Get-Acl G:\Testfolder | Format-List

Encouraging users to make use of access control mechanisms can help limit loss of data due to malicious code. Owners of shared files, such as word processing documents, should set the ACE for all other subjects to read-only. Applying this philosophy will prevent other users from accidentally (or unwittingly, when compromised by malware) deleting a document shared by someone else.

Appropriate access control settings need to be established for all resources within a system. The following are a few more objects for which write permission should be restricted:

•   Configuration Files - Any file used to store configuration information should be set to read-only where updates are not routine actions.

•   Windows Registry - Lock down system registry keys to read-only. This is especially important for the Run keys, which designate software to execute at boot. Run keys are a primary target for malware.

•   Services - If a service is not needed, it should be disabled. If particular users do not need a service, then they should be prevented through the ACL from interacting with it.

•   Data - Follow the concepts of least privilege and separation of duties when assigning permissions to data files.

Solving the execute problem with DAC is by far the most important measure. The execution of unauthorized processes can threaten the integrity of user context, or worse, that of the entire system when the context is that of an administrator or the system itself. The best approach to this problem is to apply restrictive access controls to existing executables and monitor for unauthorized access instances. The following is a list of recommended approaches:

•   Set access control entries for all executable binary files to read-only. This helps to protect their integrity. Examples of Windows-based executables, libraries, and other specialized binaries that should be set to read-only include those with the following extensions: com, exe, scr, dll, tlb, ocx, drv, sys, vxd, cpl, hlp, and msc.

•   Prevent execution from removable media. Prevent applications from automatically running from removable media.

•   Use host-based firewalls. Set the policy to explicitly identify which programs are allowed to connect to the network. Ensure that the associated policy file or registry entries are set to read-only using DAC.

•   Conduct software integrity inventories. Use tools such as Tripwire (open source for Linux-based systems, or Enterprise for Windows-based systems), CimTrak, or Splunk to identify unauthorized changes in programs, and validate the integrity of those that are authorized.1 A minimal hash-baseline sketch follows this list.

•   Monitor executions. Consider implementing specialized tools that can keep an audit of software running on a machine. This can be used to identify users violating policy or malware not detected by antivirus monitoring.
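
The integrity-inventory idea can be illustrated with a short sketch. The following Python fragment (the path and extensions are illustrative assumptions; it is not a substitute for a product such as Tripwire) records a SHA-256 hash of each binary and compares a later sweep against that baseline:

    import hashlib
    import os

    def hash_file(path):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def baseline(root, extensions=(".exe", ".dll", ".sys")):
        # Map each binary under root to its current hash
        return {os.path.join(d, n): hash_file(os.path.join(d, n))
                for d, _, files in os.walk(root)
                for n in files if n.lower().endswith(extensions)}

    before = baseline(r"C:\Program Files")
    # ... later, after an update window or a suspected compromise ...
    after = baseline(r"C:\Program Files")
    changed = [p for p in before if p in after and after[p] != before[p]]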

Nondiscretionary Access Control

Access control mechanisms that are neither DAC nor mandatory access control (MAC) are referred to as forms of nondiscretionary access control. These types of access control still rely on the fundamental concept of subjects, objects, and permissions. However, the association or specifications of these elements are different from DAC and MAC.

•   Role-Based Access Control (RBAC) - It is desirable to limit individuals to only those resources that are needed to support their duties. Ideally, an access control mechanism would be able to ensure that the assignment of privileges to a particular resource does not introduce a conflict of interest or a separation of duties issue. This can be achieved when user access is controlled according to assigned job function or role. RBAC is a specialized access control mechanism providing this capability. The unique quality of RBAC is that rights and permissions are ordered in a hierarchical manner. Privileges on resources are mapped to job functions. This prevents an object from being shared with those not authorized.

•   Originator Controlled (ORCON) - An information owner may desire to control the life cycle of certain types of information. Some of the desired control might concern how long the information remains available or who is allowed to view it. The United States military makes use of ORCON2 designations in paper documents that direct readers not to disseminate the information without the express consent of the originator (McCollum, Messing, Notargiacomo, 1990).3

•   Digital Rights Management (DRM) - Intellectual content such as music, movies, and books needs methods to control who is authorized to access it. Clearly, DAC is of no use in controlling the unauthorized distribution of these types of information. Additionally, DRM must have portability features, because a user might want to access the protected content from different systems or platforms. DRM relies on cryptographic techniques to preserve the authenticity of, and control access to, protected information. Researchers are also investigating the use of multilevel security policies to improve DRM resistance to attackers (Popescu, Crispo, and Tanenbaum, 2004).

•   Usage Controlled (UCON) - Another problem associated with protecting intellectual content involves frequency of access. Suppose a video store desires to rent access to its movies but wants to limit consumers to viewing each rental a maximum of three times. DRM techniques provide measures that attempt to control who can access the content, but they do not control how often. Addressing the “how often” issue establishes the ability to license the frequency of access to protected content. UCON4 is one technique currently being investigated by researchers to control the frequency of access to protected content (Park and Sandhu, 2004).

•   Rule-Based Access Control - A number of different devices and applications provide their own type of access control mechanism in which decisions are made based on some predetermined criteria. These mechanisms use rules to decide if an action is permitted or denied. Rule-based access control is extensively used, but it is not scalable. Firewalls, routers, virtual private network (VPN) devices, and switches are examples of products using rule-based access control.

Authentication is an important aspect of rule-based mechanisms. Subjects of a rule-based mechanism are usually identified as human users or system activity. People authenticated with rule-based access control often use passwords or cryptographic proofs such as a digital certificate. Authentication of system activity relies on operational aspects such as media and network addresses, as well as cryptographic proofs. For example, a firewall makes its decisions based on network addresses.

Rules developed for this type of access control can be complex. DAC makes determinations based on access control lists as opposed to rule-based access control, which evaluates activity. For example, a firewall may evaluate a network connection based on the address, port, and protocol used. This is a much more complicated evaluation than evaluating a subject’s access to a particular object in DAC.

Permissions in rule-based access control are simplistic binary decisions. Either access is allowed or it is not. If the rule is met, then the action is allowed. This is in contrast to DAC, where a degree of access can be permitted, for example, read, write, or modify.
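
A rule-based decision of this kind can be captured in a few lines. In the following Python sketch (the rule set is an illustrative assumption), each rule matches on source address, port, and protocol; the first match wins; and the outcome is a binary allow or deny:

    import ipaddress

    RULES = [
        {"src": "10.1.1.0/24", "port": 443, "proto": "tcp", "action": "allow"},
        {"src": "any", "port": 23, "proto": "tcp", "action": "deny"},
    ]

    def evaluate(src_ip, port, proto, default="deny"):
        for rule in RULES:
            src_ok = (rule["src"] == "any" or
                      ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"]))
            if src_ok and rule["port"] == port and rule["proto"] == proto:
                return rule["action"]  # first matching rule wins
        return default                 # implicit deny, as in most firewalls

    evaluate("10.1.1.7", 443, "tcp")  # "allow"
    evaluate("10.2.2.9", 443, "tcp")  # "deny": no rule matches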

It is important to note that MAC is sometimes referred to as a rule-based access control (Bishop, 2003). It is differentiated within this text because the implementation and functionality of MAC contrasts significantly with other devices and applications that employ rule-based mechanisms.

Mandatory Access Control (MAC)

An organization may have many different types of information that need to be protected from disclosure. Many organizations around the world, both public and private, have employed a variety of Mandatory Access Control solutions. Secure operating systems designed specifically with access control in mind, such as SELinux, have been examined and used as test beds for the development of MAC solutions in countries such as the United Kingdom.5 Starting with Microsoft's Vista operating system, Mandatory Integrity Control (MIC), a core security feature that adds Integrity Level (IL)-based isolation to processes running in a login session, has been used as another form of MAC. This mechanism's goal is to selectively restrict the access permissions of certain programs or software components in contexts that are considered potentially less trustworthy, compared with other contexts running under the same user account that are more trusted.6 The advent of Electronic Health Records, and the issues associated with the management of electronic patient data more broadly within the healthcare industry globally, have made this a key focus area for research and development of MAC models and systems to address the key concerns of confidentiality and integrity. Countries such as Australia have active research communities attempting to address issues in this area.7

The United States government has identified three particular classes of information that require integrity and unauthorized-disclosure countermeasures due to the level of harm that their exposure might cause to the country. These classes, in ascending hierarchical sensitivity, are Confidential, Secret, and Top Secret. All other types of information are considered unclassified and must be protected according to other policies. Suppose an individual is granted a clearance to access Secret information. Such a person could view any Secret information, as well as Confidential information, when he has an appropriate need to know. However, he would not be allowed to access Top Secret information, because his clearance is lower than the sensitivity designation of the classified information. MAC was devised to support the concept of access based on a subject's clearance and the sensitivity of the information. The fundamental principles of MAC prevent a subject from reading up and writing down between classifications (Bertino and Sandhu, 2005).

MAC functions by associating a subject’s clearance level with the sensitivity level of the target object. It is important to note that systems supporting MAC implement DAC as well. The key to the proper functioning of MAC is the use of specialized labels on each system object. The label specifies the highest classification for the particular object. The system protects the labels from alteration. A subject must have a clearance equal to or greater than the sensitivity of the target object. Furthermore, the system relies on DAC methods to determine if the subject has been granted the permission to interact with the object. Figure 1.10 illustrates the association between clearance, sensitivity levels, and the use of DAC. In the figure, Alice interacts with the system at the Secret level. Although she is the owner of a document within each sensitivity level, MAC prevents reading to higher levels and writing to lower. This aspect of MAC prevents the flow of information from a higher classification to one that is lower.

Figure 1.10 - Classifications and sensitivity levels in MAC

The important feature of MAC is its ability to control information flows. Subjects with higher clearances are permitted to read information at a lower classification level. Thus, a subject with a Secret clearance is permitted to access information at the Confidential level. However, MAC prevents a subject with a Secret clearance from writing information to an object at the Confidential level. This restriction of writing to a lower classification level prevents the accidental or intentional flow of information from a higher classification to a lower classification. This helps protect against policy violations or unauthorized exposures of the information to those without a need to know. Obviously, a subject with a lower clearance cannot read objects with a higher classification, but they are permitted to write information to a higher classification. This has the somewhat interesting consequence that a user can write information up, but will have no way to reread what was written. Figure 1.11 is another way to view the read and write properties of MAC regarding clearance and sensitivity.

Figure 1.11 - Sensitivity, Classification, reading, and writing matrix of MAC
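
The matrix in Figure 1.11 reduces to two comparisons. The following Python sketch captures only the label check described above (the accompanying DAC check on permissions is omitted): a subject may read at or below its clearance and write at or above it:

    LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

    def mac_permits(clearance, sensitivity, operation):
        c, s = LEVELS[clearance], LEVELS[sensitivity]
        if operation == "read":
            return c >= s  # no reading up
        if operation == "write":
            return c <= s  # no writing down
        return False

    mac_permits("Secret", "Confidential", "read")   # True: reading down is allowed
    mac_permits("Secret", "Top Secret", "read")     # False: no reading up
    mac_permits("Secret", "Confidential", "write")  # False: no writing down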

Suppose a manufacturing organization desires to use MAC as a way to protect its proprietary products. Manufacturing of proprietary products involves many people along the way, but not everyone needs to be aware of every step. Separate classifications are established across the life cycle of a product, covering the materials used, fabrication techniques, and experimental improvements. Given this scenario, three classifications for our hypothetical manufacturing company are proposed:

Development - Those involved in research and development of new products

Processing - Individuals involved in the proprietary assembly of the components

Components - Ordering and storage of the base components

Figure 1.12 describes the interaction between the classification levels and product life-cycle activity. In this example, subjects who order the base components have no idea about quantities used to create a particular product, because they write the quantity from their components classification to the processing classification. Similarly, they have no comprehension of the processing and development activities the components are involved in. Those involved in processing consume select components to create different products depending on the particular process used. Processing is prevented from reading the quantities used for development. The output from processing is not accessible by those with a components clearance, but it is available to those with the development clearance. Workers in the development arena create experimental processes and consume various components to devise new products. The experimental output in the development classification is not accessible by those with a processing or components clearance.

Figure 1.12 - Use of MAC in a business environment

Least Privilege

Consider an organization that does not restrict access to personnel records. Given a lack of access control, anyone browsing through the records could obtain privacy-protected information or information on compensation. What if compensation information is limited to only the people in the finance department? This is better, but it is insufficient. Should the people who receive payments from customers, also known as accounts receivable, also have access to an individual's salary information? This is obviously unnecessary, as the job function of people working in accounts receivable has nothing to do with payroll. As such, one would expect an appropriate use of the concept of least privilege to be implemented, preventing accounts receivable personnel from browsing payroll information. Limiting access to sensitive information is the crux of information security.

Implementing least privilege implies that an individual has access to only those resources that are absolutely necessary for the performance of their duties. In practice, however, implementation is often fraught with difficulties. More commonly, the access granted is typical of what is seen in Figure 1.13. It is evident from the figure that the user is granted rights and permissions beyond what is needed to conduct his tasks. Why does this occur? Some of the reasons could be:

Figure 1.13 - Common misallocation of privileges

•   Lack of explicit definition of duties - Neither the user nor manager has a clear grasp or definition of the duties assigned to the individual.

•   Weak internal controls - Where explicit duties are known, changes in duties or access controls on the system may not be periodically reviewed for conflicts.

•   Complexities in administration - In very large, distributed organizations, it is difficult to know the access limitations that should be imposed when access control is centralized.

Thus far, the complexities of least privilege, from the aspect of information access, have been considered, but are there considerations that extend beyond information? Assume that an organization has a centralized database containing sensitive information. Does it seem reasonable that each database user is allowed to have database administrator rights? This scenario would most likely violate least privilege because users would have the ability to easily access information outside of the scope of their duties. If the duty of a database user does not include administration functions, then following the concept of least privilege, a user would not be provided database administrative rights.

Another factor to be considered is access to system resources. Should a user be given access to all network resources? Suppose the sales team, as shown in Figure 1.14, has wireless access into the network. Assume that they require wireless access because they rely heavily on their laptops as their primary computing platform. The finance department is contemplating whether to connect its new laptops to the network using wired or wireless access. Does it seem reasonable that users in the finance department should also be allowed the ability to connect to the wireless access point? It is doubtful that the individuals in the finance department have a business need to connect to the wireless access point. Given this perspective, preventing their ability to connect to the wireless access point and reverting to wired access would follow the concept of least privilege.

Figure 1.14 - Least privilege dilemma in network resources

Least Functionality: A Cousin of Least Privilege

Limiting access to resources is fairly straightforward. If an individual needs to run a tool, then access to the tool should be allowed. However, some tools contain functionality that can be damaging. In some cases, an acceptance of risk is necessary given the need to use the tool. When possible, the functionality of a tool should be reduced to mitigate actual or potential risk.

Consider the issue of phishing. A user receives an e-mail from what appears to be his or her personal banking institution. The message claims that the account password must be updated or access to the account will be lost. The “phishy” part is a hyperlink to the individual's bank. Protecting unsuspecting or uninitiated users from these types of attacks can be accomplished through simplistic measures. Many e-mail clients provide the capability to display plaintext information, excluding the rich or hypertext content. Configuring the e-mail client to display only plaintext exemplifies the application of least functionality. The sacrifice is a loss of aesthetically pleasing e-mails, but the gains in preventing individuals from unwittingly facilitating identity theft are substantial.

A security architect must consider design issues related to least privilege. Ideally, a system will provide technical methods for enforcing least privilege. The use of access control mechanisms will be the architect’s primary method to design in the ability to implement least privilege. Consider the following techniques to implement least privilege:

•   Access Control Lists - Use network- and system-based access controls to allow or deny access to network resources.

•   Encryption - Using cryptographic measures for network traffic prevents surreptitious gathering of information that a user is not authorized to access.

Nontechnical measures should also be implemented to support least privilege. Although an architect may not be responsible for these aspects, they should be promoted and considered nonetheless:

•   Define data and associated roles - Sensitive data requiring least privilege implementations should be identified. Information owners should explicitly identify the individuals or roles that are authorized access to the information.

•   Sensitive data-handling procedures - Rules on the use of data exiting the system should be explicitly identified. Data is commonly moved from systems through removable media, electronic sharing, and printing. Policies should establish acceptable circumstances and methods of transferring sensitive data. Ideally, procedures will also explicitly identify the best methods to securely transfer information.

•   User education - All users handling sensitive media should be made aware of the proper methods used to handle the information. They should be instructed on how to identify transfers of information that might violate least privilege. Obviously, the necessary tools used to facilitate secure transfer should be available for their use.

A security architect following the concept of least privilege designs a system that limits resources to only those subjects who require access in the performance of their duties. Resources of concern include files, devices, and services. Access control and encryption are two techniques that can be used to enforce least privilege. User role identification, data sensitivity, and user education are important factors supporting least privilege implementations.

Separation of Duties

One way to enforce accountability is through techniques that prevent a single individual from circumventing internal controls. Suppose Mr. Smith is allowed to make purchases for Rockit Enterprises. He orders 153 ACME widgets. Assume Mr. Smith is also allowed to sign for the receipt of the products. Upon receipt, he modifies the paperwork to reflect an order and receipt of 100 ACME widgets. As a result of the modification, nobody is aware of the 53 widgets pocketed by Mr. Smith. This situation occurred due to a lack of separation of duties. Conceptually, separation of duties precludes an individual from perpetrating fraud. With sufficient separation, a single person should not have the ability to violate policy without colluding with at least one other individual. From a system security standpoint, separation of duties is necessary to ensure accountability for actions taken in the system. Weaknesses in separation of duties are identifiable by the excessive assignment of system privileges, which could allow an individual to perpetrate fraud and avoid detection.

Every person within an organization has at least one role. Ideally, all roles within an organization are discrete, which means that every role is unique. It is also desirable for every individual to be assigned to only one role. However, this is seldom the case, especially when there are staffing shortages. It is also common to see multiple people in the same role. The goal of separation of duties is to prevent the perpetration of fraud. To meet the goal of separation of duties requires confidence that the rights and permissions assigned to subjects are correct, without conflicting overlaps between roles.

Figure 1.15 - Assembly line work

A role is nothing more than a job function within the organization. The duties assigned to individuals define their role. Ideally, they are assigned tasks that do not involve a separation of duties issue. An individual’s role fits into the patterns of activity supporting the mission of the organization. Some of the patterns will involve workflows consisting of tasks that are linked together. An example of a workflow is the assembly line seen in Figure 1.15. Each station on the line adds components or makes modifications until a final product emerges at the end. The tasks and duties of each worker are naturally segregated by the workflow process of the assembly line. Thinking in these terms regarding security in information systems, the security architect can find ways to build segregation of duties into organizations as well.

Consider the work of an information systems auditor. A high-level overview of the job role entails

  1. Enumeration of system requirements

  2. System assessment against requirements

  3. Report of findings and recommendations

Think about what the reporting aspect of the role might involve. After assessing a system, auditors must compile a document for perusal by the management staff. A document workflow of this process is presented in Figure 1.16. Some organizations use tools that facilitate or even define the type of process seen in the figure.

Figure 1.16 - Document workflow

The diagram depicts the auditor generating a report that is intended for review by a supervisor. The document is sent to a technical editor, who makes final adjustments for style and grammar. The final document is then delivered to the customer. This simplistic workflow involves three people with a single document. There are three very different roles - that of an auditor, supervisor, and technical editor - involved in the workflow. All three persons have access to the same data, but their duties are quite different. Therefore, following the paradigm of workflows in a system is helpful when defining individual user roles.

A system does not define the role of a person. Even when workflow tools are used, it is important to realize that people often have other tasks outside of a system. So how can job duties or roles be used to identify separation of duties within a system? This can be accomplished by establishing the necessary separation between the combined rights and permissions granted to subjects. In this regard, a role is composed of the following:

•   Rights - The type of actions an individual is permitted to make in the system

•   Permissions - Access granted to information shared or used in job-related workflows

The discussion thus far suggests that a role is the aggregate of a subject’s rights and permissions. Figure 1.17 presents an ideal situation, in which subject roles work together, but do not overlap. This example epitomizes the perfect separation of duties situation.

Reality is usually less than ideal, however. In practice, it can be difficult to determine the exact rights and permissions an individual needs for the job. Ensuring separation of duties becomes more difficult when people are assigned multiple roles, which creates greater potential for conflict. Figure 1.18 shows the reality of separation of duties. The overlap between roles indicates that one role has excessive rights or permissions compared to those of another.

Figure 1.17 - Ideal role assignments

Figure 1.18 - The challenge of role assignments in reality
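
Treating a role as the aggregate of its rights and permissions suggests a simple way to look for the overlaps shown in Figure 1.18. The following Python sketch (the role definitions are illustrative assumptions) flags pairs of roles whose privilege sets intersect:

    from itertools import combinations

    roles = {
        "developer": {"write:source", "execute:compiler"},
        # "read:audit_logs" below is excessive and overlaps the security officer
        "administrator": {"right:create_accounts", "write:system_config",
                          "read:audit_logs"},
        "security_officer": {"read:audit_logs"},
        "ordinary_user": {"read:data", "write:data"},
    }

    # Any shared right or permission between two roles is a potential
    # separation-of-duties conflict worth reviewing
    for (a, pa), (b, pb) in combinations(roles.items(), 2):
        overlap = pa & pb
        if overlap:
            print(f"review overlap between {a} and {b}: {overlap}")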

A broad observation can be made regarding the relation between rights and permissions. Those assigned elevated rights in a system should have fewer permissions on the data involved in workflows. This is necessary because those with elevated rights can potentially compromise the confidentiality, integrity, and availability of workflows. Having this ability could enable an individual to perpetrate a fraud. Consider a scenario with the following roles:

Images   Developer - Individuals within this role are responsible for developing or maintaining applications for organizational users. Their creativity enhances the usefulness of an information system. They should not have access to the production system.

Images   Administrator - This role ensures the system is available for organizational use. Administrators are commonly tasked with managing user accounts and resources. Their duties do not ordinarily require access to or manipulation of sensitive organizational information. However, their powers of access into the system are such that they can be fairly characterized as having access to all system resources and data.

Images   Security Officer - Monitors systems for misuse. Individuals in this role commonly review audit logs and the output from security tools such as those used for intrusion detection and vulnerability assessments. They are the information assurance sentinels tasked with protecting a system’s security services.

Images   Ordinary User - It is most likely that this role will span a plethora of subordinate roles where one type of user is separated from another. Users are the primary consumers and manipulators of organizational data. An information system exists primarily to connect ordinary users with the data they need to perform their assigned duties.

Assume that these roles are mutually exclusive; that is, a user in one role should not have the ability to perform the functions of another role. This represents an ideal implementation of separation of duties. Now, suppose that weaknesses in the assignment of separation of duties occur, as shown in the following tables.

As a general rule, developers should not have access to a production system. When this rule is followed, separation of duties is not an issue. However, when the conflicting roles shown in Table 1.1 are coupled with a disregard of the aforementioned rule, problems can occur. If a developer is assigned conflicting roles, a situation is created in which data theft or system disruption becomes possible, and it will be difficult to track violations.

Conflicting Role - Potential Violation

Administrator - Can circumvent most, if not all, security controls through software changes such as the deployment of a malicious driver acting as a rootkit.

Security Officer - Security events generated by deployed malicious software would be ignored. Developers in this capacity would be checking their own work and could easily cover up a fraud.

Ordinary User - With this level of access, a developer could create a covert channel allowing access to unauthorized information.

Table 1.1 - Inappropriate Roles Assigned to a Developer

By default, an administrator has the ability to access the most sensitive information within a system. A corrupt administrator has the potential to cause significant damage or expose a substantial amount of data. The substantial degree of access associated with this role necessitates a commensurate degree of monitoring. However, when this role is inappropriately merged with other roles seen in Table 1.2, it becomes possible for administrators to avoid critical monitoring.

Conflicting Role - Potential Violation

Developer - This has the same effect as a developer given administrator rights. It is absolutely the worst possible violation of the principle of separation of duties.

Security Officer - Administrators with this role would be checking their own work and could easily cover up security violations generated through monitoring activity.

Ordinary User - If granted ordinary user access, administrators might be able to bypass monitoring activity specifically designed to identify administrator abuse.

Table 1.2 - Inappropriate Roles Assigned to an Administrator

Security officers are concerned with monitoring and enforcement of a system security policy. In this regard, there should be little or no interaction with data workflows in the system outside the scope of their duties. Providing a security officer with additional abilities shown in Table 1.3 would make it difficult to detect their fraudulent activity. Corrupt security officers given excessive privileges would most likely be able to hide their activity.

Conflicting Role - Potential Violation

Developer - Access violations due to malicious software developed by the security officer would be covered up.

Administrator - Changes in accounts or system policies might go unnoticed, and most likely unreported.

Ordinary User - Attempts to access unauthorized information would not be reported.

Table 1.3 - Inappropriate Roles Assigned to the Security Officer

The focus of the role of an ordinary user is to facilitate a job function as a workflow. The primary tasks involve the creation, manipulation, and consumption of system data. The assignment of conflicting roles as seen in Table 1.4 would enable ordinary users to either perpetrate a fraud or prevent their malicious activities from being detected.

Conflicting Role - Potential Violation

Developer - A user could craft specialized code allowing backdoor access into the context of other system users.

Administrator - A user could change workflow data with this level of privileged access.

Security Officer - Violations made by the user could be covered up or disregarded.

Table 1.4 - Inappropriate Roles Assigned to an Ordinary User

The previous scenario suggests an intuitive method of separating duties for individuals accessing a system. An initial separation determination can be made based on an individual’s job function with respect to the system. An individual with management or operational influence on the system should have reduced interaction with system data. In this regard, there are two broad categories that can be used as the initial basis for separation of duties:

Images   Individuals responsible for operational aspects of the system - Within this category, duties are further decomposed according to functions that might enable circumvention of accountability.

Images   Those primarily interacting with system data and information workflows - Users are separated according to the type of data they interact with.

Security design efforts should consider various aspects that could affect separation of duties. At a minimum, it should be possible to enforce separation of duties through the user access control mechanisms, whether the designation is manual or automated. A system should have sufficient administrative flexibility to accommodate the following aspects:

Images   Identify each explicit role - Decompose all system use and management functions using a process similar to that shown in Figure 1.19. From there, examine each function and consider if further decomposition is needed to ensure appropriate separations to support workflow processes. Collect users into groups or roles according to the access control supported by the system. Attention should be given to disjoint access control. This occurs when multiple access control systems exist that do not share information. It is necessary that tracking of user grouping between the various access control systems be coordinated to avoid separation of duty issues. Document the process used to arrive at the segregations for repeatability purposes. The results of the segregations should also be documented.

Images

Figure 1.19 - Decomposition of user community into roles

Images   Assign appropriate permissions - Be aware of ways in which a role may violate segregation of duties (SoD). For each grouping, consider the rights necessary for users to accomplish their tasks. Within a system, rights may be cumulative; a user assigned multiple roles may end up with excessive rights, as the sketch following this list illustrates. One way to counteract this scenario is to assign a user a specific account with properly separated rights.

Images   Avoid unnecessary rights - When privileges are cumulative, ensure excessive access is not inadvertently granted.

Images   Mitigate workflow violation potentials - Consider situations in which an identified role might be able to affect a workflow outside the scope of its assigned duties.
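
To make these checks concrete, the following Python sketch aggregates the rights a user accumulates across multiple roles and flags any pair of assigned roles that the organization has declared conflicting. The role names, rights, and conflict pairs are purely illustrative assumptions, not part of any particular product:

    # Sketch: detecting cumulative rights and conflicting role pairs.
    # Role names, rights, and the conflict matrix are illustrative only.

    ROLE_RIGHTS = {
        "developer":        {"modify_code", "deploy_test"},
        "administrator":    {"manage_accounts", "manage_resources"},
        "security_officer": {"read_audit_logs", "run_vuln_scans"},
        "ordinary_user":    {"read_workflow_data", "write_workflow_data"},
    }

    # Role pairs the organization has declared mutually exclusive
    # (compare with Tables 1.1 through 1.4).
    CONFLICTING_PAIRS = {
        frozenset({"developer", "administrator"}),
        frozenset({"developer", "security_officer"}),
        frozenset({"administrator", "security_officer"}),
        frozenset({"administrator", "ordinary_user"}),
    }

    def aggregate_rights(assigned_roles):
        """Union of rights: within a system, rights may be cumulative."""
        rights = set()
        for role in assigned_roles:
            rights |= ROLE_RIGHTS.get(role, set())
        return rights

    def find_conflicts(assigned_roles):
        """Return every declared-conflicting pair present in the assignment."""
        assigned = set(assigned_roles)
        return [pair for pair in CONFLICTING_PAIRS if pair <= assigned]

    assignment = ["developer", "administrator"]   # a Table 1.1 violation
    print("Cumulative rights:", aggregate_rights(assignment))
    print("SoD conflicts:", find_conflicts(assignment))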

Organizations with constrained resources will find it difficult to fully achieve separation of duties. This is especially true in organizations with a small staff, where it is common to find an individual assigned to multiple roles. Although this situation makes separation of duties difficult to achieve, the following techniques can help identify potential fraud:

Images   Assign accounts on a per-role basis - An individual should have a separate account for each role used. The rights corresponding to separate roles should be separated to the greatest extent possible.

Images   Prevent those with multiple roles from reading and writing to the same storage area - Where possible, prevent an individual with multiple roles from writing information with one role into an area that could be read by another role. Ideally, the permissions for the roles should be mutually exclusive.

Images   Auditing is vital - Consider implementing object-level auditing for individuals with multiple roles. Identify key areas where abuse might occur, and implement multiple methods to monitor for violations.

Images   Conduct more frequent evaluations - Use external system auditors and more frequent internal audits of the system to ensure that all controls are functioning properly. Audit workflows to assess any improper activity.

As stated earlier, the primary purpose of separation of duties is to ensure that an individual is unable to perpetrate fraud and simultaneously avoid accountability. However, separation of duties can also be a tool used to identify omitted job functions. Various system-related duties and organizational workflows require particular elements that must be filled by an individual to ensure a particular task is completed in its entirety. When a key element of a task is missing, it might cause a critical process to fail at an inopportune time. Missing task elements create a gap in separation of duties. Consider the following scenario depicted in Figure 1.20. In this situation, suppose one administrator is assigned to manage operating systems, and another administrator is tasked with managing networking equipment. The system contains a Linux box running a firewall application at one of the network borders. The operating system administrator views the firewall as a network device because it is functioning as a network device. In contrast, the network administrator views the firewall as an operating system because it is built with commodity hardware and software. Neither has taken management responsibility of the firewall. This represents a gap in separation of duties.

Images

Figure 1.20 - The orphaned Linux firewall

Evaluating separation of duties involves identification of inappropriate and insufficient assignment of rights and permissions related to subject roles in a system. A security architect must consider all of the possible duties within a system and ensure that appropriate separations are identified and that capabilities exist for proper enforcement. An evaluation of separation of duties will look for

Images   Overlaps - The assignment of roles that conflict with one another. Overlaps can be identified by comparing the rights and permissions assigned to any given account with those of another account that should have a distinct role. Where it is possible for a fraud to be perpetrated, another role should exist that can detect the abuse.

Images   Gaps - Roles should be designed to accommodate all operational aspects of the system and organizational workflows processed. Aspects of system management and workflows that orphan critical tasks represent gaps in separation of duties. Unassigned or abandoned tasks can potentially jeopardize the security services of a system.
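
Gap detection can be sketched in the same spirit. The fragment below, again with invented task and role names, compares the tasks a system must support against the union of tasks covered by all defined roles; anything left over is an orphaned task, such as the unmanaged firewall of Figure 1.20:

    # Sketch: finding orphaned tasks (gaps in separation of duties).
    # Required tasks and role coverage are illustrative only.

    REQUIRED_TASKS = {"manage_os", "manage_network", "manage_border_firewall"}

    ROLE_COVERAGE = {
        "os_admin":      {"manage_os"},
        "network_admin": {"manage_network"},
    }

    covered = set().union(*ROLE_COVERAGE.values())
    print("Orphaned tasks:", REQUIRED_TASKS - covered)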

It is interesting to note the complementary nature of least privilege and separation of duties. Much like the components of the bridge shown in Figure 1.21, the functional aspects of these security principles influence one another. Least privilege is necessary for the proper functioning of separation of duties. Without least privilege, it is not possible to provide separation of duties. Excessive assignment of rights or permissions violates the concept of least privilege and can provide an avenue to exploit a weakness in separation of duties. The misalignment of duties, in the case of an overlap, suggests that one role is in possession of too many privileges. Overlaps in duties violate least privilege. Thus, harmony between least privilege and separation of duties is achieved when rights and permissions are properly balanced in support of the system security services.

Images

Figure 1.21 - The mutual influence of least privilege and separation of duties

Ideally, the design of a system allows easy implementation of separation of duties. However, this is often not the case, and the security architect must consider design aspects that will provide the necessary assignment of rights and permissions while preventing a role or individual from concealing a fraud. Assigning individuals to multiple roles is a common practice fraught with the possibility of inadvertent violation of separation of duties. The totality of a subject’s access should be compared with all other subjects to identify overlaps and gaps. Overlaps violate separation of duties, while gaps represent orphaned aspects of critical tasks that may result in a security control weakness.

Architectures

The design of the security functions for a system should support its security policy. A well-designed system will be capable of making the necessary access decisions. Ideally, the system will fully support a defined security policy. In reality, access control systems are often capable of supporting a subset of the security policy. Nontechnical methods such as procedures and other operational aspects must be relied on to ensure that a security policy is met. To the greatest extent possible, the technical functions in a system should be designed to automate support for the security policy.

An architect must consider in what manner a subject should be allowed to access which objects in the system. The security kernel and the security reference monitor act as the electronic gatekeepers for the system by mediating a subject’s access to authorized objects. Each object request from a subject is provided to the reference monitor, which determines whether the request is allowed and what degree of access should be permitted. Joint operations of the security kernel and reference monitor should be established to support the system security policy. In this way, a subject will be allowed or denied access according to the security policy enforced by the security kernel and reference monitor.

The security of an entire system depends on the joint properties of the controls implemented; that is, the security of a system is the sum of its collective security mechanisms. All of the technical security controls in a system are collectively referred to as the Trusted Computing Base (TCB). In this regard, the overall security of a system is no stronger than the most vulnerable components of the TCB. Figure 1.22 provides an illustration of common components that comprise the TCB of most enterprise systems.

Images

Figure 1.22 - The trusted computing base

A security kernel is the collection of components of a TCB that mediate all access within a system. It is important to note that a security kernel may be centralized or distributed in nature. For instance, a router with access control capabilities has a self-contained security kernel. In contrast, a distributed system that implements a domain security model relies on the proper operation of different devices and operating systems, which are collectively recognized as the security kernel. Because the security kernel is responsible for mediating access requests, it is a choke point in the system. Any failure in the security kernel will affect the supported system. Ideally, the components of a security kernel are uncomplicated and made up of a small amount of code, which allows for easier analysis to identify any flaws that might be present. The integrity of the security kernel’s components is absolutely critical. The architect should make every effort to prevent unauthorized modifications to the kernel as well as provide mechanisms to monitor for any malicious changes.

The most common functions of the security kernel include authentication, auditing, and access control. The operational aspect of the security functions is referred to as the security reference monitor. Generally, the security reference monitor compares an access request against a listing that describes the allowed actions. Figure 1.23 provides a brief overview of a security reference monitor.8

Images

Figure 1.23 - Functional aspects of a security reference monitor
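
In its simplest software form, a reference monitor is a single choke-point function that consults an authorization listing before any access proceeds. The minimal Python sketch below conveys the idea; the ACL structure and names are illustrative assumptions only:

    # Sketch: a toy security reference monitor mediating subject access.
    # The ACL maps (subject, object) pairs to the set of allowed actions.

    ACL = {
        ("alice", "payroll.db"): {"read"},
        ("bob",   "payroll.db"): {"read", "write"},
    }

    def reference_monitor(subject, obj, action):
        """Mediate every access by comparing the request against the ACL."""
        allowed = action in ACL.get((subject, obj), set())
        # Every decision is auditable: who, what, and the effect.
        print(f"AUDIT subject={subject} object={obj} "
              f"action={action} allowed={allowed}")
        return allowed

    if reference_monitor("alice", "payroll.db", "write"):
        print("access granted")   # not reached: alice may only read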

Authentication, Authorization, and Accounting (AAA)

The acronym AAA refers to Authentication, Authorization, and Accounting. It is supported by RFC 2904,9 which specifies an authorization framework for AAA. The term is heavily used in discussions of the access control mechanisms built into networking equipment and appears throughout many IETF RFC documents.

The design of access control within security architectures is driven by system usage and requirements. In some cases, it is necessary to retain tight centralized management of the access control mechanisms. In other cases, a distributed or decentralized technique is best employed to meet organizational needs. Enabling access control between systems from collaborating organizations requires yet another approach, which is considered a type of federated access control. The differentiating factor among the potential architectures is the point where an access control decision is made.

Centralized Access Control

An access control system that is centralized relies on a single device as the security reference monitor. Authorization and access control decisions are made from the centralized device, which is referred to here as the access control server (ACS). This means that login as well as resource access requests are handled in one place. Figure 1.24 describes the fundamental processes and decisions that centralized access control systems follow. There are three architectural approaches to achieving centralized access control. Each method conforms to the generic concept of centralized access control as seen in Figure 1.24. However, the approaches are differentiated in the way each handles client resource requests.

Images

Figure 1.24 - Flowchart of centralized access control

Images

Figure 1.25 - Proxy access control system

One approach to centralized access control is for the ACS to proxy client resource requests. In this case, the ACS accesses resources on behalf of the client. Figure 1.25 depicts centralized access control using a proxy methodology. This approach is commonly used in Web portals and database management systems (DBMSs). The front end to the portal or DBMS evaluates requests according to the permissions and rights of the subject and manipulates data based on the request. This approach tightly couples permissions on particular objects, but does not scale well. Permitting access via other protocols or to resources outside of the scope of the authentication server may involve substantial development effort.

The approach illustrated in Figure 1.26 directs client resource requests through a gatekeeper mechanism. Conceptually, the ACS allows or disallows client access to network resources. Similar to a guard working a perimeter gate, it allows or denies traffic according to the rights and permissions allocated to the subject. A firewall with an integrated authentication mechanism is an example of a centralized access control device using the gatekeeper approach. This type of approach is primarily used to control access to resources and services at particular locations within the protected network. Generally, it lacks the ability to control granular-level access to resources. This is frequently the case when different proprietary solutions are used for access control and resource availability within the system. For instance, a firewall may allow an externally authenticated individual access to a particular file server, but it might not be able to limit which directories are accessed.

The previous approaches to centralized access control exert communication dominance over the client. The ACS, whether by proxy or by traffic direction, prevents the client from attempting to communicate with resources outside of the bounds established by its rule sets and permissions. A different concept is to allow clients to roam freely in a network, but to validate requests for resources with the ACS before access is allowed. An authenticated subject is given a temporary credential that is used to identify him or her within the network. This approach, shown in Figure 1.27, scales better than the approaches previously mentioned. Subjects requesting a resource from a server submit their credential as proof of who they are. The server with the requested resource verifies the requested access with the ACS. If the credential is authentic and the subject has the appropriate permissions, then access to the resource is allowed.
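
The credential approach can be modeled as cooperating parties: the subject obtains a temporary credential from the ACS at login, and a resource server later verifies that credential with the ACS before serving the request. The Python sketch below is a toy model; the HMAC-based token format is an assumption for illustration, not a description of any specific product:

    # Sketch: credential-based centralized access control (cf. Figure 1.27).
    import hashlib
    import hmac
    import os

    class AccessControlServer:
        def __init__(self):
            self._key = os.urandom(32)     # secret known only to the ACS
            self._permissions = {"alice": {"fileserver:read"}}

        def login(self, user):
            """Issue a temporary credential after successful authentication."""
            tag = hmac.new(self._key, user.encode(), hashlib.sha256).hexdigest()
            return (user, tag)

        def verify(self, credential, permission):
            """Resource servers call back to the ACS to validate access."""
            user, tag = credential
            good = hmac.new(self._key, user.encode(), hashlib.sha256).hexdigest()
            return (hmac.compare_digest(tag, good)
                    and permission in self._permissions.get(user, set()))

    acs = AccessControlServer()
    credential = acs.login("alice")
    print(acs.verify(credential, "fileserver:read"))    # True
    print(acs.verify(credential, "fileserver:write"))   # False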

Images

Figure 1.26 - Centralized access control using a gatekeeper

Advantages of a centralized ACS include the following:

Images   Single point of management - Accounts, permissions, and rights are centrally managed. Having all of the security attributes in the same device simplifies access control management.

Images   Audit log access - A centralized access control mechanism also places the heart of the audit logging process in one location. This makes it easier for security and administrative personnel to access and manage the logs.

Images   Physical security - A device centralizing access control provides the opportunity to incorporate additional physical security measures promoting confidentiality and integrity. The access control device can be placed in a physically segregated area restricted to only those who are granted administrative access to the device.

Disadvantages of a centralized ACS to consider are the following:

Images

Figure 1.27 - Centralized access control with credentials

Images   Single point of failure - The unavailability of a centralized access control mechanism would prevent access to authorized resources. Localized network and power disruptions may prevent geographically dispersed users from accessing resources. Software or hardware failures in the device itself could also disrupt operations.

Images   Single point of compromise - An attacker successfully compromising the device may be able to grant themselves access to all protected resources. Alternatively, an attacker might be able to compromise the authentication information and subsequently masquerade as the authorized users, thus hiding the nefarious activity.

Images   Capacity - The device itself can be a limiting factor in centralized access control. It might not have sufficient capacity to concurrently handle a large number of requests or connections, given the type of implementation. Locating the device in an area of the network with heavy traffic or bandwidth constraints may impede or disrupt user access to resources.

Common Implementations

A number of protocols exist that support centralized access control. TACACS, TACACS+, RADIUS, and EAP are just a few of the most common access control protocols that a security architect should be familiar with.

Images   TACACS - This older protocol was originally used for authenticating dial-up users. RFC 1492, “An Access Control Protocol, Sometimes Called TACACS” (Finseth, 1993), describes the protocol and suggests that the acronym is short for “Terminal Access Controller Access Control System.” TACACS functions over UDP on port 49 or TCP on any locally defined port. This older protocol lacks many important features found in others that were more recently developed. A critical shortcoming in TACACS is the lack of encryption. All communication from a TACACS client to the server is in cleartext. Using this protocol through an untrusted or public network exposes the session and endpoints to a potential compromise.

Images   TACACS+ - This proprietary Cisco protocol is based on TACACS. It is primarily used with TCP on port 49. This protocol overcomes the security weaknesses of its predecessor by providing encryption for the packet payload. Authentication, Authorization, and Accounting (AAA) capabilities are built into the protocol, whereas they are missing from TACACS. However, the use of AAA capabilities is implementation specific. Therefore, a security architect must ensure that each TACACS+ implementation is consistent with the policy of the organization. An Internet-Draft memorandum describing TACACS+ is available through the IETF.10

Images   RADIUS - The Remote Authentication Dial In User Service (RADIUS) also has AAA capabilities built into the protocol. RADIUS is a centralized access control protocol commonly used in the telecommunications industry as well as by Internet service providers. A network access server (NAS) acting as the gateway to a network passes client access requests to the RADIUS server. This enables RADIUS to be used in a variety of environments, such as dial-up or wireless. Callback and challenge-response attributes are built into the protocol, supporting dial-up and other implementations that require additional security measures. According to the Internet Assigned Numbers Authority (IANA), which controls numbers for protocols, UDP is used to encapsulate RADIUS on ports 1812 for authentication and 1813 for accounting. However, some devices still implement RADIUS over ports 1645 and 1646, which were in use prior to the IANA decision. The user password is protected with MD5 and some additional XOR techniques when transmitted; a sketch of this hiding scheme follows this list. No other aspects of the protocol implement cryptographic measures. As of this writing, RFC 2865 is the latest Internet Engineering Task Force (IETF) memo describing RADIUS. However, there are a multitude of other RFCs that describe other implementations and attributes of RADIUS.11

Images   EAP - The Extensible Authentication Protocol (EAP) is a protocol supporting multiple authentication methods. It operates above the data link layer and therefore does not rely on IP. This enables its use in a variety of wired and wireless implementations. EAP, defined in RFC 3748 (Aboba, Blunk, Vollbrecht, Carlson, and Levkowetz, 2004), is essentially a peer-to-peer protocol. The protocol relies on the lower layer to ensure packet ordering, but retransmissions are the responsibility of EAP. Its design as an authentication protocol makes it unsuitable for data transport, which would be inefficient. Request, response, success, and failure are the four main types of messages, or codes, in the protocol. EAP can carry other authentication methods and protocols as long as they conform to the four types of codes. Both WPA and WPA2 rely on EAP as the supporting authentication method in enterprise deployments. A number of methods of implementing EAP exist. For instance, EAP-TLS, defined in RFC 5216, is one method that employs Public Key Infrastructure (PKI) to secure RADIUS in wireless environments.
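
As an illustration of the limited cryptographic protection in RADIUS noted above, the following sketch implements the User-Password hiding scheme described in RFC 2865: the password is padded to a 16-octet boundary and XORed, block by block, with an MD5 keystream derived from the shared secret and the Request Authenticator:

    # Sketch: RFC 2865 User-Password obfuscation (MD5 plus XOR chaining).
    import hashlib
    import os

    def hide_password(password: bytes, secret: bytes, authenticator: bytes) -> bytes:
        # Pad the password with NUL octets to a multiple of 16.
        padded = password + b"\x00" * (-len(password) % 16)
        hidden, prev = b"", authenticator
        for i in range(0, len(padded), 16):
            keystream = hashlib.md5(secret + prev).digest()
            block = bytes(p ^ k for p, k in zip(padded[i:i + 16], keystream))
            hidden += block
            prev = block               # chain the next block on the ciphertext
        return hidden

    secret = b"shared-secret"
    authenticator = os.urandom(16)     # the 16-octet Request Authenticator
    print(hide_password(b"password123", secret, authenticator).hex())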

Design Considerations

Protection of the device used for centralized access control is vital. Designs for this type of access control should include countermeasures that preserve the confidentiality, integrity, and availability of the implementation.

Images   Reduce attack surface - Remove unnecessary services from the device. Prevent the device from communicating on unauthorized ports. Consider the use of firewalls or packet-filtering routers between the device and the rest of the network. Only allow inbound and outbound connections according to those ports allocated to support management, authentication, and access control functionality.

Images   Active monitoring - Monitor network activity with the device. Dedicate an intrusion detection node at each physical network interface. Configure the intrusion detection node to monitor for known attacks as well as any activity outside the scope of the device. Enable internal auditing of the device. Each administrator provided access to the device should have their own unique credentials for access. Regularly review the audit logs for unauthorized changes or activity within the device.

Images   Device backup - The database of users and authorized resources can be very large. Regular backups are needed for disaster recovery and contingency operations. The confidentiality and integrity of the backups must be preserved. Ideally, backups should be encrypted. In cases where encryption is not practical or possible, strong physical controls will be needed. Passwords or other secret keys are retained in the backups; thus, the data contained within the backups must be protected from unauthorized disclosure or exposure. Integrity may also be affected when a loss of physical control over the backups is exploited by an attacker, who could plant unauthorized access that is subsequently granted during a restore process.

Images   Redundancy - Include sufficient redundancy within the design to ensure continued availability when failures occur. External aspects of the device, such as communications and power, are normally addressed for other components of the system. However, these items become more important when distributed systems or remote clients that are unaffected by such disruptions still depend on the centralized device to access resources. Where possible, implement a secondary access control device that can take over when the primary fails.

Decentralized Access Control

A decentralized access control mechanism is characterized by a collection of nodes that individually make access control decisions using a replicated database. The Microsoft Windows Domain model is a prime example of decentralized access control. Figure 1.28 provides a view of decentralized access control. Note that while authentication information is distributed, access control applies only to the local resource. This type of access control mechanism has the following qualities:

Images   Distributed - Access control decisions are made from different nodes. Each node makes decisions independent of the others.

Images   Shared database - Distributed nodes share the same database used to authenticate subjects. Changes to the database are communicated among the participating nodes. Ideally, the security policy is shared between nodes as well.

Images   Robust - The access control mechanism continues to operate when an access control node fails or communications are severed.

Images   Scalable - Access control nodes can be added or removed from the architecture with little impact on the rest of the architecture or the access control mechanism as a whole.

Although decentralized access control has advantages, it is not perfect. Some of the issues that need to be considered when implementing decentralized access control include

Images

Figure 1.28 - Decentralized access control

Images   Continuous synchronization considerations - The access control mechanism is only as current as the last synchronization. Excessive gaps in the time between synchronizations may allow inappropriate access to the system or objects.

Images   Bandwidth usage - Synchronization events might consume a lot of bandwidth. Nodes joined through low-bandwidth connections may consume a disproportionate amount of bandwidth when synchronizing.

Images   Physical and logical protection of each access control node - A compromise of one access control node could propagate to all of them. Successful attacks against the replicated database in one location could provide the attacker with the ability to attack any node participating in the architecture.

Design Considerations

Inconsistencies in security countermeasures are a common issue with systems using decentralized access control. Servers providing access control services could be located in different facilities in the same region or in different parts of the world. Ensuring that the intended design is consistently applied for each instance can be quite challenging.

Images   Physical security - The integrity of the access control system can be globally impacted by a weakness in the physical security at just one site. Every site hosting a decentralized access control server must have sufficient physical security. Physical security is affected not only by the facility itself, but also by the individuals granted access to the areas housing the system. Physical security controls should be explicitly defined and periodically validated. Every site containing a decentralized access control server must adhere to a minimum baseline of controls.

Images   Management coordination - A decentralized access control system that is geographically distributed may have a multitude of individuals from various offices involved with system development, maintenance, and administration. The activities and duties of these individuals should be considered when assigning rights and privileges to their accounts. Furthermore, actions affecting the baseline of the system should only be permitted through appropriate change control processes. Ideally, the access control system is designed to facilitate separation of duties and least privilege. However, applying these concepts can make management coordination difficult. The design of the access controls should consider organizational structure, such as human resources, that will need to participate in the management of the decentralized access controls and the system in general.

Images

Figure 1.29 - Remote maintenance with a VPN

Images   Remote maintenance - Organizations with decentralized access control often use remote management methods to maintain the distributed aspects of the system. Administrators conducting remote maintenance may issue a number of commands that, if observed, could allow an attacker to compromise the access control system. The activities of the administrators must be protected with encryption during the communication session. Some devices might not directly support sufficient encryption of the entire session. In these cases, alternative methods, such as a VPN, should be used to ensure that administrator actions captured on the network cannot be used to exploit the access control device. Where possible, connect the VPN device directly to an unused physical interface on the decentralized access control device. When this is not possible, use secondary network filtering, such as a firewall or router, to prevent propagation of the output from the VPN to other nodes in the subnet. Refer to Figure 1.29 for an example.

Images   Exclude from Demilitarized Zone (DMZ) - Decentralized access control devices share a common database allowing distributed AAA. An exploited vulnerability in one device, such as that seen in Figure 1.30, can result in compromise of resources dependent on the access control system. Given this situation, it is best to avoid placing a decentralized access control device within a DMZ. Servers and devices within a DMZ should not contain sensitive information. Rather, access to sensitive information by those outside of the trusted aspects of the organizational network should be handled by proxy servers that do not have rights or permissions outside of the DMZ. Figure 1.31 illustrates this point. This design philosophy assumes that servers within the DMZ are at greater risk of exploitation than those that are inside the trusted area of the network. Decentralized access control servers require a high degree of protection and as such should not be placed directly inside the DMZ.

Images

Figure 1.30 - Problems with decentralized access control servers in a DMZ

Images

Figure 1.31 - Using proxy servers in a DMZ to counter attacks

Federated Access Control

Authentication is an essential part of any access control system. Each subject having access to organizational resources must be appropriately authenticated prior to being allowed access. This becomes problematic when an organization extends resource access to business partners and customers. Organizations with a large customer or partner population could potentially need to manage more accounts for external subjects than for their own employees. Recent efforts in the area of Service-Oriented Architecture (SOA) using Web 2.0 have enabled organizations to share resources with individual subjects who are authenticated by an external organization. This is made possible through the establishment of a federation of organizations.

A federation consists of two or more organizational entities with access control systems that do not interoperate or functionally trust each other. At least one of the federation members has authenticated users that desire access to resources from another of the participating organizations. Members of the federation agree to recognize the claimed identity of a subject that has been fully authenticated by one of the members. Rather than require an individual to have a separate account in each organization, only one is required. This is the power of federated access, which is closely related to identity management and single sign-on, but crosses organizational boundaries.12

Federated access control occurs when an organization controlling access to a particular resource allows a subject access based on his or her identity, which is affirmed by the partner organization. The proof provided by a subject typically consists of a token or piece of information digitally signed by the authenticating organization, affirming the identity of the individual. Once the proof is verified, access is permitted to the subject. Access to resources can further be mediated using DAC, RBAC, or any other access control technique deemed appropriate by the organization. Figure 1.32 illustrates an example of how a federated access control could be implemented.

Images

Figure 1.32 - Federated access control
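
The proof exchange in Figure 1.32 can be reduced to a small model: the authenticating member signs an identity claim, and the resource-owning member verifies that signature before mediating access. The sketch below uses a pre-shared federation key with HMAC for brevity; a production federation would typically rely on public-key signatures, such as those carried in SAML assertions:

    # Sketch: federated access via a signed identity assertion.
    import hashlib
    import hmac
    import json

    FEDERATION_KEY = b"key-agreed-between-member-orgs"   # illustrative only

    def issue_assertion(identity_provider, subject):
        """The member that authenticated the subject affirms the identity."""
        claim = json.dumps({"idp": identity_provider, "sub": subject})
        sig = hmac.new(FEDERATION_KEY, claim.encode(), hashlib.sha256).hexdigest()
        return claim, sig

    def accept_assertion(claim, sig):
        """The resource-owning member verifies the proof before granting access."""
        expected = hmac.new(FEDERATION_KEY, claim.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, expected)

    claim, sig = issue_assertion("partner.example.org", "alice")
    print(accept_assertion(claim, sig))   # True: access may now be mediated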

Design Considerations

There are a number of issues that must be carefully considered when implementing federated access control. Because an organization is exposing protected resources, great care must be taken when providing access. This is a prudent precaution because the federation member providing access to a subject of another organization is required to extend a substantial amount of trust.

Images   Cooperative effort - The federation is a joint effort that derives its ability to function based on the agreements of the participants. A written agreement should detail how individuals are authenticated and the handling of proofs used to confirm a subject’s identity. This is minimally needed to ensure interoperability, but more importantly, an agreement should establish a set of security requirements that must be met by the participants.

Images   Mutual risk - Compromised subject accounts in one organization can impact other federation members. A security architect should consider this likelihood when designing controls to support participation in the federation. Nefarious activity could cause damage to both organizations and can even strain the trust of the participating members of the federation. In this regard, abuses and attacks must be considered ahead of time to ensure that appropriate countermeasures and responses are in place prior to accepting subject identities of external organizations within the federation.

Images   Utilize a DMZ - Externally authenticated subjects from a participating federation member should not be allowed direct access to internal resources. Just as in e-commerce, it is best to contain the activities of these types of customers and partners within a DMZ as well. Use proxy servers within the DMZ to retrieve information that is needed. Avoid storing sensitive information in the DMZ.

Images   Exclude access control integration - From the standpoint of a federation member, a previously authenticated subject from another organization should not be considered a subject in the primary access control system of the organization. In this regard, it is better to establish a centralized access control system dedicated to the support of a federated access control rather than mix outsiders with those on the inside. Keeping this type of barrier will prevent accidental or malicious escalation of privilege due to a weakness in the main access control system.

Directories and Access Control

In modern operating systems, information is stored and retrieved from directory structures. Data elements comprising information may be kept in a single large file, but generally, access to the information is in a hierarchical format that is managed either by the operating system or an application. Many commodity software products make use of proprietary formats to store information. This can inhibit the ability to share information between products of different vendors. Furthermore, proprietary formats might lack granular access control capabilities, might not interoperate with host-based access controls, or might fail to function with those of other hosts.

The challenges of sharing information were recognized early in the development of IT systems. The International Telecommunication Union - Telecommunication Standardization Sector (ITU-T) addressed this problem by developing the X.500 Directory Specifications. This specification was subsequently adopted by the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) as the ISO/IEC 9594 multipart standard.

The X.500 Directory Specification provides the framework to specify the attributes used to create a directory as well as the methods used to access its objects. Entries in the directory are organized into a Directory Information Tree comprising different types of objects. A directory schema describes how information is organized in a directory, while object classes indicate how each entry is to be structured. Access and management of X.500 information is conducted using the Directory Access Protocol (DAP), which requires an OSI-compliant protocol stack. In this regard, DAP is the standard access method for X.500.

Protecting the confidentiality of information transferred using DAP is not defined in X.500. However, protection of the authentication factors, such as passwords, used to connect or bind to an X.500 directory is specified in the X.509 subset of the standard. This subset establishes the basis for Public Key Infrastructure (PKI) certificates. It also defines hashing techniques and the use of public-key encryption as methods to protect the confidentiality of bind requests.

The IETF has subsequently defined an alternative method to access an X.500-based directory over IP that is known as Lightweight Directory Access Protocol (LDAP). RFC 4510 (Zeilenga, 2006) provides a general overview of the specifications of LDAP and lists other applicable RFCs. Version 3 of the protocol (LDAPv3) is described in RFC 4511 and supports a limited number of DAP operations over TCP port 389. LDAPv3 also supports the use of TLS to secure directory access over TCP port 636. LDAP supports authenticated and unauthenticated access during the bind process. When authentication is desired, the credentials could be sent in plaintext using simple authentication or secured with Simple Authentication and Security Layer (SASL). Refer to RFC 4422 for a detailed explanation of SASL.
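
As a brief illustration of LDAPv3 in practice, the following sketch performs an authenticated bind over TLS and a simple search using the third-party Python ldap3 package. The hostname, bind DN, password, and directory layout are placeholders:

    # Sketch: authenticated LDAPv3 bind and search over TLS (port 636),
    # using the third-party ldap3 package. All names are placeholders.
    from ldap3 import ALL, Connection, Server

    server = Server("ldap.example.org", port=636, use_ssl=True, get_info=ALL)
    conn = Connection(server,
                      user="cn=reader,dc=example,dc=org",
                      password="change-me",
                      auto_bind=True)      # simple bind, protected by TLS

    conn.search(search_base="dc=example,dc=org",
                search_filter="(objectClass=person)",
                attributes=["cn", "mail"])
    for entry in conn.entries:
        print(entry)
    conn.unbind()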

The concept of using directory access protocols is quite powerful. Vendors of network operating systems have embraced the concept and integrated directory access protocols into their products. For instance, the latest Microsoft servers rely extensively on their proprietary DAP implementation known as Active Directory (AD). The initial release of AD had limited support for LDAP. However, AD can now be extensively accessed using LDAP. This enables cross-domain and vendor product access to Windows resources. Security architects are advised to learn the intricacies of LDAP as it is likely to continue to increase in relevance and importance in the future.

Design Considerations

Directory specifications such as X.500 enable an organization to publish information in a way that supports hierarchical access to structured information. However, attention to security must be given when deploying a directory solution.

Images   Access security - Some directories may contain sensitive information. Ensure that directory access is protected with cryptographic measures in instances where an exposure of the information transiting a network would be unacceptable.

Images   Protect authentication information - A Simple Bind operation passes authentication information in cleartext over the network. Accepting anonymous or unauthenticated bind operations can also be a default behavior in some implementations. Verify that the authentication mechanism functions as intended. For publicly accessible servers within a DMZ, consider storing authentication information in another location within the private network.

Images   Leverage existing access control mechanisms - According to RFC 2820, LDAP does not currently specify an access control model. Therefore, it is important that existing access control mechanisms on the host server be employed to protect directory access. In cases where anonymous or unauthenticated access to a directory is permitted, other files in the system should not be accessible to the service or daemon running the directory service. This is a prudent measure that can protect the rest of the system in the event a vulnerability in the service is found and exploited.

Identity Management

Identification is a token representation of a particular subject. In the physical realm, official documents such as a driver’s license or a passport are representative of different token forms identifying the same person. Both types of identification have different levels of trust. For instance, a passport is a trusted form of identification when traveling internationally, while a driver’s license is not normally given this level of trust. In contrast, within the United States, a passport is insufficient proof of an individual’s ability to operate a motor vehicle. In this regard, one type of identification is not compatible with the purpose or trust of another even though they refer to the same person.

In the cyber realm, individuals frequently use multiple identities to access different services within the same system. Similar to Figure 1.33, an individual may have one account for network login and another for accessing a database. The most common form of identity is an account name, which is frequently nothing more than a string of characters. An account name, or identification, is the logical representation of a subject in most access control systems.

Individuals join and depart organizations. Assigned duties will also change, which might require extended or reduced access to system resources and services. In these situations, it is necessary to equate the identity of an individual with the totality of access needed to conduct their duties. Multiple identities and changes in access exemplify the need for identity management.

As previously mentioned, it is not uncommon for an individual to have multiple identities within an organizational system. Identities exist for a variety of IT products within any given system. Some of the more common instances include

Images   Operating systems - Microsoft, Unix, Linux, and mainframes are the most prevalent.

Images   Directory services - LDAP-enabled operating systems and applications may integrate with other products or deploy their own identity schemes.

Images   Public Key Infrastructure (PKI) - Each individual issued a public key certificate essentially has an identity within the PKI.

Images   Network authentication - Authentication of network devices makes use of a variety of protocols such as XACML13, Kerberos, SSH, RADIUS, and SAML14. Each may have its own account identifier or integrate with an operating system.

Images   Network management - Simple Network Management Protocol (SNMP).

Images   Database management systems - Many popular databases have their own internal accounts for user access. Some integrate with the operating system for authentication purposes, but an account is still maintained in the database.

Images   E-mail - Most e-mail servers are integrated with the host operating system for identification purposes, but some are not. Access to an e-mail server from outside the organizational boundary may require a separate account in certain cases.

Images   Smartcards - These tokens represent the identification of the authorized holder. A separate list must be maintained matching a token to a particular user. Some smartcards contain multiple identities that can be used to accommodate multiple users.

Images   Biometrics - The identifier comprises features of an individual. Devices supporting biometrics collect the minutiae according to a particular template.

Images   Network equipment - Routers, switches, encryption devices, and printers commonly have console or remote management interfaces. These types of devices frequently have only one type of administrative account that is shared by those responsible for maintenance.

Images   Networked applications - Specialized software and services, such as financial accounting packages, may implement their own access control mechanism.

Images   Web applications - Many Web-based applications relay a user’s account and password to a back-end database. However, some specialized Web applications have their own access control mechanism, which is separate from the underlying operating system or back-end database.

Images   Encryption products - Media encryption products such as those used for hard drive or file encryption may rely on the combined use of an identifier and authenticator. Other forms of identification, such as smartcards, might also be used.

Account identification is the fundamental identity for a system user. Within an access control mechanism, a subject is frequently equivalent to an account identifier. However, a user may have multiple accounts within a system. The identity of an individual user is therefore considered to encompass the totality of his or her access within a system.

Should standardized account-naming conventions be used within a system? The argument for standardization is that it simplifies account management. Using the same naming convention for system and application accounts expedites an administrator’s ability to associate account actions and permissions, as well as to manipulate account attributes, for a given user. The counterargument is that a guessable naming convention gives attackers an advantage against a system. This may allow them to target particular users for phishing and spyware attacks or conduct focused attacks against system accounts. However, the ease of administration need not give rise to a weakness in the system. Consider the following:

E-mail Filtering - Malicious attachments and messages with mismatched header information should be discarded at the e-mail server.

Malware Scanning - Discovering and eliminating malware mitigates the threat.

Account Timeouts - Establishing low thresholds on the number of failed attempts mitigates guessing attacks.

User Awareness - Training is essential in any security program.

Process Validation - Knowing what is allowed to execute in the environment will enable countermeasures against and discovery of unauthorized software.

Intrusion Detection - Tune the network IDS to detect attacks against accounts. Use host-based IDS to identify malicious or abnormal activity.

Audit Log Review - Look for attempts to access an account from an unusual location. For instance, attempts to log on with service accounts from a workstation strongly indicate system misuse when it is contrary to a system policy.

A standard naming convention, when accompanied by other controls, will simplify account management without sacrificing the system’s security posture.
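
Several of the compensating controls above lend themselves to simple automation. The sketch below scans logon records for service accounts used interactively from workstations, the misuse pattern called out under audit log review; the log format and account names are invented for illustration:

    # Sketch: flag interactive logons by service accounts from workstations.
    # The log format and naming conventions are illustrative only.

    SERVICE_ACCOUNTS = {"svc_backup", "svc_web"}

    log_lines = [
        "2024-05-01 09:12 LOGON user=jsmith host=WKSTN-042",
        "2024-05-01 09:30 LOGON user=svc_backup host=WKSTN-017",  # suspicious
        "2024-05-01 10:02 LOGON user=svc_web host=SRV-DB-01",
    ]

    for line in log_lines:
        fields = dict(item.split("=") for item in line.split()[3:])
        if fields["user"] in SERVICE_ACCOUNTS and fields["host"].startswith("WKSTN"):
            print("Possible misuse:", line)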

The assignment of account identifications should be tightly coupled with the concepts of separation of duties and least privilege. This means if an individual does not require access to a particular database for his or her duties, then an associated account should not be given. The security design of a system should accommodate this type of granular access. Furthermore, manual or automated methods should be used to associate, track, and validate an individual with the rights and permissions granted. Manual methods could entail the use of spreadsheets or small databases. Automated management typically requires the use of specialized tools for single sign-on.

Some IT products have only one administrative account for management purposes. Linux, Unix, and many networking devices rely on a root or superuser account for management. It is quite common for IT departments to share the passwords associated with accounts among a group of individuals responsible for administrative support. Accountability is difficult to achieve when accounts are shared. Considerations for achieving some level of accountability for shared accounts include

Images   Use of specialized protocols - Consider the use of TACACS+ or RADIUS to manage access to network devices.

Images   Manual tracking of account usage - Implement a log to track the time and date of an individual’s use of the privileged account. Change the password at short intervals where practical.

Images   Verification of audit events - Compare audit records for the device against those of manual logs or from specialized protocols.

Images

Figure 1.34 - Simple method to control a shared account

Some organizations have sufficient resources to implement specialized software packages to manage shared accounts. However, many less well-funded organizations must rely on manual techniques to control these accounts. A well-defined procedure or process implementing steps in a particular order is essentially a protocol. Figure 1.34 demonstrates a manual protocol that can be used to control access to a shared account.

The manual protocol described in Figure 1.34 is very simplistic and has a significant drawback: the security person has access to the password as well as the audit records. This might enable persons in that capacity to conceal any nefarious activity they might pursue. Simply put, this method may not implement sufficient separation of duties for some organizations. The situation can be improved by introducing someone else who reviews the audit records or manages the password. Figure 1.35 describes an improvement in which part of the password is split with a third party identified as a helper. This improved manual protocol also has a built-in feature that enables the security officer to evaluate the strength of the password while establishing better separation of duties.
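
The splitting idea behind Figure 1.35 can be modeled very simply: neither party alone holds enough material to reconstruct the shared-account password. A two-share XOR split, sketched below in Python, is one way to achieve this; the mechanism is an illustration rather than a description of the figure’s exact protocol:

    # Sketch: splitting a shared-account password into two shares.
    # Neither share alone reveals the password; XOR of both recovers it.
    import os

    password = b"Sup3r-S3cret-R00t-Pw"
    helper_share = os.urandom(len(password))    # held by the helper
    officer_share = bytes(p ^ s for p, s in zip(password, helper_share))

    # Recombination requires both parties, enforcing separation of duties.
    recovered = bytes(a ^ b for a, b in zip(officer_share, helper_share))
    assert recovered == password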

Most devices have physical and logical addresses. The physical, or media access control (MAC), address should be unique for every device within a system. Unfortunately, this may not always be the case. In the early days, physical addresses were hardwired into network interfaces by the manufacturer and could not be easily changed. Nowadays, some devices, and even operating systems, provide the ability to change the physical address. The most prevalent logical address is the Internet Protocol (IP) address. It is either manually assigned to a device or acquired through the Dynamic Host Configuration Protocol (DHCP). These attributes should be used to help uniquely identify each host connected to an organizational network. Due to the non-persistent nature of a device identity, it should not be relied upon exclusively as a means of authentication. Rather, device identity is a starting point that is useful to discover unauthorized devices connected to a system. Knowing what is connected to a network is an essential element in protecting a system. Use a scanning tool, such as Nmap, to discover unauthorized devices on a network.15

Images

Figure 1.35 - Password splitting to achieve greater accountability for a shared account

Some devices, such as those used for encryption, have the ability to participate in a PKI. Each device participating in the PKI has its own private key and public key certificate loaded on the system. This provides an enhanced means to identify participating devices. However, the integrity of those participating in the PKI is predicated on the physical security of the device. Any breakdown in the physical security affecting a device calls into question the authenticity of those affected by the weak controls. Use of a PKI to manage device identity is powerful, but it must be moderated with the appropriate physical controls as well.

The security architect should design systems to aid in the management of the complete identity of each subject. Properly designed identification management will help support the concepts of least privilege and separation of duties. Shared accounts present special challenges that must be accommodated. Device identification should also be a factor in a system’s design.

Accounting

The initial concept of system accounting had a dual meaning. The term accounting evolved from the mainframe world. In the early days, computing time was very expensive. To recoup resources, system users or their departments were billed or charged back according to the amount of processing time consumed. In this respect, accounting literally meant financial accountability. However, it was also used in the traditional sense of security as in accounting for user actions.

Today, accounting is predominantly used to establish individual accountability. The term accounting goes by different names such as event logging, system logging, and auditing. Accounting records provide security personnel with the means to identify and investigate system misuse or anomalies. Ideally, all instances where accounts are used as a means to identify subjects for access control purposes will have an auditing capability. The purpose of accounting is, at a minimum, to establish who accessed what. Recall that the who is the subject and the what is the resource, or object. What should also include administrative activity such as policy changes and account management. To be more precise, it is also helpful to know from where and when the access occurred as well as the effect of the event. So accounting or auditing records should minimally include the following (a minimal record structure is sketched after the list):

Who - The subject (device or user) conducting the action

What - Resources affected and administrative activity conducted

Where - Location of the subject and resource

When - Time and date of the attempt

Effect - Whether the action succeeded
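One illustrative way to capture these five elements is a simple structured record, sketched below. The field values are invented for the example and are not prescribed by any particular logging standard.

```python
# Hedged sketch of a minimal audit record holding the five elements above.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditRecord:
    who: str         # subject (user or device) conducting the action
    what: str        # resource affected or administrative activity conducted
    where: str       # location of the subject/resource, e.g., a source IP
    when: datetime   # time and date of the attempt
    effect: str      # whether the action succeeded

record = AuditRecord("jsmith", "read:/finance/q3-report.xlsx", "10.0.0.5",
                     datetime.now(timezone.utc), "success")
print(record)
```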

Audit records are frequently used to identify anomalous or malicious activity. Investigators piece together information from audit logs into a timeline of actions to determine what happened. This effort becomes more tedious when there are differences between system clocks. The problem is best resolved by synchronizing the clocks of each device. Manual and automated methods can be used to synchronize system clocks. Regardless of the approach, a central clock must be chosen as the reference by which all other device clocks will be set. It is best to select a clock that is highly accurate and not affected by system activity. Manual methods are often necessary for those devices that lack automated means to update their time. Manual synchronization should be conducted at regular intervals. However, the best approach is to use automated techniques such as the Network Time Protocol (NTP). The protocol enables device clock synchronization from one host that acquires a time-base from another. Figure 1.36 illustrates the use of NTP in a system.

Images

Figure 1.36 - Clock synchronization in a system
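Building on the NTP discussion above, the following sketch checks a host’s clock offset against an NTP reference. It assumes the third-party ntplib package is available (pip install ntplib); the public pool hostname and the one-second drift tolerance are illustrative choices, not organizational recommendations.

```python
# Hedged sketch: measure local clock offset against an NTP reference.
import ntplib  # third-party package; assumed available for this example

client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)  # illustrative time source
print(f"clock offset from reference: {response.offset:+.3f} seconds")
if abs(response.offset) > 1.0:  # illustrative drift tolerance
    print("WARNING: drift this large complicates audit timeline reconstruction")
```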

Log diversity abounds as vendors are free to proliferate their proprietary formats. Many systems have logs from numerous points and in a variety of formats. Fortunately, most vendors have settled on either syslog or integration with Microsoft Windows Event log as the audit recording method. The locations and types of services generating logs are substantial. Figure 1.37 identifies just a few technologies that can be used to generate security events and information.

Images

Figure 1.37 - Sources of security events, information, and logs

Audit records are most useful when they are analyzed. The most efficient way to analyze audit records and security events is when they are consolidated. Relevant audit records should be regularly collected into a centralized repository that enables their examination. Figure 1.38 shows several methods commonly used to collect security logs and events. Further explanation of these popular methods used to collect audit and security information follows:

Images   Listen - Use a service that receives events as they are transmitted from network nodes. A syslog server is an example of this (a minimal listener is sketched after this list).

Images   Polling - A centralized server can be used to query other services to collect events periodically. It might also be used to copy log files from a shared directory.

Images   Agents - Autonomous process running on a node collects events as they are generated and sends them to a centralized collection point. Agents can be used to filter log messages and only transmit those that are the most important.
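To make the listen method concrete, the following sketch is a bare-bones UDP syslog receiver. It is deliberately minimal: it binds to the conventional syslog port 514 (which typically requires elevated privileges), does no parsing, and simply prints what arrives.

```python
# Hedged sketch of the "listen" collection method: a minimal UDP syslog
# receiver. Real collectors parse, filter, and forward to central storage.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 514))   # conventional syslog port; needs privilege
while True:
    data, (src_ip, _port) = sock.recvfrom(4096)
    print(f"{src_ip}: {data.decode(errors='replace').strip()}")
```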

Traditionally, there have been two distinct approaches to security management: Security Event Management (SEM) and Security Information Management (SIM). At a high level, Security Event Management evaluates collected information in order to identify security events, while Security Information Management concentrates on collecting the information and performing some analysis of it. While these systems have existed for some time as independent elements of many security architectures, their combination into a single unified system, Security Information and Event Management (SIEM), allows for a more in-depth look at the data collected, combining SIM and SEM plus additional capabilities. The additional capabilities of SIEM systems are listed below.

Images

Figure 1.38 - Methods of collecting security logs and events

  1. Data Aggregation: SIEM/LM (log management) solutions aggregate data from many sources, including network devices, security appliances, servers, databases, and applications, providing the ability to consolidate monitored data and help avoid missing crucial events.

  2. Correlation: looks for common attributes and links events together into meaningful bundles. This technology provides the ability to perform a variety of correlation techniques to integrate different sources, in order to turn data into useful information (a simple correlation-and-alerting example is sketched after this list).

  3. Alerting: the automated analysis of correlated events and production of alerts, to notify recipients of immediate issues.

  4. Dashboards: SIEM/LM tools take event data and turn it into informational charts to assist in seeing patterns, or identifying activity that is not forming a standard pattern.

  5. Compliance: SIEM applications can be employed to automate the gathering of compliance data, producing reports that adapt to existing security, governance, and auditing processes.

  6. Retention: SIEM/SIM solutions employ long-term storage of historical data to facilitate correlation of data over time, and to provide the retention necessary for compliance requirements.
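The following sketch illustrates correlation and alerting in miniature: failed-logon events are linked by source address, and an alert is raised when a threshold is crossed. The events, field names, and threshold are all invented for the example; real SIEM correlation rules are considerably richer.

```python
# Hedged sketch: correlate failed logons by source address and alert on a
# threshold. Event data and the threshold are illustrative placeholders.
from collections import Counter

events = [
    {"type": "failed_logon", "src": "10.0.0.5", "user": "root"},
    {"type": "failed_logon", "src": "10.0.0.5", "user": "admin"},
    {"type": "failed_logon", "src": "10.0.0.5", "user": "jsmith"},
    {"type": "firewall_deny", "src": "10.0.0.9"},
]

THRESHOLD = 3
failures = Counter(e["src"] for e in events if e["type"] == "failed_logon")
for src, count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed logons correlated from {src}")
```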

Some security-related logs can become quite large, so the period used to collect them is important. File transfers of large logs can consume a substantial amount of network bandwidth. Similarly, an audit log source that automatically transmits events in near real time can affect bandwidth as well. The timing of audit log collection should be weighed against the operational needs of system users as well as security requirements.

Two important considerations regarding audit log collection involve analysis and forensic value. Analysis requires logs to be consolidated in such a way that patterns of activity across multiple logs can be analyzed to detect violations and misuse. Inserting logs into a database is an ideal way to accomplish this. Log records are decomposed into common elements and inserted into the database for query analysis. Unimportant records are commonly filtered out, as they do not add value to the investigation. The decomposition and filtering help an analyst discover security issues. It may also be necessary for the audit records to be preserved in their original state if they are to be presented as evidence in a court of law. In this case, unmodified logs are kept in a special archive where their integrity is assured. Cryptographic hashes of the logs should also be retained and protected to serve as evidence of the log’s integrity. Accordingly, some logs are parsed for use, while others are retained as potential evidence.
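A minimal way to support the integrity claim is to record a cryptographic hash when a log is archived, as sketched below. The path is illustrative; in practice the digest would be stored separately from the archive, under its own access controls.

```python
# Hedged sketch: compute a SHA-256 digest of an archived log so its integrity
# can later be demonstrated. The file path is an illustrative placeholder.
import hashlib

def hash_file(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # stream large logs
            digest.update(chunk)
    return digest.hexdigest()

print(hash_file("/var/log/auth.log"))  # store this value under separate controls
```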

A security architect must consider all of the possible locations where audit logs can be collected. Numerous devices, services, and applications produce a variety of useful logging information. At a minimum, audit logs should be collected from those devices where access control decisions are made. The collection process should be planned to consolidate the necessary logs in a timely manner, avoiding operational conflicts, providing the ability for detailed analysis, while guaranteeing their forensic value.

Access Control Administration and Management Concepts

Access Control Administration

Each type of access control architecture presents its own challenges for managing the fundamental aspects of subjects, objects, permissions, and rights. The security architect should consider the mechanisms that must be implemented to enable access control administration.

There are two principal entity types that are the subjects of access control. Ultimately, a subject is either a human or an automated feature of the system. The architect should ensure processes exist such that accounts assigned to each entity type are

Images   Authorized - Each account is approved and documented. This establishes its authenticity.

Images   Monitored - Accounts should be assessed for inactivity and abuse. Unused and unnecessary accounts are promptly removed. Employ specialized tools to identify abnormal use of accounts (a simple dormancy check is sketched after this list).

Images   Validated - The continued need for each account is reviewed on a periodic basis as established by the organization. This affirms the authenticity of each account.
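As one small piece of the monitoring task, the sketch below flags accounts that have not been used within a dormancy window. The account data and the 90-day window are invented for illustration; an organization would set its own threshold by policy.

```python
# Hedged sketch: flag accounts unused beyond a dormancy window.
from datetime import datetime, timedelta, timezone

DORMANCY = timedelta(days=90)  # illustrative policy threshold
last_logon = {
    "jsmith":     datetime(2024, 1, 10, tzinfo=timezone.utc),
    "svc_backup": datetime(2024, 6, 1, tzinfo=timezone.utc),
}

now = datetime.now(timezone.utc)
for account, seen in last_logon.items():
    if now - seen > DORMANCY:
        print(f"dormant account, review for removal: {account}")
```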

Each person requiring access to a system should be individually identified. This enables accountability as well as the application of granular access controls. Limit the number of accounts assigned to each individual. Avoid the use of shared accounts as this will make accountability much more difficult.

Processes executing on a system do so within a security context; that is, every process has certain rights and permissions. An operating system that relies on an access control model, such as DAC, must regard human interactions and running processes within an appropriate security context. Human interaction occurs through an associated process. This might be a single process or more than one hundred processes. Regardless of the number, the actions taken by the process are forced to conform to the security context of the user. In this respect, processes are not permitted to take any actions beyond what the user is allowed. Processes not executing in the context of a user may have a security context of the system or their own unique context.

Some system processes have the special ability (right) to impersonate a user. When applied correctly, this technique allows the service to execute with the permissions and rights of the client. Processes with the ability to impersonate must be closely monitored. Vulnerabilities associated with impersonating processes may allow privilege escalation and system compromise.
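On POSIX systems, impersonation of this kind can be sketched with the effective UID/GID interfaces, as below. This is a minimal sketch assuming the service starts with root privileges; a production implementation would also adjust supplementary groups and handle errors, and the user name is a placeholder.

```python
# Hedged sketch: a privileged POSIX service temporarily acting in a client's
# security context by switching effective IDs, then reverting.
import os
import pwd

def run_as(username: str, action):
    """Execute action() with the effective identity of username, then revert."""
    entry = pwd.getpwnam(username)
    saved_uid, saved_gid = os.geteuid(), os.getegid()
    os.setegid(entry.pw_gid)   # switch group first, while still privileged
    os.seteuid(entry.pw_uid)   # now operating in the user's security context
    try:
        return action()
    finally:
        os.seteuid(saved_uid)  # revert; the revert capability is exactly why
        os.setegid(saved_gid)  # impersonating processes need close monitoring
```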

Although permissions are commonly associated with objects within a system, a security architect must think in a broader perspective. A system is a collection of resources that may or may not be considered objects within a given access control mechanism. Each resource may contain many objects that can be managed by one or more access control techniques. Resources provide utility to the end user and may be consumable or enable system functionality. Diversity in resource types complicates the application of access control. Consider the following shortlist of resources common to many systems:

Images   Web server - Traditional Web servers enable access to Web-based content. Vendors are more frequently embedding Web servers into hardware products. In some cases, the access control mechanisms of embedded Web servers do not integrate with other security mechanisms, creating isolated islands that require security maintenance and configuration.

Images   E-mail server - This ubiquitous communication platform may be accessible inside as well as outside of an organization’s security boundary. This resource is commonly abused by those peddling spam and launching phishing attacks.

Images   Networked printer - This device is loosely coupled to the access control mechanisms of a system. A printer is normally dependent on other security measures to protect it from abuse.

Images   Network devices - Switches, routers, and firewalls are resources that enable system functionality. Many have console or remote management ports that enable device configuration. The internal configuration features may not easily integrate with other access control mechanisms within a system. The loss of availability of one of these resources will likely impact the ability of users to access other resources.

Images   Applications - The most common applications are those installed locally on a system. However, advances in mobile code and the continuing momentum of cloud-based system solution architectures demand a broader perspective on what is considered an application and how it should be secured.

Images   Removable media - Removable-media devices such as USB, CD-ROM, tape, and disk drives provide users with access to resources beyond local or networked system storage. These resources enable sharing, backup, and transport of information.

Images   Internet access - Many organizations depend on Internet access as a core business resource. System users depend on the Internet for research, collaboration, and e-commerce. Internet use is similar to human use of fire. If used inappropriately, it can consume organizational resources in time and money. When used effectively, the Internet can be used to forge a wealth of opportunities to transact business and advance organizational objectives.

Images   File server - This type of repository contains the bulk of an organization’s information. A variety of networked storage is available for system consumption. Simple file servers functioning with commodity operating systems are the principal remote storage devices in any given system. Dedicated devices such as storage area networks and network attached storage are gaining in popularity as a means to enable massive storage capabilities for organizations of any size. Integrating these devices within an access control methodology for a system requires attention to detail on the part of a security architect.

Images   Databases - Much of the business intelligence of an organization is kept in database repositories. Access to databases is often routed through a Web tier. But direct access through specialized tools or other databases is also a common practice. Database management systems often have their own access control mechanisms that are stand-alone, or, in some instances, integrate with those of other vendor products. As with file servers, a security architect should give special attention to database access control planning and implementation.

A system user is the principal subject of interest within an access control mechanism. Resource permissions are granted to users individually or collectively. The access control mechanism constrains user interaction with a resource object according to the permission granted. The possible exception to this statement is when a subject is identified as an object owner, such as within a DAC mechanism. In this case, the subject has full control over the permissions on the object. For instance, suppose an object owner only has read access to the object. This would prevent casual or unintentional modification of the object. However, as the object owner, a subject could modify his or her permissions on the object to perform any action desired. It is important to note that object owners have the ability to grant any permission to themselves or to others at their discretion when operating within the context of DAC.

Ideally, every object will be subject to an access control mechanism. This affords the ability to granularly control access to all aspects of a given resource. Unfortunately, this is not always the case. There are many technological resources that are not fully subject to an access control mechanism. For instance, network printers are commonly accessible to any device attached to a network. As long as the appropriate protocols are used, most printers will service any request received. Similarly, file permissions are not present on removable media. Aside from backups, most removable media are formatted using a file allocation table (FAT) or ISO 9660 for DVD and CD-ROM. This is understandable, considering that file-level access controls would not be useable from one access control system to another where a database of subjects is not held in common. Where granular access control is not possible, other methods are employed to control access to the entire resource. Although the all-or-nothing approach is less desirable, it is nevertheless a valid access control technique given security requirements and associated risk.

An effective access control mechanism is not easily bypassed. The mechanism should tightly couple permissions with protected objects. For any object protected, the permissions should be enforced; that is, a subject should not be able to circumvent an assigned permission by manipulating the object or permissions when not authorized. Any situation that allows a subject to exceed permissions represents a flaw in the access control mechanism. Vulnerabilities such as these are regularly discovered in commercial products. A security architect must be acutely aware of this situation when an organization develops proprietary applications with access control mechanisms. Testing of any homegrown security solution for access control weaknesses is critical.
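Because flaws in homegrown mechanisms are so common, even trivial enforcement logic deserves explicit tests. The sketch below is a unit-style check against a stand-in access check function; the function and the subjects are invented to illustrate the kind of negative testing that matters, namely that unauthorized access is denied.

```python
# Hedged sketch: negative tests for a stand-in access control check.
def check_access(acl: dict[str, set[str]], subject: str, permission: str) -> bool:
    """Return True only if the subject was explicitly granted the permission."""
    return permission in acl.get(subject, set())

def test_unauthorized_access_is_denied():
    acl = {"alice": {"read"}}
    assert check_access(acl, "alice", "read")
    assert not check_access(acl, "alice", "write")   # no silent escalation
    assert not check_access(acl, "mallory", "read")  # unknown subject denied

test_unauthorized_access_is_denied()
print("access control checks passed")
```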

The Power to Circumvent Security

Most systems do not encrypt stored data by default. When a system is powered down intentionally or not, the access control mechanism ceases operation. Data is vulnerable to physical attack when the system is without power. This could allow anyone with physical access to bypass access control and auditing mechanisms. To protect against this situation, a system should be designed with

  1. Auditing of system restart, power on, and power off events

  2. Sufficient physical security measures to prevent or detect unauthorized access to system hardware

  3. Redundant or backup power

All unscheduled power-down or system restart events should be followed up with an investigation to determine the reason for the interruption and to detect if any malicious activity occurred.

File sharing occurs outside and within access control mechanisms. Transmitted e-mail attachments and files transported with removable media are examples of file sharing outside of an access control mechanism. Within a system’s access control mechanism, it is common practice to make a file available to multiple users. A file is considered shared when it is accessible by more than one subject.

Assigning the appropriate permissions to a shared file is critical. A file with more than read access is subject to unintentional modification. Note that this is not to say that modification of the file is necessarily unauthorized. For instance, it is often necessary for people to make modifications to a file during collaboration activity. Modification of files used for collaboration is often expected. But this does not mean that every modification is intentional. Allowing modification of a collaboration file essentially authorizes changes to those who have access. Although this may not be an intended action, it is nonetheless authorized if not explicitly prohibited by policy. In contrast, it is almost never acceptable for ordinary users to modify binary files such as libraries and executables. Providing ordinary users with more than read access to a binary file is a risky proposition. Suppose a system policy prohibits users from modifying program files. In this case, it is reasonable to set permissions that prohibit a user from changing aspects of the file. Given the aforementioned policy, a system that does not restrict a user from making modifications to binary files is said to be noncompliant. Establishing the appropriate permission for shared files is an important consideration for any system.

Files such as word processing and spreadsheet documents are commonly shared during collaboration activities. It is not uncommon to find large directory structures with thousands of subdirectories and documents shared among users of certain groups. Typically, users are free to modify many of the documents within these directories. This is facilitated when permissions at the directory level are propagated to all its child objects. Each new object created in a directory inherits the permissions established for the parent. This situation is a disaster waiting to happen. Because users have read, write, and delete permissions in this scenario, the following situations may occur:

Images   User deletes files or whole directories by accident.

Images   Files or directories are accidentally moved.

Images   A user maliciously modifies, moves, or deletes files or directories.

Images   Malicious code is allowed to modify, move, or delete files or directories.

Each of these situations demonstrates an impact on information integrity and availability. Giving users the ability to modify file attributes, such as content and location, is not a best business practice. Users can make mistakes. Access controls on file shares should be established to prevent mistakes and malicious actions. Restricting users to only read access to resources is an effective method to control accidental or intentional modification to files used for collaboration (Price, 2007).
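One way to detect shares that drift away from read-only access is a periodic permission sweep, sketched below for POSIX file modes. The root path is an illustrative placeholder, and Windows ACL checks would require a different mechanism.

```python
# Hedged sketch: sweep a shared tree for group/world-writable files, which
# invite the accidental or malicious changes described above.
import os
import stat

def find_writable(root: str) -> None:
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            mode = os.stat(path).st_mode
            if mode & (stat.S_IWGRP | stat.S_IWOTH):
                print(f"writable by group/others: {path}")

find_writable("/srv/shared")  # illustrative share location
```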

Peer-to-Peer (P2P): An Unintended Backdoor

Programs with the capability to allow file sharing represent a potential backdoor into an organization’s system. A P2P program running in the context of the user has potential access to all the files that the user does. Normally, the program is configured to allow access to only specific file types or directories. When the program runs, it makes itself available to other systems outside of the protected network. Any flaw in the software or misconfiguration of the program parameters could allow

  1. Compromise of the workstation in the context of the user

  2. Access to sensitive information available to the user

  3. Propagation of malicious software such as spyware, Trojans, viruses, and worms

Although P2P software is an interesting collaboration technology, it is fraught with substantial risk. Users who are not security experts may make poor judgments regarding the installation and configuration of this type of software. Security architects must carefully consider access control implementations to guard against misuse or flaws arising from authorized use of P2P applications.

Database Access

Many organizations use a Database Management System (DBMS) as a centralized information repository. Databases are ideal tools used to manage and share structured data. Some of the functional attributes of a DBMS can be used as mechanisms to support access control. Functionality such as views, triggers, and stored procedures can be leveraged to enhance access control within a DBMS.

Images   Views - These virtual tables are named queries derived from one or more tables. Although a view is virtual, it can be queried like a normal database table. A common use of views is to provide read-only data to the end user, which prevents the user from changing the data in the originating table. A well-constructed view enables granular access control over the database tables. Rather than giving database users rights to a sensitive table, a view can be used to explicitly identify the attributes needed for their duties. As such, users can be restricted to interact with specific columns, rows, or elements based on the attributes of the view’s query.

Images   Triggers - Database events can be preceded or followed by a set of procedures. Triggers can be used to take action in response to database statements issued by the user or to interact with individual rows. This is useful to validate inputs from users or take actions based on the parameters or values in the SQL statement.

Images   Stored procedures - These subroutines enable performance of complex data manipulation tasks. Stored procedures enable programming languages, such as C or Java, to be used to conduct intricate actions on data that go beyond the capability of generic SQL. For instance, standard SQL returns an aggregate value based on a given query. Using stored procedures, it is possible to perform multiple unrelated queries and report the result individually or aggregately. Stored procedures are often used in conjunction with triggers to perform data validation or filtering actions.

Views, triggers, and stored procedures permit enforcement of business logic on data elements within a DBMS. These capabilities essentially allow the coding of business rules into database applications, which enables a level of access control that is more granular than the mechanism native to the DBMS. Using the power of these database extensions enables enhancements to security goals, such as separation of duties and least privilege, by restricting user access to only those data elements necessary for the performance of their assigned duties or tasks.
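The view technique can be demonstrated end to end with an embedded database, as in the sketch below. SQLite is used only to keep the example self-contained, and the table, columns, and data are invented; in a full DBMS, users would be granted SELECT on the view while holding no rights on the base table.

```python
# Hedged sketch: a view that exposes only non-sensitive columns of a table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (name TEXT, department TEXT, salary INTEGER);
    INSERT INTO employee VALUES ('Ann', 'Marketing', 90000),
                                ('Bob', 'Finance',   95000);
    -- The view omits the sensitive salary column entirely.
    CREATE VIEW employee_public AS SELECT name, department FROM employee;
""")
for row in conn.execute("SELECT * FROM employee_public"):
    print(row)  # ('Ann', 'Marketing') ... no salary exposed
```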

Access to information within a DBMS is enabled through a variety of authentication techniques. Users are identified individually or as a group. Permissions should be applied according to the method of access as well as business rules and the risk associated with the affected data. The various methods of access can be generally categorized as proxied anonymous, proxied access, direct access, and integrated authentication. Each category requires the security architect to consider the implications of the access type for potential data exposures that may occur.

The first category of access involves a logical grouping of all database requests. As seen in Figure 1.39, users are not individually identified, but rather grouped together. Individual accountability is difficult if no other authentication methods are used to associate connection events with actions in the database. In the figure, user requests for access to database items are proxied through a middle tier such as a Web server. This is a common scenario for public Web sites that provide search-related content in response to user requests. Instead of users possessing a particular account, the Web server uses database credentials to access data on behalf of the anonymous users. Given the lack of accountability in this scenario and the potential for abuse, it is best that database implementations such as these not include sensitive data in the repository. Segregating data in this regard will ensure that vulnerabilities associated with the database or Web server will not potentially expose sensitive information.

Images

Figure 1.39 - Proxy of anonymous users

In many cases, it is desirable to individually identify users. A front-end application is commonly used to retrieve and format data for user consumption. Users submit credentials, such as an identifier and authenticator, to access protected resources. Figure 1.40 illustrates this as proxied access. This three-tier application involves the client, a server layer, and the database tier. Perhaps the most common implementation of this architecture uses browsers at the client, Web servers in the middle, and a database at the back end. The browser provides end users with the capability to view and interact with the underlying data. The server commonly implements business logic or rules that provide an intermediate access control capability for the backend data. The database may implement additional business rules to further restrict data access.

Images

Figure 1.40 - Proxy of authenticated users

This multitier architecture design is a good way to enforce business rules and access control on elements within the database. Figure 1.41 shows one way this type of architecture could be implemented in an actual system. Note that all components are mutually joined through a central switch. Each network component is physically linked to the other. If the network switch does not have access control capabilities, each node is considered logically connected. Although this represents an efficient design for the architecture, it does afford some opportunities for abuse. For instance, if no network-based access controls are implemented, an insider or a compromised workstation would be able to launch attacks directly at the database server. Given this consideration, it becomes evident that the design lacks any defense in depth against these kinds of attacks. Furthermore, if the point of the design is to provide a layer of access control, then it seems reasonable that the implementation of the logical abstraction shown in Figure 1.41 should include physical and other logical access controls as well.

Images

Figure 1.41 - Typical network with a central switch

This situation can be improved through the use of network segregation. Figure 1.42 demonstrates one way to accomplish this. In this diagram, a router is inserted to control the flow of network connections. Access controls within the router only allow traffic to flow from workstations to the server and from the server to the database. This effectively mitigates the potential for attacks focused directly on the database tier.

Images

Figure 1.42 - Improving security with router access control capabilities

Accounts used to access data through a three-tier solution are not always those that are integrated with the system. This is a common situation when data access through this type of architecture is provided to external users. One way to authenticate users is to rely on business logic at the server layer. Identifiers and authenticators submitted from the client layer are evaluated at the server layer. Figure 1.43 provides a graphic of this concept. The server looks up the user identification and password either in a local file or from a database table. Users are granted access if their password matches an active account. Subsequent data requests from authenticated clients are frequently accessed through a common database account associated with the roles of the given user.

Images

Figure 1.43 - Database access through proxy controlled authentication

This approach provides a simple way of enabling a large number of users to access information within a database through a common account. There are some considerations when using this approach:

Images   Password hashes - Stored passwords will need to be hashed. It is best to implement logic that causes the password to be hashed on the client side (a hashing and verification sketch follows this list).

Images   Identity management - A capability should exist to match accounts with active system users. Detailed management aspects such as password history, complexity, and lifetime will also need to be handled along with account dormancy.

Images   Back-end database access - Access control on the network and within the back-end database should be implemented as added layers of protection.

Images   Protection of access control business logic - Any compromise of the software implementing the business rules may compromise all data within the database. The integrity of the business logic must be ensured.

Images   Auditing may be challenging - A separate audit mechanism may need to be developed to support accountability.
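One common way to store and verify hashed credentials is a salted, iterated hash, sketched below using the standard library. The iteration count is an illustrative work factor; the text’s preference for client-side hashing would place the hash_password step at the client layer.

```python
# Hedged sketch: salted PBKDF2 credential storage and constant-time checking.
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor

def hash_password(password: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)  # avoids timing leaks

salt, stored = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))  # True
print(verify("guess", salt, stored))                         # False
```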

A more integrated approach is to rely on the authentication mechanism inherent within the DBMS. Figure 1.44 illustrates the use of a server that brokers access to the database by acting as an intermediary between user requests and database output. Here, the server proxies requests, but is not the primary access control mechanism. Database access control mechanisms, which include identification and authentication, are the main line of defense for the data. Business logic built into the server has the potential to act as a secondary access control mechanism. This scenario has the following attributes:

Images

Figure 1.44 - Proxy-facilitated database authentication and session

Server:

-   Session management - The server matches a user session with a DBMS session. This allows clients to communicate with the database using foreign protocols.

-   Information presentation - The server provides query and other information in preformatted output for client consumption.

-   Business rule enforcement - Additional security measures or business logic can be applied by the server as an access control layer.

Database:

-   Authenticates users - Users connect to the database with accounts existing in the database.

-   Primary access control - Access control mechanisms within the database such as views, triggers, stored procedures, and other privileges are required to control data access.

As previously mentioned, the server in this situation is a secondary access control mechanism. This is because users may still be able to circumvent the server altogether and access the data. Using protocols known to the DBMS, such as Open Database Connectivity (ODBC), a user can access the data directly. In this case, access control is mostly dependent on the mechanism used within the DBMS. From a security perspective, ODBC has a significant drawback: all communications are conducted in cleartext, which means passwords used to log into the database as well as other sensitive data associated with a session could be captured on the network. The extent of this weakness depends on the underlying drivers of the protocol. Some database vendors provide confidentiality for the authentication session in some instances; however, in most cases, SQL commands using ODBC are sent in cleartext. The weakness can be reduced if the connection is physically or logically segregated from the rest of the network. Physical segregation can be effected through the use of dedicated hardware devices and interfaces. Logical segregation can be accomplished using a VPN, IPSec, or a VLAN, which either encrypts the traffic flowing over common links or passes traffic through links that are not in common. The goal is to prevent the cleartext messages from being captured by rogue or compromised devices configured to capture network traffic.

Three-tier Web-based applications are a frequently used architecture to provide controlled access to organizational data. Implementing this type of architecture has its benefits and drawbacks. A security architect will need to weigh these aspects when evaluating new designs and consider countermeasures and monitoring aspects for existing implementations.

Advantages:

-   Data presentation - A common application, the Web browser, provides universal access and presentation of the data. Deployments of proprietary applications are not needed.

-   Centralized access - This type of design makes it easier to connect users to multiple databases.

-   Communication protection - Some DBMSs do not have link encryption. A Web-based three-tier architecture can make use of Secure Sockets Layer (SSL) to protect sensitive communications.

Disadvantages:

-   Increased complexity - Organizations must carefully manage code developed to support the architecture. Business rules coded in server-side logic should be controlled by change management activities.

-   Cross-site scripting attacks - Poorly coded or poorly protected Web-based applications are subject to cross-site scripting attacks. This essentially allows an attacker to redirect user requests to other servers of ill repute.

-   Middle-tier security - Any breach in the server may compromise all data accessible through the server. Aggressive access controls and monitoring of the server are imperative to prevent and detect attacks.

-   Accountability may be difficult - Achieving the desired amount of accountability may require the development of an extensive access control mechanism. This may be fraught with challenges due to programming or implementation flaws. Although using accounts integrated within the database may be desirable, this is also not always practical and may not provide sufficient auditing.

Inherent Rights

Every account has a default set of rights assigned to it. The inherent rights are the core set of account attributes. Account types are necessary to constrain human and automated aspects according to a security policy. Assigning a user or process more rights than necessary increases risk. This occurs when a user or process acts maliciously or operates as a conduit for another threat agent. Managing assigned rights is an important aspect of a system’s overall risk management. The three most basic types of accounts are ordinary, administrator, and system:

Images   Ordinary - Normally, these accounts have very few rights in the system. Most users should be assigned this type of account. Where practical, automated processes and services supporting users should also be assigned an ordinary account. Although ordinary accounts are constrained, they could also violate security policies by exploiting a flaw in the system.

Images   Administrator - Only those individuals tasked with the maintenance of the system should be assigned this type of account. Administrators have substantial rights in the system; anyone with such rights could circumvent security policies.

Images   System - This account type supports system functionality. System accounts have enormous powers similar to those of an administrator. Threat agents exploiting weaknesses associated with an account with system privileges may be able to compromise all aspects of a system.

It is important to note that some system components, such as networking devices, commonly provide only one administrator account for management. In this situation, the account is typically shared among multiple individuals with system management responsibility. This can make accountability difficult. In cases like this, manual processes as well as physical and logical access control are needed to ensure appropriate device management. Secure such devices in a locked container or area. Implement processes that establish accountability for device access. This could be as simple as a key control log or as complex as an integrated facility access control system using two-factor authentication. In any case, where inherent rights cannot be separated and accountability is not automated, other processes will need to be implemented.

Granted Rights

Most accounts have the same rights granted to them even though they may have different resource permissions assigned. Accounts can ordinarily be grouped according to their type. Sometimes it is necessary to assign rights beyond those inherent for a given account by granting additional rights. This situation essentially creates an account with nonstandard rights. For instance, an ordinary user account may be granted additional rights so that it can be used as a system service. Altering the rights granted to a particular account can affect the overall security posture of the system. An attacker exploiting the properties of an account with nonstandard rights might be able to propagate an exploit more readily across a system. An account with nonstandard rights essentially establishes itself as a new type of account. The likelihood of a threat exploiting the new rights associated with the account should be properly assessed. Consider the following actions to manage the risk of creating accounts with nonstandard rights:

Images   Document - The rights of all accounts should be documented. When an account is given nonstandard rights, it is important that the reasons for the new rights be clearly enumerated. Risk associated with the changes should be documented as well. Documenting the reason and anticipated effects of the rights associated with the account will help future managers, administrators, and security personnel to understand the reasoning for the change and counter emerging threats when appropriate.

Images   Authorize - A change in account rights can pose new risks for the system. In this regard, management should authorize the change in writing. Ideally, alterations to account rights beyond what has been previously specified should be made using the system’s change control process.

Images   Monitor - An account with nonstandard rights may end up an ideal target for an attacker. Actions of the account should be tracked to identify inappropriate activity or actions outside the scope of its intended use. Audit logs are an excellent resource that should be regularly reviewed for inappropriate account activity.

Just as rights can be granted for a particular account, they can be withdrawn as well. Reductions in rights support the concept of least privilege. There can be enormous security benefits to removing excessive or unused rights. This essentially helps eliminate or reduce possible attack vectors available to an authorized account. The trade-off comes with management overhead associated with reduced rights. If a selected right is removed for a security reason but is later restored to the account, then a previously mitigated weakness may reemerge on the system. Proper management steps should be followed whether rights are granted or curtailed. Following the previously mentioned steps of documenting, authorization, and monitoring will help manage any associated risk.
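The documenting and monitoring steps can be partly automated by comparing the rights accounts currently hold against the documented baseline, as sketched below. The account names, right names, and baseline contents are invented placeholders.

```python
# Hedged sketch: flag accounts whose granted rights drift from a documented
# baseline. All names here are illustrative placeholders.
BASELINE = {
    "svc_backup": {"logon_as_service", "read_files"},
    "jsmith":     {"interactive_logon"},
}

def audit_rights(current: dict[str, set[str]]) -> None:
    for account, rights in current.items():
        extra = rights - BASELINE.get(account, set())
        if extra:
            print(f"{account}: undocumented rights {sorted(extra)} - "
                  "verify authorization and monitor usage")

audit_rights({"svc_backup": {"logon_as_service", "read_files", "debug"},
              "jsmith": {"interactive_logon"}})
```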

Change of Privilege Levels

Privileges are the combined rights and permissions allocated to interact with system resources. Altering object permissions and system rights for any given account has the potential to impact the overall security posture of the system. Changes in privileges should not be arbitrary and should only occur after consideration of the associated risk. Modification of permissions should include the involvement of resource managers or information owners. Similarly, altering account rights should only be done in coordination with the system owner or administrators. Inappropriate assignment of privileges can have undesirable consequences. When privileges are too lax, weaknesses may be introduced into the system that might expose sensitive information to abuse or compromise. Excessively rigid privileges can hamper operations, reduce system functionality, and frustrate users. A careful balance between usability and security is required to achieve the security goals of the system. Risk management is an important tool used by the security architect to strike the right balance.

Groups

Managing the security attributes for individual subjects and objects is an enormous undertaking. A system with hundreds of users and hundreds of thousands of objects is too difficult to manage individually. Thankfully, administrative tasks can be simplified when subjects and objects in common are manipulated simultaneously. Administrators make extensive use of groupings of subjects and objects to manage their vast numbers. Groups provide substantial advantages when managing large numbers of users and resources. This paradigm allows the convergence of management actions related to subjects and objects. The most common approach is to establish a group where membership is applied to particular objects or resources. Account membership in a group implies that different individuals have either a similar duty or need to access the same resource. Essentially, a group should be designed with the goal of accommodating accounts with

Images   Similar duties, but involving situations where they access different data. Here, a group is established to efficiently administer large numbers of accounts assigned permissions for a specific resource.

Images   Access to the same data, where each has different duties. This scenario is used to manage resource sharing among divergent types of users.

Images   Similar duties and data accesses. This type of group would essentially be the primary group used to manage a broad category of users such as Administrators or Ordinary users. However, more specific groups such as Accounting or Marketing could also be used.

Managing through the use of groups is indeed a double-edged sword. Although it provides substantial power to mitigate risk, when not properly managed it can cause other problems. Some issues facing a security architect when controls governing group management fail are the following:

Images   Orphaned groups - An excessive number of unused groups may appear on the system. Often, no one knows what the group was used for or if it should be retained. There is generally a reluctance to remove the group when no one is certain of the consequence.

Images   Duplicated groups - Often, multiple groups are created for the same purpose. They may have identical accounts and subgroups, or memberships so similar that several smaller groups collectively duplicate an existing group. Although particular members might be excluded from a given group, creating uniqueness, a lack of sufficient management reduces the advantages of the granularity provided.

Images   Separation of duty violations - It may occur that individuals included in the same group assigned to a given resource may ultimately violate separation of duties. Suppose a system administrator is included in a group of security officers managing audit logs. The administrator could potentially remove information in the logs associated with malicious activity. Adding members to groups without consideration of the use of the group can quickly result in unintended weaknesses.

Images   Failures in least privilege - Individuals might be given access to resources beyond the scope of their duties. Careless use of group assignments may result in the inclusion of individuals who should not have access to objects intended to be accessed by other members of the group. The haphazard or reckless addition of accounts may expose sensitive information to those who are not authorized or facilitate the introduction of malicious code such as a Trojan.

Administering group membership is an aspect of identity management. Processes should be implemented to control and monitor group assignments to prevent misuse or abuse. Some actions a security architect could take to establish management of group assignments include the following:

Images   Identify purpose - Record the reason for the group’s existence. Indicate if the purpose is to group accounts according to function, data attribute, or both. Knowing the purpose for the existence of the group will help determine if future assignment to the group is appropriate for a given user.

Images   Membership attributes - Specify what types of members should be included in the group. Indicate if account types exist that would violate separation of duties or least privilege when included in the group.

Images   Resource attributes - List the various target resources that are to be associated with the group. Resource objects could be particular files, directories, or services. Indicate the permission levels to be assigned according to the type of object the group will be associated with.

Images   Control changes - Managing group memberships through a change control process is ideal, but may be difficult because changes may occur frequently. It is better to establish localized controls for changes. Assign one individual to approve changes and another to monitor compliance with the approvals. Coordinate changes with other information and system owners when a change impacts their area of responsibility.

Images   Periodic review - Establish a time frame to reexamine all groups in the system. Determine if an application of a group fails to meet its purpose. Consider each member and assess if they still should be included as a member. Determine if the group has any inappropriate resource assignments (a simple review sketch follows this list).
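A periodic review can begin with mechanical checks such as the one sketched below, which flags empty (likely orphaned) groups and pairs of groups with identical membership (likely duplicates). The group data is invented for illustration; judgments about purpose and appropriateness still require a human reviewer.

```python
# Hedged sketch: mechanical first pass of a periodic group review.
groups = {
    "marketing":   {"ann", "bob"},
    "mktg_share":  {"ann", "bob"},  # same members as marketing
    "old_project": set(),           # no members at all
}

for name, members in groups.items():
    if not members:
        print(f"orphaned group, investigate before removal: {name}")

names = list(groups)
for i, first in enumerate(names):
    for second in names[i + 1:]:
        if groups[first] and groups[first] == groups[second]:
            print(f"possible duplicates: {first} and {second}")
```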

One way to think about the application of groups is through the relation of accounts with the activity of their owners. In this way, a group can be established according to the types of duties assigned to a user or a particular job function. Groups can be viewed from two perspectives: role based or task based.

Role Based

A role represents an organizational duty. It may be the totality of a job position or it could be a particular function within an organization. Managing account access control through the use of roles is an ideal way to assign rights and permissions. People can more readily identify with the concept of a role as opposed to a security group. When a role is aptly named, people are likely to anticipate what access and capabilities are associated with it. For example, one may correctly assume that the Advertising role is associated with an organization’s Marketing Department. We would further expect that this role would not have the same access as the Chief Financial Officer role. In this regard, access control based on roles may more likely appeal to those who are technical as well as those who are not.

Groups and roles are both a type of collection, but differ in their application. Groups are collections of users, while roles are collections of rights and permissions. In this regard, users are members of a group that can be assigned permissions for various resources. In contrast, a user assigned a role assumes a set of rights and permissions for designated resources. An RBAC implementation can be much more restrictive than those that rely on the use of groups.

An important aspect of an RBAC implementation is mutual exclusivity, which is the fundamental attribute used to establish separation of duties. Mutual exclusivity is a constraint in RBAC that specifies an incompatibility between roles; that is, two roles with mutual exclusivity have resource rights or permissions that must be kept apart to support separation of duties.

Assigning a single role to a user seems ideal, but it is difficult to do when using RBAC in a system or application with diverse types of resources. Limiting the number of roles in a system simply means more users must be assigned to the same role. This has the consequence of reducing the granularity of the access control implementation. Creating a role unique to each individual user seems like a good alternative. However, this has its problems as well. As the number of users and roles increases, it becomes more difficult to create new roles due to conflicts with existing roles. Furthermore, if mutual exclusivity is desired, then it can be difficult for users to even share data. Assigning a single role to a user in an RBAC implementation can work when the scope of the system or application is small. For instance, assigning a role to each user of a mobile or network device is reasonable. The limited functionality of the device and information that would be shared between users means there are fewer instances when sharing of rights or permissions is needed. Therefore, when mutual access to specific resources is uncommon, assignment of a single role to each user is less of a challenge. In cases where the number of rights and permissions is small, single roles may work well. However, assigning users a single role in a complex system with a diversity of data types will most likely result in a failed RBAC attempt.

Implementing an RBAC mechanism in a complex system will involve the creation and assignment of multiple roles. Rather than considering a role to match a particular individual or job function, we consider it to take on the attributes of the various activities associated with it. From this perspective, the organizational role of an individual is essentially the composition of many smaller roles. Suppose the organizational duty of a security officer is assigned the following roles:

  1. Conduct a security assessment

  2. Review audit logs

  3. Advise the CIO

In this example, rights and permissions on resources associated with these activities are assigned to each of the prescribed roles. An individual who is a security officer is allowed to use those roles within the system. This is not to say that a role contains other roles, but rather, that the activity of an individual comprises many smaller, more discrete roles. Reducing the activities of individuals to more discrete roles enables separation of duties while preserving the ability for mutual access. Considering the prior example, a system administrator and security officer would both be assigned the role to Review Audit Logs, but a system administrator would not be allowed to Conduct a Security Assessment or Advise the CIO. This exemplifies the need to define roles at a granular level.
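A toy model of this decomposition, with a mutual-exclusivity constraint enforced at assignment time, is sketched below. The role names follow the example above, while the permission strings and the conflicting pair are invented for illustration.

```python
# Hedged sketch: discrete roles with a mutual-exclusivity (separation of
# duties) constraint checked when roles are assigned.
ROLE_PERMS = {
    "review_audit_logs":     {"read:audit_log"},
    "conduct_assessment":    {"run:scanner", "read:config"},
    "advise_cio":            {"read:reports"},
    "system_administration": {"write:config", "manage:accounts"},
}
# Role pairs that must never be held together (illustrative constraint)
MUTUALLY_EXCLUSIVE = {frozenset({"system_administration", "conduct_assessment"})}

def assign_role(held: set[str], new_role: str) -> set[str]:
    for role in held:
        if frozenset({role, new_role}) in MUTUALLY_EXCLUSIVE:
            raise ValueError(f"{new_role} conflicts with {role}")
    return held | {new_role}

admin = assign_role({"review_audit_logs"}, "system_administration")  # allowed
try:
    assign_role(admin, "conduct_assessment")  # violates the constraint
except ValueError as err:
    print(f"blocked by separation of duties: {err}")
```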

An RBAC implementation with a rich diversity of roles should provide sufficient granularity of resource usage. However, an important consideration is the number of roles a subject possesses at any one time. This raises a couple of questions:

  1. Should a subject be allowed to hold multiple roles or only one role at a time?

  2. What is the impact of access control layering?

Some RBAC implementations only allow one role at a time, while others permit more. The security architect must decide if one role at a time is sufficient from a usability perspective. If it is too difficult to use, then multiple roles will be needed. However, the assignment of multiple roles must consider the possibility of violating separation of duties. A user concurrently possessing multiple roles might allow a separation of duty violation, whereas using one role at a time would not. If the underlying RBAC mechanism supports the ability to specify mutual exclusivity constraints for conflicting roles, then it may be possible to securely use multiple roles concurrently.

A well-designed RBAC implementation with highly granular role assignments and constraints may be made inconsequential when combined with other access control systems. When the access control systems of diverse system components are of different types or do not communicate, the incompatibility may reduce the overall effectiveness of the design. This is a concern when products with an access control capability do not integrate well with those of other applications. This situation is even more problematic when the access control mechanisms are of different types. For instance, an RBAC instance within a DBMS may have ideal role assignments and constraints. Users connecting to the DBMS do so from an operating system implementing DAC. Further, suppose that two particular users are assigned mutually exclusive roles to satisfy separation of duties. A problem may arise when one user copies information from the database into an operating system directory accessible by the other. Although this action may be prohibited by policy, it demonstrates that the system is incapable of enforcing the policy of separation of duties due to the incompatibility of the access control mechanisms. An architect must consider the effect of layering access controls and determine if a reduction in the desired control may occur. The ease with which a control may be bypassed is as important a consideration as the cost associated with its implementation and management.

Ultimately, these questions can be answered by examining organizational policy and goals for the system. The RBAC implementation should support system policy and goals. A valid access control approach should not violate the rules and desired implementation of the system.

RBAC, similar to DAC, is not exactly the same from one vendor to the next. Some vendor products claiming to support RBAC simply allow the designation of a group of rights and permissions as a role, while others make an effort to implement most of its desirable aspects. Ideally, an RBAC-enabled product will allow the allocation of multiple roles and the ability to specify mutual exclusivity. The decision to select an RBAC-enabled product should be driven by business requirements and its suitability for the intended purpose to enforce a security policy.

A true implementation of RBAC is predicated on a mechanism that enforces its attributes. However, this may not be practical or feasible for resource-constrained organizations using commodity systems that desire this type of access control. In these cases, groups could be used to mimic role-based access. Because the term groups generally implies the use of DAC, it is important to note that designing groups to be used as roles:

  1. Requires management discipline. Those creating groups and assigning membership will need to strictly adhere to group usage specifications.

  2. Will not enforce separation of duties. DAC does not have an inherent capability to enforce separation of duties. However, proper use of groups can come close to achieving the same effect without the assurance of enforcement.

User accounts can be administered on a role basis when sufficient planning, management, and monitoring are implemented. The following guidance provides some recommendations regarding the use of groups as roles; a brief sketch of automated permission monitoring follows the list:

Images   Create groups as if they were roles. The mantra of fully documenting all aspects is essential here. A detailed listing of the attributes and uses of each group as a role is required. Inadequate or inconsistent specification for the design and use of groups as roles will yield less than desirable results.

Images   Map the new roles (groups) to specific permissions and objects. Identify which objects in the systems should have permissions associated with the roles.

Images   Avoid assigning groups to groups. This can quickly degrade the role-based effort. Assigning one role (group) to another will substantially add to the complexity and tracking required and should be avoided.

Images   Refrain from assigning account permissions on objects. Maintaining role-based access means that individual accounts should be assigned permissions only through the use of roles. This may be problematic for service and system accounts, but in most cases should not be too difficult for accounts assigned to people.

Images   Issue users multiple accounts. This is necessary if varying levels of rights are needed. This does not mean a user must have an account for each role, but rather, the inclusion of a member in a “role” must not create a situation where an account can easily circumvent its intended use. In this regard, a solid identity management methodology increases in importance.

Images   Leverage system services. Consider the implications of Web-based services, portals, databases, and other automation techniques that could act as intermediaries between subjects and objects. Designing services as a way to proxy access to objects without actually interacting with them can be used as a way to enforce role-based access for critical resources.

Images   Limit object permissions. Avoid giving subjects too much control over existing objects. Providing excessive permissions will make it more difficult to prevent inappropriate assignment of rights with the matching of roles and permissions. Furthermore, curtail the use of multiple roles for any object. This will help simplify verification of access. However, this may require the creation of more roles or those with broader assignment.

Images   Monitor for inappropriate permissions. Because object owners have full control over who can access their objects, it will be necessary to periodically check the permissions on all objects. Object permissions should match what has been documented for each role. An object should have a limited number of roles assigned to it. This will help ease role management and streamline its implementation.

Images   Audit for misuse. Plan to use audit logs as a means to verify compliance. Look for events that indicate inappropriate assignment of rights or permissions. Use of rights capable of bypassing established permissions is another area worth monitoring.
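The monitoring and auditing recommendations above lend themselves to simple automation. The following is a minimal sketch of comparing a documented role specification against an export of actual group permissions to detect drift; the group names, objects, permissions, and data format are hypothetical examples, not features of any particular directory product.

# Sketch: detect drift between documented "groups as roles" and actual state.
# All group names, objects, and permissions below are hypothetical examples.

# Documented specification: each role (group) maps to its approved permissions.
ROLE_SPEC = {
    "Sales-Role": {("sales_share", "read"), ("sales_share", "write")},
    "Audit-Role": {("finance_share", "read")},
}

# Observed state, e.g., exported from the directory service or file servers.
OBSERVED = {
    "Sales-Role": {("sales_share", "read"), ("sales_share", "write"),
                   ("finance_share", "write")},  # drift: undocumented permission
    "Audit-Role": {("finance_share", "read")},
}

def audit_roles(spec, actual):
    """Report permissions that deviate from the documented role design."""
    for role, approved in spec.items():
        granted = actual.get(role, set())
        for extra in sorted(granted - approved):
            print(f"VIOLATION: {role} has undocumented permission {extra}")
        for missing in sorted(approved - granted):
            print(f"WARNING: {role} is missing documented permission {missing}")

audit_roles(ROLE_SPEC, OBSERVED)

Run periodically, a comparison of this kind supports the monitoring and audit recommendations without requiring the underlying DAC mechanism to enforce anything itself.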

Task Based

The output of an organization is a product, a service, or both. Either of these outputs is achieved by activity within the organization that transforms resources, such as raw materials or intellectual property, through effort exerted by its members. The activities associated with these resource conversions into outputs are called workflows. A task is a discrete activity in a workflow whereby people manipulate resources. A workflow may contain only one task or have multiple tasks. Some attributes of a task include time, sequence, and dependencies. A task may have to be conducted at specific times, though not necessarily in all cases. Most tasks will involve a series of steps or events that may need to be accomplished in a particular order. An individual may not be able to conduct a task unless other tasks in the workflow were previously accomplished. For example, an editor cannot edit a document until the author has completed the drafting task. In this case, editing and drafting are tasks in the workflow of creating a document. Given the attributes of a task, an access control method supporting workflows is possible. These attributes can be specified as rules governing those aspects of the task. In this way, the rules can specify a subject, object, and permission in conjunction with a particular attribute. Task-based access control (TBAC) attributes, illustrated in the sketch following this list, consider

Images   Time - This could be the amount of time allocated for a task as well as start and stop times. TBAC could specify a rule that requires a task to take no more than one hour, must be started between 6:00 AM and 8:00 AM, and is required to end by 9:00 AM. Days of a given week, month, or year could also be used as time parameters.

Images   Sequence - A task may have elements that need to be performed in a certain order. If not, the task may produce erroneous output. The task sequence attribute of TBAC could specify the order of events or elements required to complete the task. A sequence can be enforced within a task by restricting access to elements until the ordering is correct.

Images   Dependencies - Some tasks can only be performed after the work of a prior task in the workflow is completed. In this case, the task in question is dependent on the completion of the tasks before it. Similarly, subsequent tasks may also need to wait for the present task to complete. As in the previous example, a document editor cannot conduct the edit task until the author completes the drafting task: edit has a dependency on drafting. An obvious approach is to prevent the editor from accessing the draft before the author is finished. This may not work in all cases. Suppose the editor is required to periodically review the progress of the author. This may be a different task for the editor, but it is essentially part of the same overall workflow. By allocating permissions on objects according to their stage in a workflow, an editor could be given permission to read, but not write, an object in the workflow given the requisite dependencies.
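To make these attributes concrete, the following is a minimal sketch of evaluating TBAC rules for time and dependency (dependencies also enforce sequence here); the workflow, task names, and time windows are invented for illustration.

# Sketch: evaluating TBAC attributes (time, dependency) for a task request.
# The workflow structure, task names, and windows are hypothetical.
from datetime import datetime, time

WORKFLOW = {
    "draft": {"depends_on": [], "window": (time(6, 0), time(9, 0))},
    "edit":  {"depends_on": ["draft"], "window": (time(9, 0), time(17, 0))},
}
completed = {"draft"}  # tasks already finished in this workflow instance

def may_start(task, now=None):
    """Permit a task only if its time window and dependencies are satisfied."""
    rule = WORKFLOW[task]
    current = (now or datetime.now()).time()
    start, end = rule["window"]
    if not (start <= current <= end):
        return False  # outside the permitted time window
    # Dependency check: all prerequisite tasks must be complete.
    return all(dep in completed for dep in rule["depends_on"])

print(may_start("edit", datetime(2024, 1, 8, 10, 30)))  # True: draft done, in window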

The concept of TBAC is still an emerging topic.16 Presently, there are no accepted standards or definitions of what TBAC entails. However, this does not detract from the usefulness of implementing access control according to the attributes of a task in a workflow. Indeed, many organizations already implement types of access control in workflows. A number of document collaboration suites implement workflows and make use of TBAC enforcement attributes. A security architect should consider methods of leveraging TBAC attributes to enforce access controls on organization-specific workflows.

At first glance, TBAC appears to be closely related to RBAC. This is somewhat true. The important difference arises in the fact that the attributes specified for TBAC are more granular than those for RBAC. For example, a time attribute for RBAC is nondeterministic. This is to say that there is no specification within RBAC that requires the assignment of a role to begin or end at a specific point in time. In contrast, TBAC can have an assigned start and stop time as well as duration. RBAC also does not specify a sequence of events within a workflow, but rather broadly defines an activity indicative of a role. Finally, RBAC does not consider dependencies in a task, whereas TBAC may prevent an individual from starting a task before all of the necessary elements are in place.

Implementing TBAC has many advantages for an organization. It can reduce confusion and errors by automating manual processes that may otherwise be chaotic. The attributes of TBAC can be expressed in any number of ways to support the goals of the organization. Business rules can be created for each attribute, which can yield efficiencies and reduce errors. Standardization of workflows is an important consideration when safety is a concern. For instance, establishing TBAC for vulnerability scanning tasks may be important to some organizations. Suppose a health care organization needs to scan its network for vulnerabilities, but must exclude medical monitoring devices attached to the network. Furthermore, the scanning must be done at nonpeak times. One part of the task may require a preliminary scan to detect nonmedical monitoring devices. The result is fed into the actual scan, which limits the bounds of the vulnerability scan. The actual vulnerability scan has a dependency on the preliminary scan. Implementing an automated process governed by TBAC to control the preliminary scan, subsequent vulnerability scan, and the timing of these events supports the organization’s safety goal and provides a higher level of assurance against mistakes.

Dual Control

Some organizational activities are so sensitive that it is absolutely critical that no single person be able to control an event or access the information. Access control requiring the cooperation of two individuals participating jointly in a process to access an object is called dual control. Within dual control, access is granted after two subjects with the proper permissions jointly agree and perform a rigid procedure to access the protected object. The implementation of dual control involves not only individuals, but a detailed protocol as well.

Dual control is commonly used as an assurance mechanism to prevent a catastrophic event. It is frequently used as a control mechanism for nuclear weapons, large financial transactions, and certificate authority management activities. In each of these cases, a rogue individual with the ability to commit a transaction could precipitate an event with significant consequences. Launching of nuclear weapons by a single individual is certainly chilling. Granting an individual the power to make billions of investment dollars vanish into secret accounts is a misguided reliance on trust. Providing a person with unfettered access to cryptographic keys or certificates used to support e-commerce or the identity of individuals is potentially devastating and irresponsible.

The premise of dual control is that inappropriate access to a sensitive resource will only happen with collusion. Dual control is a hard-core implementation of separation of duties. All individuals participating in the dual-control protocol should have sufficient separation to prevent the possibility of one person unduly influencing another in the process. To further reduce the opportunity for collusion, it is necessary to periodically rotate one or both of the individuals participating in the dual-control protocol. Job rotations are a common procedural control used to identify abuse of access or ongoing collusion. When dual control is implemented as an access control measure to protect against a serious event or loss, it must be accompanied by rotation of the individuals as well.

Usage of the term dual control is somewhat misleading. The term implies that access control requires only two individuals. In fact, a dual control protocol may involve many more participants. Other individuals may be responsible for witnessing the event while others might be assigned to monitor the activities. In some cases, the individuals participating in the dual-control process release the asset so that yet another individual may temporarily access it. One might argue that the involvement of more than two individuals clouds the definition. In a pure sense, that is not the case. Essentially, the protocol is truly dual control when two individuals are needed to unlock the asset. However, there must be at least one more individual participating in the protocol as an auditor. This individual is responsible for auditing the actions of the participants to ensure some level of accountability. The auditor would not have the authority to interact with the asset and would most likely not be allowed in the area when it is unlocked. Nevertheless, accountability is still essential to the integrity of the dual-control protocol and must be followed by someone on the periphery of the actions of those accessing the sensitive item.

Security requirements that point to the need for dual control must be accompanied by management commitment to provide the necessary resources. Even in the uncommon case where extensive resources are available, it is not ideal to design dual control with enormously complex countermeasures. Complexity can introduce more opportunities for weaknesses. The design of the dual-control implementation should be sufficient to meet the goals and objectives of protecting the sensitive resource. The security architect must bear in mind that elaborate designs are more difficult to implement and enforce. Therefore, the use and design of dual control should be concise, yet sufficient for its purpose. The following are some points to keep in mind when developing dual access control; a minimal sketch of the two-person rule follows the list:

Images   A rigid protocol - A successful implementation of dual control is highly dependent on the manual and automated processes supporting it. The interactions between people and machines should be designed to work in harmony. This can only be achieved when the processes used to protect the resource, control access, and monitor activity work together without error. All processes supporting protection, access, and monitoring should be well defined in a thoroughly documented protocol. The steps in the protocol should be clear and concise. It should not be possible to execute steps out of sequence without detection of the violation. All errors or violations encountered during the protocol should be met with responses to ensure the appropriate protection measures of the resource have not been compromised. In other words, the protocol should be self-checking and have defense-in-depth countermeasures integrated within.

Images   Layering of physical and logical controls - A variety of controls should be used in layers. Defense in depth is an important strategy for the controls implemented. The strategy should call for controls that continue to operate when another fails. There should be little or no dependency from one control layer to the next; that is, a failure in one layer will not make it easier to compromise the next. For example, IT systems are highly dependent on physical security controls. This dependency is offset with other controls, such as media encryption, that protect the system when physical security cannot be ensured. Consider these types of controls for use inside facilities when systems used to protect critical assets must have strong assurances even if a breach occurs at the physical layer.

Power Loss: The Universal Weakness

No matter how elaborate the redundancy and failover measures planned by an organization, they, like all modern systems, are vulnerable to a loss of electrical power. Organizations are heavily dependent on electrical grids for continuous power. Power disruptions occurring due to natural disaster are the most serious to deal with. Many sites plan for long-duration power outages by installing backup power generators. Most of these run on one fuel or another. However, during a natural disaster, fuel supplies may be disrupted. When the fuel dries up and the power goes off, the ability to monitor or control access to critical assets may be lost. Dual control may not even be possible to achieve in these instances. If access to a critical asset is necessary during such events, it is worth considering the use of manual methods to either supplement or replace electronic ones. Planning for the worst is imperative when the magnitude of harm from a loss of access control for a sensitive asset is significant.

Images   Fail secure - Ideally, the protocol and controls cannot be circumvented. Although it is not usually acceptable to rely on absolutes in the world of security, in the case of dual-control implementations, failing in a secure mode must be an important goal. Control failures are a reality. Nothing is perfect. Therefore, it is necessary to plan for imperfection. A failure in a control may indicate that an attack is under way and a breach is imminent. In this case, the protection of the asset is of paramount importance. The magnitude of harm that may occur due to its exposure should be met with a plan to prevent the catastrophic event. Fail-secure measures are alternative steps that further protect an asset from exposure. In the most critical cases, the asset is destroyed. A security architect implementing dual control should consider fail-secure measures that will deny unauthorized access to the resource when other controls fail.

Images   Resource intensive - It must be recognized and accepted that implementing dual control can be a costly endeavor. The involvement of multiple individuals, at a minimum, makes its usage inconvenient. Dual control is a form of access control that is best reserved for situations when the cost of a breach exceeds the cost of control by a large margin.

Images   Frequency of use - Use of the protocol should be kept to a minimum when possible. This will help keep adversaries who might be monitoring operations from analyzing the protocol and identifying weaknesses not previously considered. Ideally, the protocol will only be activated during special events that require access to the protected resource. Not only should use of the protocol be minimized, the timing of access should be varied as well. Vary the times and dates of initiating the protocol. This will help to further thwart adversarial activities. Unpredictability and randomness can increase the complexity of analysis by adversaries.

Images   Keep protocol details confidential - Centralize the entire documented protocol. Avoid distributing details of the entire protocol to its participants. Compartmentalize it to the greatest extent possible by providing each participant only the details necessary to execute their part.

Images   Auditing - Those assigned to audit or monitor activities should not be provided with the capability to manage access control layers or interact with the protected resource. Those responsible for monitoring will need to have full knowledge of the details of the protocol for those areas monitored. This provides the auditor with the ability to assess situations and make a determination if the protocol was violated. A protocol violation is an indication of an actual attack, human error, or control flaw. The auditor must have sufficient understanding of the protocol and controls to be capable of making this determination.

Images   Key management - To be certain, key management under dual control is not trivial. It is time consuming and should involve multiple participants to ensure integrity in the process. Nevertheless, keys should be periodically changed and must be controlled absolutely. Key management duties should be segregated outside of the scope of the individuals using the keys or managing any of the control layers.
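As a minimal illustration of the two-person rule with an independent auditor, consider the sketch below. The participant names, authority lists, and logging scheme are invented for illustration, and the sketch deliberately omits the physical and procedural layers a real dual-control protocol requires.

# Sketch: a two-person (dual control) unlock with a separate auditor.
# Identities, the authority sets, and the logging scheme are illustrative.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s AUDIT %(message)s")

AUTHORIZED = {"alice", "bob", "carol"}   # holders of unlock authority
AUDITORS = {"dave"}                      # may observe, never unlock

def unlock(holder_a, holder_b, auditor):
    """Grant access only when two distinct authorized holders act jointly."""
    if auditor not in AUDITORS or auditor in (holder_a, holder_b):
        raise PermissionError("independent auditor required")
    if holder_a == holder_b:
        raise PermissionError("two distinct individuals required")
    if not {holder_a, holder_b} <= AUTHORIZED:
        raise PermissionError("both participants must hold unlock authority")
    logging.info("asset unlocked by %s and %s, witnessed by %s",
                 holder_a, holder_b, auditor)
    return True

unlock("alice", "bob", "dave")

Note that the auditor is checked first and can never appear as an unlock participant, mirroring the separation between those who access the asset and those who account for the access.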

Location

The physical attributes of system components provide yet another opportunity for the application of access controls. A subject and a requested resource are often on different subnets, in different geographic locales, or on different devices. Access controls can be applied according to the location of the subject and the requested resource. Rule sets, policies, configurations, and network design can be implemented to dictate the extent of an access according to the origin and destination of the request. Locations have logical and physical attributes that can be used to enhance the granularity of other access control mechanisms.

Topology

Nodes in a distributed system are joined together through network topologies. The type of topology used in the network is influenced by the networking protocols and technology implemented. Regardless of the implementation, a common communication point will often be used that connects or routes network traffic between distant nodes. These junctures in the topology are the points in the network that are used to define locations. For instance, a router connecting the organizational network to the Internet defines internal and external locations. A wireless access point is another opportunity to define a location where trusted and untrusted traffic meet. The spans between the defining points are segments. The defining points that separate network subnets, geographical locales, or devices are where rules should be applied to establish location-based access controls.

Subnet

A subnet is used to logically segregate a network. In practice, a subnet is often associated with a particular grouping of users, devices, or areas within a network. The implementation of a subnet is commonly applied to a particular network segment or an entire local area network. This often results in a given location sharing the same subnet. In this regard, the logical address is frequently associated with a particular physical location. This is not exclusively the case, though. Organizations using a virtual LAN (VLAN) may mingle logical addresses with different subnets in the same network segment. Associating subnets with their locations can be leveraged to control access. For instance, suppose a policy stipulates that a Sales server can only be accessed from the Marketing group within the LAN. If each major group within the network is allocated its own subnet, then an internal router can be configured to deny connections to all but the Marketing group based on their subnet. Figure 1.45 presents a diagram of this situation. A centralized router is configured to block subnet addresses that are not from Marketing. Because Accounting and Engineering are in different subnets, the router prevents them from accessing the server. This demonstrates a method of implementing access control based on the logical location of a subject and the resource.

Images

Figure 1.45 - Using a router to establish access control
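The router rule in Figure 1.45 reduces to a subnet membership test. The following sketch expresses it with Python's standard ipaddress module; the subnet assignments and server address are hypothetical.

# Sketch: the router rule from Figure 1.45 expressed as a subnet check.
# Subnet assignments and the server address are hypothetical.
import ipaddress

MARKETING = ipaddress.ip_network("10.0.2.0/24")   # only subnet allowed through
SALES_SERVER = ipaddress.ip_address("10.0.1.10")

def permit(src, dst):
    """Mimic the router ACL: pass traffic to the Sales server only from Marketing."""
    if ipaddress.ip_address(dst) != SALES_SERVER:
        return True  # the rule applies only to the protected server
    return ipaddress.ip_address(src) in MARKETING

print(permit("10.0.2.15", "10.0.1.10"))  # True: Marketing host
print(permit("10.0.3.7", "10.0.1.10"))   # False: a non-Marketing subnet is denied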

Using subnets as a way to enforce access control is helpful, but not perfect. Abuse can still occur, and this type of control can be bypassed. Furthermore, usability can also be difficult in some situations given the structure of the organization. Some issues with this implementation include

Images   Stealing IP addresses - Insiders may be able to reassign the network address on their workstation. This would allow them to bypass the controls on the router unless subsequent routers are used to counteract this situation.

Images   DHCP - If all DHCP requests are handled by the same server, then this type of access control would be difficult to implement. Alternatively, a DHCP server could be dedicated to the protected area to assign the protected range. This would also require the use of a router to block DHCP requests from traversing into or out of the protected subnet.

Images   Logon from nonstandard location - Users allowed to log in from different locations may not be able to access the resource. This would be particularly difficult to accommodate when users must frequently use different workstations at various locations.

Images   Network sniffing - Sensitive traffic traversing network segments outside of the protected subnet range is vulnerable to capture. An insider might only need to record network traffic to obtain the required information rather than attacking the server itself.

Geographical Considerations

System nodes access resources from a given network segment. A node existing on a particular segment may access resources in the same segment or that of another. Each segment has a geographical location with respect to the organization’s system. Consider a scenario where workstations, servers, and printers are physically joined to the network through switches. Nodes in a given area are clustered around a particular department within the organization. The switches joining the nodes in one department may connect to more switches or other networking devices, such as routers, enabling connectivity throughout the organization. The segments joining nodes and network devices are often contained in plenum areas such as ceilings, walls, and under floors, concealing their exact route. Although the exact path may not be known, the security architect and system engineers should know the area within a building serviced by a given segment. Knowing the approximate location serviced by a segment enables the application of access controls based on the physical location of a node. In this way, groupings of segments from a given networking device can be associated with a particular physical location. This could be used to control the types of traffic flows into and out of this area, based on the activities associated with the duties of those in the department.

Associating segments with node types is important, if only from an inventory perspective. However, this is less useful from a security standpoint if the types of activities in a segment are not known. For instance, security monitoring personnel may be tasked to conduct periodic port scans from their segment to the rest of the network. Given their duties, this is normal activity. However, port scanning would not be normal activity if it originated from the Accounting department segment. When the physical locations of nodes are associated with normal user activities, security policies can be applied accordingly. In this example, policy can be enforced for the Accounting department segment to prevent port scans from propagating across the network. Likewise, an intrusion detection sensor placed within the segment could be tuned to specifically look for activity that is known to be malicious, depending on the types of users within the segment.

Applying restrictions based on physical location involves the use of logical access controls. The idea is to designate logical access controls based on physical locations rather than logical addressing. The approach simply requires the designation of access controls for a network segment and those behind it having the same nodes or subjects. Consider mapping a network using a tree diagram such as the one shown in Figure 1.46. Mapping a network in this manner can aid in the identification of areas to best apply physical restrictions.

Images

Figure 1.46 - Network mapping using a tree diagram

Logical addressing and subnets enable one to support physical restrictions while still permitting organizational communications. The routing node is configured to only pass traffic for a given subnet that is exclusively dedicated to a particular group of subjects. The routing access control list specifies subnets or addresses that are accessible from a segment. The critical point here is that different segments should be physically separated until they are connected to the node applying the logical access controls. This prevents an insider from spoofing an address in an alternate subnet and bypassing access controls based on physical location. The interface on the network node used to apply the access controls must be exclusively connected to the intended segment.

Establishing access controls for physical segments is complicated when users are not stationary. Users with roaming profiles or mobile users who are allowed to login from any segment in the network make it more difficult to establish behavior patterns. When the scope of network activity is too broad, then the application of access controls based on node location can be too restrictive for organizational purposes.

The idea of using access controls for network segments assumes that requests for resource access originate from a node within the segment. Unfortunately, this is not always the case. A node that has been compromised by malicious code, such as a bot or a Trojan, will access resources according to the commands given by someone outside of the network. In this regard, a compromised node extends the activity of an attacker within the context of an authorized subject on the network segment, which essentially bypasses the restrictions of access control based on physical location. Access controls on the network segment may be of little help if the activity of the malicious code is similar to that of an authorized user. However, many types of malicious code do launch subsequent attacks to further compromise the network. In these cases, access control based on physical location has its benefits, by restricting the propagation. If auditing is enabled for the access control device, the violations will be reported, which provides an opportunity to identify the offending node and eliminate the malicious code.

Device Type

Applying access controls based on the type of device connected to the network is a desirable goal. A variety of device types are regularly attached to networks. Some common types include wired networking equipment, wireless devices, servers, workstations, laptops, and smartphones. The granularity of identification of the device type is based on the security policies or goals of the organization. For instance, it may be desirable to distinguish workstations according to who normally uses them. In this case, a workstation device type might be further decomposed into subsets such as administrator, security, and ordinary user workstations. Enabling access controls according to device type is dependent on two important factors:

  1. Device recognition - Each device type must be recognizable in some way by the access control mechanism.

  2. Policy enforcement - Access control decisions are made according to the device type recognized. Devices that are not recognized should not be allowed to connect and pass traffic in the network.

There are generally three approaches to solving any problem in IT. These involve solutions that are ad hoc, standards based, and proprietary. With respect to the application of access controls for device types, an ad hoc approach involves the use of physical and logical addresses. The use of network-based access control is a standards-based approach worth exploring. Third-party software offers a plethora of proprietary techniques that can be used to solve perplexing problems.

Physical and Logical Addresses

Prior to connecting a device, its physical address is registered. In cases where the logical address is static, this address would be recorded as well. The rules regarding what the device is allowed to communicate with would be encoded into Layer 2 devices, Layer 3 devices, and monitoring devices. Layer 2 devices are configured to lock a port to a given physical address. This is applied for every network port in the organization. Ideally, static logical addresses are used to strongly associate a given port with a unique device. However, this would be difficult to administer in a large organization. When dynamic addresses are used, an assigned range for a given area of the network must be used. This means multiple DHCP servers will be necessary to control which addresses are available in a given segment. Layer 3 devices are used to enforce policies for an explicit device when the logical address is known, or for a range when applied to a particular device type. Network monitoring will be necessary to capture network communication activity. A log of network activity containing logical and physical addresses allows the security team to detect violations of policy and take actions when suspicious activity is detected. A minimal sketch of this registration check follows the lists of advantages and disadvantages below.

Advantages:

-   Easy to implement - This approach makes use of devices common to many organizational networks.

-   Cost-effective - The costs associated with the design and implementation are minimal. Some additional cost may be necessary to place a sufficient number of devices to control and monitor access. However, these need not necessarily be top-of-the-line models.

Disadvantages:

-   Manual registration - Each device must be manually registered. A centralized database of what is allowed must be maintained. Furthermore, activity on the network will need to be periodically compared to the database to detect violations. This could be automated, but that requires software development, which also may be costly.

-   Lack of scalability - The manual aspects of this approach will not scale well. Implementing this approach in an organization with a large network may be impractical.

-   Limited enforcement - A lack of integration with monitoring and enforcement mechanisms reduces the ability of this approach to fully enforce a policy. Unless the monitoring activity is automated and can create Layer 3 changes on the fly, the ability to fully support the policy is diminished.

-   Spoofing - Insiders could spoof their media access control and logical addresses and thereby circumvent the control altogether.
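Despite these limitations, the core of the ad hoc approach is easy to sketch. The registry contents, MAC addresses, switch-port names, and record format below are illustrative only, and, as the spoofing disadvantage notes, a forged address would evade this check.

# Sketch: ad hoc device registration keyed on physical (MAC) addresses.
# The registry contents and observed-traffic records are hypothetical.
REGISTRY = {
    "00:1a:2b:3c:4d:5e": {"type": "workstation", "port": "sw1/0/12"},
    "00:1a:2b:3c:4d:5f": {"type": "printer",     "port": "sw1/0/13"},
}

def check_frame(mac, port):
    """Flag traffic from unregistered devices or devices on the wrong port."""
    entry = REGISTRY.get(mac)
    if entry is None:
        return f"ALERT: unregistered device {mac} on {port}"
    if entry["port"] != port:
        return f"ALERT: {mac} registered to {entry['port']} but seen on {port}"
    return "ok"

print(check_frame("00:1a:2b:3c:4d:5e", "sw1/0/12"))  # ok
print(check_frame("aa:bb:cc:dd:ee:ff", "sw1/0/02"))  # unregistered device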

Network-Based Access Control

Another approach is to implement the 802.1X standard as a means of authenticating each device attaching to the network.17 As devices connect to the network, they are authenticated according to the certificate presented. A RADIUS server is used to support device authentication.

Advantages:

-   Standards based - This approach relies on an existing protocol specifically designed to address the problem of device authentication.

-   Policy enforcement - Only authenticated devices are allowed to connect to the network.

-   Auditing support - Connection attempts can be recorded to identify misuse and abuse.

Disadvantages:

Images   Not supported by all device types - Some device classes such as PDAs may not support 802.1X. Existing network equipment internal to the organization also may not be capable of using the standard. Upgrading equipment to fully support this approach could be quite costly.

Images   Manual registration - Each device will still need to be individually issued a certificate. Prior to deployment, it must be ensured that the correct device is issued the appropriate certificate. For many devices, this may require an administrator to manually load the certificate or visually determine that the target device is that which is intended.

Images   Certificate management - The standard makes use of certificates that uniquely identify a given device. This implies the need to develop and maintain PKI to support 802.1X.

Images   Authentication only - The policy enforcement is limited to device authentication. Applying permissions for access to other resources necessitates logical access controls via other devices such as Layer 2 and Layer 3 network equipment.

Third-Party Software

As the problem of identifying device types and applying granular access control persists, it is certain that vendors will develop proprietary solutions to mitigate it. The solutions could be as varied as the number of available products. It is difficult to determine the exact advantages and disadvantages of using third-party products because one product may be quite different from another. However, it is likely that some potential advantages and disadvantages will be common to products developed specifically to address the issue of allocating access controls based on device type.

Potential Advantages:

Images   Specialized - A given technology may be uniquely designed to handle a given problem or task. In this regard, a vendor tool might prove quite useful and successful in enforcing organizational policy regarding device types in a network.

Images   Technical support - Most products have some sort of support built into their purchasing agreement. The support provided may be sufficient to get the product deployed and staff adequately trained on its usage. Furthermore, vendor-supported products typically make updates available to address flaws or improve the native software.

Images   Policy enforcement - A tool that sufficiently supports the goals of the organization should support rule enforcement for a given device type.

Images   Automated deployment - Many products will have automated methods to deploy their tools to all intended targets. This can greatly simplify device management and expedite tool deployment.

Potential Disadvantages:

Images   Cost - Most tools may be too expensive for smaller organizations or departments with budgetary constraints. Reducing the scope of deployment to meet resource constraints may leave aspects of the policy unenforced.

Images   Imaginary functionality - The vendor marketing may be excellent, but the product could fail to live up to the hype. The vendor may overstate the actual capabilities of the product. The tool may not completely satisfy the intended security requirements specified by the acquiring organization.

Images   May not support all device types - A new product may not support all existing device types within an organization. Legacy operating systems and hardware may be outside the purview of the vendor’s product. Furthermore, not all devices may be supported equally. The advertised functionality may not apply to all device types supported.

The aforementioned techniques to achieve location-based access control have various strengths and weaknesses. No individual technique is likely to be suitable for a moderate-sized organization or one that is geographically distributed. Location-based access control is best achieved through a combination of techniques. The design and implementation of location-based access control involves the following factors:

Images   Join logical and physical - Applying access controls based on location requires the use of logical and physical attributes of nodes and networking equipment. Identify all physical and logical characteristics that can be leveraged to achieve the desired level of access control.

Images   Layer controls - Use multiple techniques to achieve defense in depth.

Images   Map and inventory the network - Use scanning tools to identify all nodes in the network and the logical segments where they exist. Access controls in the network may prevent deep scanning. In these cases, it may be necessary to conduct scans from multiple locations. Conduct an inventory of all devices connected to the network. At a minimum, the media access control address, device type, and physical location will need to be collected.

Images   Conduct traffic pattern analysis - Observe traffic patterns to assess normal activity. This information can be used to apply access controls to prohibit inappropriate connections from one area of the network to another. It can also constrain malicious code from propagating across the entire network.

Images   Know where segments exist physically - Conduct physical surveys to verify where the Ethernet cable connects to the networking device and in what area of a building it terminates. Some organizations may already have this information, but it is still advisable to verify their locations.

Images   Implement rules on networking equipment - Rely on networking equipment to enforce location-based rules. Ensure that the equipment is protected from logical tampering and located in physically controlled areas to prevent physical tampering. Trusting network nodes to enforce location-based access controls is less desirable unless it is integrated with network-based mechanisms such as 802.1X.

Images   Monitor compliance - Assess the location-based access control mechanisms by reviewing network device configurations and logs, and watching traffic connections.

Authentication

The basis of access control relies on the proper identification of a subject. This occurs when several elements are joined together in a process that validates a claimed identity. The necessary components of identity verification include

Images   Entity - A person or process claiming a particular identity.

Images   Identity - A unique designator for a given subject.

Images   Authentication factor - Proof of identity supplied by the entity. This element must be reproducible only by the entity.

Images   Authenticator - The mechanism to compare the identity and authentication factor against a database of authorized subjects.

Images   Database - A listing of identities and associated authentication factors. The database should be protected from tampering. Furthermore, the authentication factors should also be protected from disclosure.

The act of an entity proving its identity is known as authentication. It is an exchange and validation of information that allows the entity further access into an environment. During authentication, an entity presents an identity and an authentication factor as proof that it is who it claims to be. The identity of the entity is verified by the authenticator comparing the authentication factor against a database of all identities. A successful match between the identity and authentication factor with an entry in the database supports the claim, and access is authorized. Once an entity is authenticated, it is recognized as a subject by the access control mechanism and is allowed access based on previously granted permissions and rights in the environment.
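The exchange described above reduces to comparing a presented factor against a protected record. The following is a minimal sketch assuming a salted-hash storage scheme; the identity, password, and storage details are illustrative, and a production system would use a purpose-built password-hashing function such as PBKDF2 or bcrypt rather than a single SHA-256 pass.

# Sketch: an authenticator comparing an identity and factor to the database.
# The username, factor, and storage scheme below are hypothetical.
import hashlib, hmac, os

def derive(factor, salt):
    """Hash the factor with a per-identity salt so the secret is never stored."""
    return hashlib.sha256(salt + factor.encode()).hexdigest()

salt = os.urandom(16)
DATABASE = {"jsmith": (salt, derive("correct horse battery", salt))}

def authenticate(identity, factor):
    """Return True only when the factor matches the stored record."""
    record = DATABASE.get(identity)
    if record is None:
        return False  # unknown identity: the entity never becomes a subject
    stored_salt, stored_hash = record
    # Constant-time comparison resists timing side channels.
    return hmac.compare_digest(derive(factor, stored_salt), stored_hash)

print(authenticate("jsmith", "correct horse battery"))  # True: becomes a subject
print(authenticate("jsmith", "wrong guess"))             # False: remains an entity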

The Difference between an Entity and Subject

It is important to note that a distinction is made between an entity and a subject. An individual who is yet to be authenticated is referred to as an entity rather than a subject. This distinction is necessary because a subject represents someone or something with logical rights and permissions in a system. An entity has no logical rights before authentication. An entity graduates to a subject when successfully validated.

An authentication factor is essentially a key that opens a door into an environment. It is something tied to a specific identity and should only be reproducible by the entity. The three most common forms of authentication factors include something that is

  1. Known only by the entity. Passwords and passphrases are the most common.

  2. Held exclusively by the entity. An ATM card is an example of a token held by an owner.

  3. A physical attribute of the entity. Fingerprints are one type of physical attribute.

The most important property of an authentication factor is that it can be reproduced only by the entity. Only the entity should be able to present the correct authentication factor. It should not be trivial for an attacker to reproduce the authentication factor and masquerade as the intended subject. Three qualities emerge that can be used to evaluate the suitability of an authentication factor in a given authentication mechanism. An authentication factor meeting at least one of these qualities will provide sufficient confidence that an attacker will not be able to easily masquerade as the intended subject. These three qualities are as follows, with a worked example of the third following the list:

Images   It is known only to the entity - No other individuals have cleartext access to the secret. An attacker must therefore capture the secret or persuade the entity to disclose it, rather than guessing it or learning it from others.

Images   Reproduction of it is infeasible - It should be nearly impossible to create an illegitimate copy. Attributes of the authentication factor should be sufficiently complex or difficult to reproduce such that attempts to do so will have an extremely low likelihood of success.

Images   It is computationally impractical to replicate - Mathematical shortcuts to guess or reproduce the authentication factor should not exist. Brute force attacks should be met with a sufficiently large search space and an extremely small probability of finding the authentication factor over a long period of time.
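The third quality can be quantified. As a rough worked example, the sketch below computes the brute-force search space for a randomly chosen password; the character-set size, length, and guess rate are assumptions chosen for illustration, not benchmarks.

# Sketch: brute-force search space as a measure of computational impracticality.
# The character-set size, length, and guess rate are assumed values.
import math

charset = 94                 # printable ASCII characters
length = 12                  # password length
space = charset ** length    # size of the search space
guesses_per_second = 1e10    # assumed attacker capability

years = space / guesses_per_second / (60 * 60 * 24 * 365)
print(f"search space: 2^{math.log2(space):.1f} "
      f"(~{years:.1e} years at {guesses_per_second:.0e} guesses/s)")

Under these assumptions the expected search time is on the order of a million years, which is the "sufficiently large search space" the quality demands; a short or predictable password collapses that space dramatically.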

Sometimes, authentication factors are not used appropriately. On occasion, some authentication mechanisms use the authentication factor as a form of identity as well. The problem with this approach is that a compromised authentication factor may make it difficult to uniquely identify the subject in subsequent environments. This is not an advisable practice and should be avoided when possible.

Strengths and Weaknesses of Authentication Tools

Passwords have long been an authentication factor of choice for many systems. They are cheap and easy to deploy. Generally, users find a password easy to remember and use with a system. However, passwords are not necessarily the best authentication factor. Users tend to choose weak passwords that can be guessed in relatively short periods of time. To counter this situation, organizations require passwords to be sufficiently complex that they are not easily guessed. At this point, users begin to rebel. Passwords are now more difficult. Indeed, they can become so complex that users write them down to remember them. Fortunately, advances in technology have made other authentication factors available for use. Two types of authentication factors that have found their way into organizations are tokens and biometrics. Although some of these have strengths that may surpass fixed passwords, they each have their own unique issues that should be considered prior to rolling a new authentication system into an organization.

Token-Based Authentication Tools

A variety of devices are available that fall into the category of something held exclusively by the entity. These tokens rely on different types of technology to supply the authentication factor attribute.

Badges

These devices contain organization-specific designs, logos, and occasionally a picture of the authorized holder. Badges are most often used in conjunction with facility access control. Individuals posted at facility entry control points check the badge to ensure that it appears authentic, has not expired, and is properly matched with the holder when an image is present. Organizational staff should be trained to recognize badges to determine if an individual within the facility is authorized for facility access, a guest, or perhaps someone who inappropriately gained access. Plain badges are not used to interface with systems.

Strengths:

Images   Low cost - Badge blanks and the equipment used to print them are affordable.

Images   Visually recognizable - A unique design is recognizable by organizational members.

Weaknesses:

Images   Easily spoofed - The combined strengths are also a weakness. Attackers may be able to duplicate an organizational badge, and the forgery cannot be easily detected.

Images   Relied upon as identification - A badge lacking a feature or attribute permitting someone to verify its authenticity should not be used as an exclusive form of identification.

Considerations:

Images   Mandate use of member pictures - An individual’s image ties the badge to the holder. Organizational members can at least determine that a badge is worn by the correct individual. Pictures also help validation at facility entry control points.

Images   Manage badges like a credential - Badges should be tracked and have a life cycle. Each badge should have a unique identifier tied to an individual. Those responsible for verifying identities at entry control points should be able to compare a badge number with an image of the holder. Periodically, change badge attributes to help visually detect frauds. Recover badges from those departing the organization.

Images   Integrate machine-readable features - Make the badge identifier machine readable using barcodes, magnetic strip, or proximity technologies.

Images   Use different designs according to holder - Use different colors and schemes to differentiate badge holders. Establish different badge holders by classes such as employees, contractors, visitors, maintenance crews, security personnel, escorts, and sensitive area access.

Magnetic Strip

This type of token normally has the same form factor as a credit card and has a magnetic tape stretching the length of the card. Magnetic strip cards are commonly used in facility access controls in a manner similar to that of door lock keys. In fact, the hotel industry makes extensive use of these tokens as room keys.

Strengths:

Images   Cards are low cost - Although the cards are relatively cheap, readers are moderately expensive.

Images   Easy to use - Most people have no problem using these cards.

Images   Lock mechanism easily rekeyed - Lock mechanisms are networked, so they can quickly be updated to reflect which magnetic strip cards can be used to gain entry.

Weaknesses:

Images   Can be copied - The magnetic strip is similar to the media used in audiocassette tapes. Readers can be easily procured to copy the magnetic information.

Images   Vulnerable to magnetic fields - Strong magnetic fields emanating from cell phones, cathode ray tube monitors and televisions, or permanent magnets can alter the information on the magnetic strip. Usually, the cards can be used after recoding.

Images   Easily abused - Magnetic strip cards are so easy to use that people often share them (whether voluntarily or involuntarily). Many retail uses and facility access control implementations fail to employ techniques to detect masquerading or inappropriate use.

Considerations:

Images   Use controls similar to those for badges - Considerations listed earlier for badges apply to magnetic strip cards, given that the uses and form factors are similar.

Images   Require two-factor authentication for sensitive areas - A magnetic strip card alone represents a form of identification, but lacks an immediate way to authenticate without human intervention. Using another authentication factor, such as a PIN, strengthens the implementation, allows accountability, and increases the overall level of assurance.

Images   Institute a challenge response process for card recoding - Uses of the card in situations where the user is not immediately known to the issuer should be coupled with controls and processes that track and validate magnetic strip recoding.

Proximity Cards

Many organizations employ proximity or “prox” cards for facility access control. Proximity cards generally have the same form factor as a badge and use wireless techniques operating at either 125 kHz or 13.56 MHz. They provide a means to electronically identify a cardholder. Modern proximity cards operate in the 13.56 MHz range and comply with ISO 14443. Proximity card readers are capable of detecting a card up to several inches away. These cards contain no batteries, but are able to send their information in the presence of an electromagnetic field that powers the card, enabling transmission.

Strengths:

Images   Double as ID badge - The form factor of a proximity card allows it to easily be used as an ID badge as well.

Images   Simple management - Access control systems implementing proximity cards are relatively easy to use. Card activation and deactivation is a simple process.

Images   Ease of use - Proximity card use is relatively intuitive. They can generally be used not only in office environments, but in areas that are exposed to the elements as well.

Weaknesses:

Images   Readers are costly - Although the cards themselves are relatively low cost, readers can be hundreds of dollars and more.

Images   Masquerading - A card that is stolen or borrowed can be used to gain unauthorized access to areas granted to the intended cardholder.

Images   Can be spoofed - Most cards are not capable of using encryption to prevent spoofing. Rogue readers with high output fields and strong sensitivity can be used to capture card identities as people pass by. This information can be passed to specially constructed devices that retransmit card information, allowing access to protected areas.

Considerations:

Images   Follow considerations given for magnetic strip cards - Many of the same issues applicable to magnetic strip cards apply to proximity cards as well.

Images   Periodically reaffirm access - Ensure that cardholders continue to require access and have not left the organization.

Images   Strategically deploy readers - Proximity cards and readers provide an efficient mechanism to implement access control and auditing. They are useful for controlling access at primary entry points and are preferable to physical keys for protecting sensitive areas.

Images   Protect controllers - Central device controllers used with proximity card readers and electronic locks/strikes should be protected from unauthorized access. Restrict access to the controller to only those individuals delegated the duty of managing the proximity cards.

Common Issues with Token Management

Access control using physical tokens is a basic means of identifying a subject. However, tokens cannot always be relied upon to authenticate a subject according to something that is possessed. Security architects must consider several common factors when designing an access control system that uses tokens.

Images   Loss of token - A lost token can deny an authorized subject access to a system or facility. A supply of spare tokens and the availability of support personnel should be planned for. Users are bound to discover their tokens are lost while on business trips or after hours. Staffing decisions to deal with these eventualities will need to be addressed and planned for.

Images   Token damage - An inoperative token may be apparent when it is presented to the security officer in multiple pieces. However, a malfunctioning token may not be physically evident and is bound to create some level of dissatisfaction and inconvenience for the subject attempting to use it; they will have difficulties accessing authorized resources or areas. Similar to the situation for lost tokens, the possibility that a token will become inoperative at the most inopportune time must be planned for. Further, the organization must have sufficient resources allocated to support end users.

Images   Proprietary systems - Tokens of the same type may not be able to interoperate with those of other vendors. This is especially true when tokens rely on proprietary protocols or form factors for proper operation. Locking into one vendor can prove expensive. Furthermore, vulnerabilities in one vendor product could affect an entire organization and be difficult and expensive to overcome.

Images   Human resource management - Tokens are issued to people who often move on to other opportunities. Those managing access control systems using tokens might not have been informed of the departure of the token owner. Timely revocation of the access provided by a token is a persistent problem in the real world. Security architects should seek ways to partner with human resource departments, payroll offices, and internal entities providing access to contractors and customers to identify when token holders depart the organization temporarily or permanently.

Images   Binding tokens to owners - Tokens without onboard microprocessors are not easily bound to the authorized individual. Most tokens are subject to theft and misuse. Unless the access control mechanism requires two-factor authentication, it is extremely difficult to ensure that a subject using a token is the authorized user. In this regard, tokens should be relied upon as a form of electronic identification, but not as a substitute for authentication.

Biometric Authentication Tools

The field of biometrics continues to grow and attract significant attention. Biometric authentication tools seek to uniquely identify an entity based on something they are. Regardless of the type of biometric targeted, a small amount of data is collected to form a template that describes the attributes of the individual measured with the biometric acquisition device.

Biometrics can be broadly categorized as either physical or behavioral. A physical biometric is a measurable attribute of an individual. It is something that is a tangible feature of a person. Fingerprints are an example of a physical biometric. In contrast, a behavioral biometric is something that is intangible about a person, but sufficiently unique that it can be used to identify an individual. For instance, the way each person uses a keyboard represents a biometric referred to as typing dynamics or keystroke dynamics.18

Users are included in a biometric system through an enrollment process. During this period, users present their biometric for measurement. This may involve several attempts. The acquisition device collects various data elements about the patterns that make up the individual’s biometric. The collected data points are referred to as minutiae, and are stored in a special template file. Multiple submissions during an enrollment are necessary to ensure that a sufficiently broad statistical representation of the biometric is collected. Minutiae templates can be as small as 9 bytes or up to several kilobytes, depending on the type of biometric collected and the algorithms used by the acquisition device.19

Performance Characteristics

When comparing various biometric options, it is important to consider the aspects and performance characteristics associated with the type of biometric:

Images   Accuracy - This is the most critical characteristic of a biometric. Accuracy is determined by the ability of a biometric acquisition device to correctly identify an enrolled individual and to reject those who are not enrolled. It is assessed statistically, by measuring the rates at which the system makes incorrect associations between a collected sample and the templates held in its database. Type 1 errors, called false negatives, occur when the biometric system incorrectly prevents an authorized individual from accessing the system. Type 2 errors, also known as false positives, occur when the biometric device incorrectly identifies an un-enrolled individual as legitimate. Plotting Type 1 and Type 2 error rates against the matching threshold reveals a crossover error rate (CER), the point at which the two rates are equal; the lower the CER, the more accurate the device. A small sketch of locating the CER follows this list.

Images   Enrollment time - This is an estimate of the amount of time needed to initially collect the necessary attributes that identify a person. The enrollment time is almost always longer than the verification time. Enrollment in some types of biometrics is challenging and requires multiple attempts. This adds to the base enrollment time for a given biometric.

Images   Response time - This is the average amount of time needed for an acquisition device to collect the biometric and for the system to return a response to the user. In some cases, acquisition alone can take a few seconds, and depending on the type of biometric used, the search for a closely matching template can add further time.

Images   Security - Spoofing is the primary attack technique used against a biometric. An acquisition device should include countermeasures that increase its resistance to spoofing or abuse. For example, fingerprint readers should measure heat and pulse to ensure a real person is interacting with the device.
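
The crossover error rate described under accuracy can be located by sweeping the matching threshold and finding the point where the two error rates meet. In the minimal Python sketch below, the match scores are invented purely for illustration:

```python
def error_rates(genuine_scores, impostor_scores, threshold):
    """Return (type1, type2) error rates at a given matching threshold.

    Type 1 (false negative): a genuine score falls below the threshold.
    Type 2 (false positive): an impostor score meets or exceeds it.
    """
    type1 = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    type2 = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return type1, type2

# Invented similarity scores: higher means a closer match to the stored template.
genuine = [0.91, 0.60, 0.95, 0.79, 0.85, 0.92, 0.87, 0.90]
impostor = [0.35, 0.52, 0.86, 0.44, 0.58, 0.71, 0.49, 0.66]

# Sweep thresholds from 0.00 to 1.00 and keep the point where the rates meet.
gap, threshold, t1, t2 = min(
    (abs(a - b), th, a, b)
    for th in (i / 100 for i in range(101))
    for a, b in [error_rates(genuine, impostor, th)]
)
print(f"CER near threshold {threshold:.2f}: Type 1 = {t1:.3f}, Type 2 = {t2:.3f}")
```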

Implementation Considerations

Other factors affecting the use of biometric tools should also be included in any review by the security architect:

Images   Cost - Biometric devices tend to be expensive, and some types and technologies are substantially more expensive than others. For instance, fingerprint readers perform well and are relatively low cost compared to retina scanners.

Images   Acceptance - The user community should not be largely resistant to the use of the biometric. Individuals commonly have health and privacy concerns regarding biometric devices. Their concerns should be tempered with an appropriate amount of education on the benefits and any perceived risk of using a particular biometric acquisition device.

Images   Storage - Biometric data storage should be thought out well in advance. Will the templates be stored on a central server or distributed? Perhaps smartcards will be used instead. Wherever the biometric template is stored, there must be sufficient controls in place to protect the data from unauthorized changes or exposure; access controls over these authentication factors become critical. A minimal sketch of one protective measure follows this list.

Images   Changes - People change due to behaviors, lifestyles, aging, injuries, and other medical conditions. These changes can affect the ability of the biometric system to accurately identify an individual.
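
As one illustrative protective measure for stored templates, assuming the third-party Python cryptography package is available, a template could be encrypted before it is written to a central server or smartcard, with the key held separately in a key vault or hardware security module:

```python
# Minimal sketch using the third-party "cryptography" package
# (pip install cryptography); the template bytes are invented.
from cryptography.fernet import Fernet

key = Fernet.generate_key()         # in practice, held in a key vault or HSM
cipher = Fernet(key)

template = b"\x09\x41\x7f\x22"      # serialized minutiae template (made up)
stored = cipher.encrypt(template)   # what actually lands on the server or card

# Only a holder of the key can recover the template for matching.
assert cipher.decrypt(stored) == template
```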

Fingerprints

This type of biometric is the most well-known technical method of establishing an individual’s identity. The uniqueness of a fingerprint is due to the differences in the ridges and furrows of a print. The characteristic differences commonly used as minutiae about a print include bifurcations, ridge endings, enclosures, and ridge dots. Other minutiae include pores, crossovers, and deltas, which are also used to further emphasize the uniqueness of an individual print.20

Fingerprint readers generally perform well, are low cost, and are widely accepted by the public. Enrollment is quick, the results are fairly accurate, and minutiae files are relatively small. Collecting a sample from a subject takes only a few seconds.

Although fingerprint recognition is one of the leading types of biometric authentication, it is not without its challenges. Minutiae templates are not entirely standardized, so vendor products do not always interoperate; widely deploying fingerprint biometrics may therefore lock an organization into a particular vendor, which is problematic if the vendor goes out of business and can force a massive reenrollment of all users. Fingerprints can also be faked: because prints can be captured from the environment and spoofed, their use as a sole trusted authenticator is risky. Finally, the fingerprints of some individuals are difficult to capture during enrollment due to aging or exposure to abrasive activity.

Hand Geometry

The physical attributes of an individual’s hand are sufficiently unique to be usable as a biometric. Considering the finger length, width, and thickness combined with surface area of a hand, it is possible to create rather small template files enabling unique identification. Hand geometry biometrics appears to be quite accurate, with little to no resistance from users to data acquisition. However, cost, size of the device, and its proprietary nature inhibit wide adoption and deployment.

Iris

The patterns within the iris of each eye are quite unique. An iris has many attributes characterized as freckles, furrows, and rings that can be used as data points for a minutiae template. Eye color, the most obvious attribute, is not an element of iris detection technologies. The acquisition devices use black-and-white images to create the minutiae templates of the iris.

Iris-based technologies have proved themselves to be one of the best forms of biometrics (Chirillo and Blaul, 2003). They have very low error rates and are very accurate. These types of biometric devices have a moderate-to-low cost. It is likely that cost will continue to decrease as usage increases globally. Although these devices have good performance, there are issues. Users are still somewhat reluctant to participate in iris-based biometrics. It seems that people fear that the acquisition device could damage their eyes due to the use of infrared technology. Eye movement, proximity, and angle of the acquisition device, as well as lighting, affect the quality of the minutiae collected. These variations can hinder the enrollment process.

Retina

The vascular patterns at the back of an eye are used to form the retina template for this type of biometric. Retinal patterns are very consistent over time and are not easily subject to surreptitious collection, as fingerprints, faces, and voiceprints are. Retina templates have a higher minutiae data point count than most other biometrics, which substantially adds to their uniqueness.

Retina recognition systems are very accurate and very expensive. Spoofing a retina pattern is considered difficult. Aside from cost, the biggest drawback to retina-based biometrics is user acceptance. Enrollment and authentication with a retina recognition device require an individual to place the eye very close to the input device. Many users fear damage to their eye by the device or contracting an eye disease from a prior user. Eyeglasses and contact lenses also interfere with the proper operation of a retina detection device. Due to cost and acceptance considerations, retina-based biometrics should be used only when a high level of security is essential.

Facial Recognition

Although fingerprint biometrics is sometimes touted as the oldest form of biometrics, this is not the case. Early systems of identification using fingerprints, although technical, were not automated, and humans have relied on facial recognition as a definitive form of identification for far longer. In fact, facial recognition through photographs, such as those used on a driver’s license, has enjoyed a much broader deployment than fingerprints. Only recently has technology advanced to the point that facial recognition can be sufficiently automated.

There are a number of techniques that can be used for facial recognition. Some of these techniques rely on grayscale image comparisons, thermal scanning to collect heat or blood vessel patterns, and local feature analysis based on differences in defined regions of a face.

Facial recognition technologies have acceptable performance, are low cost, and are not generally resisted by users. However, they do have some issues. Lighting, hairstyles, subject aging, cosmetics, accessories such as glasses or piercings, expressions, and facial hair can affect the accuracy of the detection process. Furthermore, some facial recognition techniques can be fooled by presenting an image of the actual subject to the input device. Some techniques also fail to distinguish between identical twins.

Authentication Tool Considerations

There is an important attribute of authentication factors that is sometimes overlooked. The strength of an authentication factor lies in its ability to resist abuse; that is, a strong authentication factor is difficult for anyone other than the owner to reproduce. This is a major factor driving biometrics. Most people believe that something you are is superior to something you know or have, and this seems plausible. However, if something you are can be reproduced or captured, then there is the risk of abuse. If authentication factors are considered in the context of cryptography, a different paradigm emerges. In cryptography, the algorithm is important, but the most critical aspect is key management. The key is what protects the confidentiality and integrity of the information, and protecting it is essential to the effective use of cryptography. Like a cryptographic key, an authentication factor should be handled as a secret. No matter what authentication scheme or algorithm is used, the authentication factor must be protected from exposure. Given this line of thought, the use of authentication factors that can be publicly obtained undermines their strength. An authentication factor that is difficult to capture or reproduce is likely to resist attack, which increases confidence in the authentication of a subject.

As time progresses, new attacks against biometrics will emerge. Spoofing of biometric features, such as fake fingers or photographs of the victim, will become more frequent, and countermeasures built into biometric acquisition devices will be needed. Defense in depth involving people, processes, and technology will be necessary to help ensure biometric systems are not abused. It is interesting to ponder that biometrics has been hailed as an ideal replacement for weaker authentication factors such as fixed passwords. Time will tell if biometrics proves more resistant to abuse than passwords.

However, there exists a more disturbing problem that will challenge biometric deployments, one that every security architect must carefully consider before recommending the acquisition or deployment of any biometric device. A threat that plagues passwords will likely have an equivalent counterpart affecting biometrics. Keystroke loggers are a particularly nasty threat to passwords; running within a system, they can capture all manner of authentication activity conducted through a keyboard. Similarly, a Biometric Template Logger (BTL) could be used to capture minutiae attributes before they are sent over a network. A carefully placed BTL running as a device driver could intercept the biometric data before the resident application or service has an opportunity to encrypt it. An attacker could then replay this data at a future date and masquerade as a user when the biometric is the sole authentication method. An attack that steals a password is recoverable: remove the offending malware and create a new password. It is not necessarily this simple with biometrics. How do we recover from an attack that exposes biometric minutiae? A compromise of the database, or interception of the minutiae template at the collection point, would result in a permanent exposure of the biometric. It is not yet evident that reenrollment would be sufficient to exclude malicious use of the captured template. This disturbing problem should be taken into account in the design considerations and risk assessments associated with any biometric usage.

Fortunately, there exists an architectural design that can improve on the weaknesses of token- and biometrics-based authentication tools. Dual factor authentication is one technique that can frustrate an attacker’s ability to masquerade as, or steal the identity of, an authorized user. Where possible, and where the level of risk warrants it, a security architect should design dual factor methods into authentication schemes. This approach is an effective way to mitigate known weaknesses and the attack vectors likely to emerge against tokens and biometrics in the future.

Design Validation

Evaluating the design of a system is an important step to ensure that it meets all expectations. Design validation provides the security architect with the opportunity to ask questions and make determinations regarding the sufficiency of the implementation. Validation seeks to determine if the system is adequate, sufficient, and complete. Questions and testing presented by the security architect seek to identify inadequacies regarding any requirements, operations, functionality, and weaknesses in the implementation. A broad perspective of this approach seeks to answer questions in areas such as

Images   Requirements - Have all requirements been addressed?

Images   Operations - Are organizational needs met?

Images   Functionality - Does it work as desired?

Images   Weaknesses - Can it be circumvented?

Requirements specify the minimum attributes that must be supported by a system. With respect to access controls, requirements tend to be very broad and can have multiple interpretations. Requirements are derived from a variety of sources; the most common sources of security requirements are laws, regulations, industry standards, and organizational policies. An efficient way to determine if a system meets access control requirements is to list all applicable security requirements in a matrix. Cull all access control policy statements from applicable sources, and attempt to make each requirement as granular as possible. This may result in a single sentence from a policy statement spanning multiple requirement records in the matrix. Within the matrix, rows identify individual requirement records, and columns specify the record and requirement attributes. Column headings useful in this regard include the following; an illustrative record follows the list.

Images   Unique identifier - Create a unique numeric or alphanumeric designator to distinguish one requirement record from another. When discussing security issues, a unique identifier helps organizational members be clear on which requirement is affected.

Images   Sources - Requirements from different sources may be worded differently, but are essentially the same. Combine similar policy statements from multiple sources to eliminate duplication among matrix requirements.

Images   Requirement - Create a statement that best describes the target requirement. Be as granular as possible regarding what must be implemented from an access control perspective.

Images   Interpretation - Because policy statements tend to be broad enough to have multiple interpretations, it is helpful to provide additional information clarifying the requirement. Identify in the interpretation the breadth and depth necessary for the implementation. For instance, indicate which type of hardware, software, or operations must implement the requirement.
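
As a minimal sketch of this structure, a single requirement record with the columns above might be captured as follows; the identifier, sources, and wording are all hypothetical:

```python
import csv
import io

# One illustrative requirement record; keys mirror the column headings above.
matrix = [{
    "unique_id": "AC-001",
    "sources": "Organizational password policy 4.2 (hypothetical)",
    "requirement": "Interactive accounts must lock after repeated failed logons.",
    "interpretation": "Applies to all servers and workstations; the operating "
                      "system must enforce a lockout threshold of 5 attempts.",
}]

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=list(matrix[0]))
writer.writeheader()
writer.writerows(matrix)
print(out.getvalue())
```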

A review of access controls should determine if the implementation meets the mission and operations of the organization. Access controls that are not related to operations or are excessive may be wasteful. There may also be aspects of operations that are unique and not covered by existing requirements. In these cases, a gap in access control coverage may exist. The security architect must strive to have access controls that are sufficient, but not excessive.

The proper operation of implemented controls is a fundamental expectation. Individuals using the system will anticipate that the access control mechanism works without too much difficulty on their part. The two elements of functionality that a security architect should bear in mind when reviewing an access control mechanism are

Images   Operational - The access control must work as intended with the desired results.

Images   Usable - A difficult-to-use access control mechanism will ultimately prove ineffective.

A failure or misconfiguration may prevent an access control mechanism from functioning properly. Similarly, an access control mechanism that is too difficult to use is in some cases just as worrisome as one that is not functioning properly: system users may seek methods to circumvent difficult-to-use mechanisms, rendering them ineffective. It is therefore necessary to put access control mechanisms through a battery of tests, exercising the controls and evaluating their settings to confirm that they operate as intended, are properly configured, and are not too difficult to use.

A properly configured access control mechanism may meet requirements, operational expectations, and pass testing, and yet still have weaknesses. Flaws in software or hardware supporting the access control mechanism may allow unauthorized subjects or entities the ability to bypass a mechanism. Changes in the environment may also introduce weaknesses into the access control mechanism. Authorized software may inadvertently violate least privilege or separation of duties, nullifying existing access control implementations. This indirect weakness in access control nonetheless affects the mechanism. This illustrates the need to consider testing that extends beyond the mechanism itself. Other dependencies in the system can impact access controls and should also be reviewed for induced weaknesses.

Architecture Effectiveness Assurance

The overall goal of access control design validation is to ensure that the questions regarding requirements, operations, functionality, and weaknesses are not left unanswered. A common thread among these questions is the problem of something missing. Certain requirements may have been missed and not integrated into the system. Some operational aspects may have recently emerged that are not covered by the access control mechanism. A functional aspect of the access control mechanism may not be available to all system users. Insufficient access control coverage introduces a weakness into the system. One way to determine architecture effectiveness is to look for aspects of these questions that are missing from the access control implementation:

Images   Identify access control gaps - The effectiveness of access controls is impacted by gaps in their coverage. Access control gaps can occur for a variety of reasons. Incorrect configurations or missing software components can create areas in a system where access controls are missing. Recent installations may have inadvertently changed access controls, exposing sensitive resources to those who are not authorized. In some cases, a necessary control might be missing altogether. Periodic reviews of access controls should be conducted to identify gaps that may exist; a minimal sketch of such a review follows this list.

Images   Policy deficiencies - Insufficient access control coverage can occur due to problems with policies. Policies governing an organization may not adequately specify the requirements for access control. In this case, an organization may deploy ineffective access controls. This is a common problem for organizations that do not have access to seasoned information system security professionals. Another problem with policies involves their interpretation. A poorly worded policy that does not express the necessary access control depth and breadth may lead to weak implementations. In both of these cases, an incomplete policy can lead to poorly architected access controls.

Images   Look for obvious ways to circumvent controls - Security is imperfect. With this understanding, it is evident that no control will be 100% effective. Furthermore, some controls may be easier to bypass than others. Attackers will certainly look for ways to get around access controls in a system. The security architect should also search for obvious ways to bypass existing access control mechanisms. One prime example is unencrypted passwords sent over the network. A number of commonly used protocols such as Telnet, Open Database Connectivity (ODBC), and Post Office Protocol (POP) transmit passwords in the clear. An attacker sniffing the network could easily capture these passwords and masquerade as the authorized user, effectively bypassing the intended purpose of the access control mechanism. Adding point-to-point encryption is one way to overcome this type of weakness. IPSec could be used between clients and servers to protect the passwords and still enable the use of these protocols.
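
One way to conduct the periodic gap review mentioned in the first item is to diff the access grants actually observed against an approved baseline. The resources and groups in the sketch below are invented:

```python
# Approved baseline of access grants per resource versus what a periodic
# review actually observed; both mappings are invented for illustration.
baseline = {
    "/srv/payroll": {"payroll-admins"},
    "/srv/public": {"all-staff"},
}
observed = {
    "/srv/payroll": {"payroll-admins", "all-staff"},  # drift: overly broad access
    "/srv/public": {"all-staff"},
}

for resource, expected in baseline.items():
    actual = observed.get(resource, set())
    for group in sorted(actual - expected):
        print(f"GAP {resource}: unauthorized grant to {group}")
    for group in sorted(expected - actual):
        print(f"GAP {resource}: expected grant to {group} is missing")
```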

Weaknesses are likely to occur through errors, omissions, or flaws in the design. These issues can be anticipated through forward thinking in the design process. Two ways to plan for emergent problems are to create contingency countermeasures and to employ defense in depth:

Images   Identify countermeasure strategies for emergent vulnerabilities - Flaws in software continue to plague the IT world. Although the problem with buffer overflows has been known for quite some time, they still occur. It is best to anticipate that flaws in critical areas will be discovered. A security architect can help mitigate the eventual emergence of flaws by devising methods that limit exposure to a newly discovered vulnerability. Evaluate different techniques that could be used to block or slow the exploitation of a flaw, and identify ways to incorporate control layers in the various protocols and resources within the network to limit the scope of an exploited vulnerability.

Images   Use defense in depth to counteract weaknesses - Fully leverage people, policies, and technology to detect and defend against weaknesses in access controls. Design layers into access controls such that a failure in one layer will not compromise another layer. Implement redundant and backup controls to counteract failures. For example, a VLAN can be used to segregate user network traffic, limiting access to particular resources. Suppose a network has resources that should not be accessed by those who are not authorized. In a network with no traffic restrictions, shared resources are frequently protected with file-system-based access control lists. However, if the access control list is accidentally altered, allowing inappropriate read access to the protected resource, then a compromise may occur. A VLAN that excludes the protected resource from those not authorized represents an additional measure protecting the resource from an inadvertent compromise. This supplemental control acts as a defensive layer protecting network-accessible sensitive resources from failures in file system access controls. Integrating a defense in depth strategy can reduce compromises due to inadvertent or malicious changes.

Testing Strategies

The effectiveness and assurance of an access control mechanism is determined through testing. The goal of testing is to determine whether the access controls adequately support or enforce security policies. Testing is essentially a validation that the policies are integrated into the system through the application of countermeasures. Security requirements form the standards against which the access controls are compared. Compliance with a requirement supports assurance that the controls within the system are effective.

Supporting versus Enforcing Security Policies

In the world of information security, a “system” is the entire collection of equipment, software, facilities, people, and processes interacting with organizational information in the electronic environment. A system supports a security policy when, at a minimum, its components provide the ability to direct policy compliance and can be used to detect deviations. A security policy is supported if methods exist to guide user behaviors along activities meeting security requirements. For example, a “Rules of Behavior” guide directing users to create complex passwords supports policies underscoring the need for passwords that cannot be easily guessed. The guide is a tool, and training is the process by which users are informed of the policy and correct behavior. A system with automated mechanisms that disallow noncomplex passwords is said to enforce the security policy. From this perspective, policy enforcement occurs when the system prevents (or at least obstructs) users from failing to comply with established security requirements.
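
The distinction can be illustrated in a few lines. The sketch below enforces, rather than merely supports, a complexity policy by rejecting noncompliant passwords outright; the specific rule of twelve or more characters drawn from four character classes is an invented example:

```python
import re

# Invented rule: 12+ characters with lowercase, uppercase, digit, and symbol.
COMPLEXITY = re.compile(r"^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[^\w\s]).{12,}$")

def set_password(candidate: str) -> None:
    """Refuse any password that fails the complexity rule.

    A "Rules of Behavior" guide merely supports the policy; rejecting
    the value here is what enforces it.
    """
    if not COMPLEXITY.match(candidate):
        raise ValueError("password does not meet the complexity policy")
    # ... hand the accepted value off to the real credential store ...

set_password("Tr0ub4dor&Extra!")   # accepted
# set_password("password")        # raises ValueError: enforcement in action
```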

Testing Objectives

Testing seeks to identify the extent to which access controls support security requirements. Assurance of architectural effectiveness is predicated on the comprehensive testing of these objectives. Access control testing objectives in the following list help the security architect determine if the controls are

Images   Implemented correctly - System controls should support all security requirements. The implementation of the control should be in accordance with the appropriate depth and breadth to support the security policy of the system. Where applicable, access controls should enforce the security policy too. This objective is used to draw a direct link between policy and system. A correct implementation supports or enforces a security requirement as directed by policy.

Images   Operating as intended - The implementation of the control should meet the intent of the requirement and its design. Controls should be properly configured to support the intended operation as identified in the system design and relevant security requirements. Access controls should work as designed. This access control objective is used to assess the adequacy of the control’s functionality.

Images   Producing desired outcome - Determining the degree to which an access control supports operations, and whether a weakness exists, is the purpose of this objective. A control, alone or in combination with others, should satisfy the security requirements. Any gap within a control that is not covered by another control indicates that the control is not fully producing the desired outcome.

Testing Paradigms

Testing is conducted to evaluate the extent to which an access control meets its objectives. Different testing paradigms are useful in determining the effectiveness of the control, and each approaches the control from a different angle. No single paradigm should be relied on exclusively. A security architect is encouraged to include aspects of each paradigm in the overall testing strategy to ensure that the objectives are met. The different testing paradigms include

Images   Exercise controls - It is important to know if the control is working properly. This type of testing primarily verifies that the control works as intended, but it can also determine if the implementation is correct, with the appropriate output. A determination is made by running test cases and reviewing the results to ensure that the outcome is what was expected. Consider a test that determines if the access control enforces an account lockout policy for failed logon attempts. The test involves a successful login followed by subsequent logins with an incorrect authentication factor. When the number of intentionally unsuccessful logins equals the number specified in the policy, a review of the account properties is made. If the account is locked out, the test passes; if not, the control fails. This type of testing is especially important for in-house software, such as Web-based applications, which does not have the benefit of the broad testing that commercial off-the-shelf (COTS) products receive. A minimal script for the lockout test described here follows this list.

Images   Penetration testing - This testing paradigm seeks to discover if an access control can be bypassed or overcome. A control is bypassed when the penetration tester is able to circumvent the control by creating an alternate path to access the target, which may involve exploiting other flaws in the system. Another tactic of this paradigm is to crash through an existing control using brute force techniques that overcome the control; a successful brute force attack often indicates a flaw in the access control mechanism. The main thrust of penetration testing is to confirm that the outcome of the control is correct. However, if the testing is properly designed, the results can also be used to indirectly determine if the control is configured and operating correctly.

Images   Vulnerability assessment - Identifying flaws in a system is a critical activity. This type of testing is essential to locate known software and configuration weaknesses that could be exploited. Vulnerabilities in software packages are discovered on a regular basis, so the organization must regularly conduct vulnerability assessments to locate and correct flawed software. Some configurations in a system may also provide an attacker with avenues to compromise it. Vulnerability assessments can be used to identify weak or inappropriate settings in the system, such as open file shares or weak ACLs. This type of testing relies heavily on automated tools; manual methods can also identify vulnerabilities, but they are labor intensive.
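
The account lockout test described under exercising controls might be scripted as follows. The client object and its login and logout methods are stand-ins for whatever interface actually drives the system under test, such as an HTTP session, an LDAP bind, or a console harness:

```python
def test_lockout(client, username, good_pw, bad_pw, policy_threshold=5):
    """Exercise the lockout control described above.

    `client` is a stand-in for whatever interface drives the system under
    test; its login/logout methods are assumptions of this sketch.
    """
    assert client.login(username, good_pw), "precondition: account must work"
    client.logout()

    for _ in range(policy_threshold):
        assert not client.login(username, bad_pw), "expected a failed login"

    # After the threshold is reached, even the correct password must fail.
    assert not client.login(username, good_pw), "control FAILED: no lockout"
    print("PASS: account locked after", policy_threshold, "failed attempts")
```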

Regardless of the testing paradigm followed, there must be sufficient testing conducted to determine if each of the testing objectives for each security requirement is met. A security architect is most likely to use all of the paradigms at one point or another to test an access control mechanism. Just as layering of security controls provides defense in depth, conducting an assessment from different perspectives can enhance test quality and results. Testing should be conducted using written test procedures that collectively evaluate the test objectives for each security requirement. Using a battery of paradigms will normally yield a comprehensive view of the control under test.

Repeatability

A hallmark of a quality testing process is repeatability: the testing process provides the same or similar results when conducted by different individuals on a static system against a given standard. It is important to note that results may not always be identical; minor changes on the system may yield slightly different results. As long as the results fall within the parameters of the standard specified by the security requirement, there is sufficient assurance that the testing methodology is sound. Test results that are substantially different on a static system indicate either a non-repeatable testing process or a flaw in the system.

Methodology

A standardized methodology is needed to allow others to duplicate the access control testing. This can be accomplished by creating a process that allows the explicit specification of testing parameters while incorporating result tracking. Once again, a matrix approach is a useful way to organize this information. Suggested columns for a testing matrix include the following items; a sketch of one record follows the list:

Images   Test number - A unique identifier to distinguish one test step from another.

Images   Requirement number - Matches the unique identifier in a Requirements Matrix.

Images   Requirement - This should replicate the associated statement in the Requirements Matrix. It is the standard driving the expected result.

Images   Assessors - The name or identification and contact information of those involved in conducting the test.

Images   Test date - The time frame of the test. A particular test might take a minute or a month to complete.

Images   Test procedure - The explicit sequence of actions to be performed. It should be put together in such a way that it evaluates each of the test objectives. A detailed test procedure supports repeatability.

Images   Expected result - This is the outcome of a test given that the requirement is met.

Images   Actual result - The result of the test performed. For brevity, it is suggested that a short statement such as “Compliant” be used when no failures are noted. Report as much detail as appropriate for noted failures. Minimally report which aspect of the test failed, as well as the identification of the device, process, document, or interviewee associated with the failure.

Images   Corrective action - A detailed explanation of mitigating controls, corrections, or acceptance of risk for the failed test. An initial report of the failure should include a recommendation. The details of this item should reflect system management’s decision to implement the recommendation or some other course of action.

Images   Validating agents - The name or identification of the individuals confirming that the corrective action was successfully performed.

Images   Validation date - The final date when the validating agents confirm that the corrective action was in fact fully implemented or approved.
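
A minimal sketch of one such record, with field names mirroring the columns above; the requirement identifier, names, and dates continue the hypothetical AC-001 example used earlier:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestRecord:
    """One row of the testing matrix; fields mirror the columns above."""
    test_number: str
    requirement_number: str
    requirement: str
    assessors: str
    test_date: str
    test_procedure: str
    expected_result: str
    actual_result: Optional[str] = None
    corrective_action: Optional[str] = None
    validating_agents: Optional[str] = None
    validation_date: Optional[str] = None

# Illustrative record tied to the hypothetical AC-001 requirement used earlier.
record = TestRecord(
    test_number="T-014",
    requirement_number="AC-001",
    requirement="Interactive accounts must lock after repeated failed logons.",
    assessors="J. Doe <jdoe@example.org>",
    test_date="2013-02-11",
    test_procedure="Log in once; fail five logins; verify the account state.",
    expected_result="Account locked after five failures.",
)
record.actual_result = "Compliant"
print(record)
```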

Developing Test Procedures

Overall, testing determines if the system sufficiently supports the security policy. The vagueness of security policies can make this a difficult task. The intent of the policy can be lost when applied at a granular level. A truly focused test may lose sight of the purpose of the security requirement when the relationships between the elements of access control are not fully considered. The security architect should bear in mind the relationships between access control attributes when developing test procedures. Consider the following relationships when developing access control tests:

Images   Entities and authentication factors - Subjects are allowed access to the system or resources based on their ability to successfully authenticate. Accountability also relies on the correct association between an entity and subject via the authentication factor. Exposure of the authentication factor destroys accountability and threatens the other security goals as well. Access control testing should ensure that the link between an entity and authentication factor is resistant to compromise and tampering.

Images   Subjects and rights - All subjects in a system have been granted some rights. These rights should closely match the roles or duties assigned to the user or process. Access control testing should consider whether the rights associated with subjects of the test are appropriate or not. A weakness in an access control mechanism may allow subjects to directly or indirectly increase their rights in the system. Testing should determine if subject interactions with the system could result in the ability to increase rights or not.

Images   Objects and permissions - Determining if a given object has the correct permission is simple in execution, but challenging from an assessment perspective. This is particularly true for information repositories such as file systems. Top-level directories may have the correct ACL, but files deep within the structure may have the wrong permissions or be contained in an inappropriate location. Access control tests should consider objects and permissions, but in some cases, obtaining a valid assessment result with a high degree of confidence may be quite difficult. At a minimum, objects that are particularly critical, such as password files, should have their permissions checked; a minimal sketch of such a spot check follows.
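
As a minimal sketch of that spot check, assuming a POSIX system, the following flags critical files that are group- or world-writable, and a shadow file that is world-readable:

```python
import os
import stat

# Critical objects whose permissions should be checked at a minimum.
CRITICAL = ["/etc/passwd", "/etc/shadow"]

for path in CRITICAL:
    try:
        mode = os.stat(path).st_mode
    except FileNotFoundError:
        continue                      # not present on this platform
    if mode & (stat.S_IWGRP | stat.S_IWOTH):
        print(f"WEAK: {path} is group- or world-writable")
    if path.endswith("shadow") and mode & stat.S_IROTH:
        print(f"WEAK: {path} is world-readable")
```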

Risk-Based Considerations

An evaluation of the effectiveness of the access controls within a security architecture may reveal a variety of weaknesses. Some of the weaknesses may have been known prior to the testing, while others may not have been evident. In an ideal world, sufficient resources exist to implement effective countermeasures when weaknesses are discovered. Sadly, this is not usually the case, and often, an action must be taken that is less than ideal to address the problem. A security architect will frequently face this type of dilemma. Resource constraints, although problematic, need not be viewed as a crisis. Risk-based decisions are an effective way for managers to designate temporary or permanent alternatives to the problem. Achieving an optimal solution that keeps risk as low as is practical requires careful contemplation of the options. The following list provides some options that can be used to support risk-based decisions:

Images   Unconventional alternatives - A security requirement may be met through a variety of methods. Identifying options to support risk-based decisions may call for some creative thinking. Solutions might involve the use of multiple tools or integration with manual methods to achieve the security requirement. Existing controls might be sufficient to mitigate the risk. Think about the different ways other controls could be used to achieve the same effect. This may simply necessitate enabling a software option or modifying an existing process or procedure.

Images   Enumerate risk - Innovative options may incur new risk. Less than ideal solutions may be accompanied by new problems that were not previously considered. Any increase in risk associated with alternatives must be communicated to management for its consideration.

Images   Monitor weaker controls - Insufficient controls can be bolstered with monitoring. An existing control that does not fully satisfy the testing objectives for a given requirement is not necessarily flawed or useless. In some cases, monitoring combined with a weak control provides sufficient augmentation, allowing the requirement to be met. Even when this is not the case, a carefully implemented monitoring scheme can still provide the ability to detect misuse and abuse. Monitoring can include automated or manual processes to support the weak control; its main purpose is to identify compromises that a weak control cannot prevent. A minimal monitoring sketch follows this list.

Images   Cost sensitivity - Recommended control options for risk-based decisions should be practical from a resource perspective. The cost of a recommended tool or process should not exceed the value of the assets affected. When a tool is needed, compare costs and capabilities among various vendors. The most expensive tool might be ideal, but budget-conscious options may be suitable. It may be desirable to hire additional personnel to address the problem. This can also be an expensive alternative and is not always feasible. Automated mechanisms and alternatives must be leveraged to the greatest extent possible before requesting human resources.

Images   Manual processes - Most automated processes can also be accomplished manually. Although automated processes are normally more efficient, their cost may be out of reach for an organization. Manual techniques may be sufficient when automation is not cost-effective. The implementation of manual access control methods must be supplemented with explicit documentation and sufficient end user training.

Images   Open source solutions - The Internet is full of low-cost alternative tools. Open source tools, which are usually downloadable at no cost, are worth consideration, and some excellent ones can help augment or replace weak access controls. The downsides to these solutions involve support and trust. An open source tool may have no support and could conflict with other software in the organization’s environment; deployment and use of the tool may be complicated, and experimentation and learning may burden the existing staff. Trust in the integrity of the tool is another consideration. Many open source organizations try to ensure their code does not contain backdoors or malicious aspects. When practical, an internal source code review of an open source tool should be conducted prior to compilation and deployment; when such a review is not practical, the activity of the tool should be monitored after deployment.
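
As a minimal sketch of the monitoring described earlier in this list, the following scans log lines for repeated authentication failures; the log format and alert threshold are invented for illustration:

```python
import re

FAILURE = re.compile(r"FAILED LOGIN .* user=(\w+)")

def scan(log_lines, threshold=3):
    """Alert on repeated failures: monitoring that augments a weak control."""
    counts = {}
    for line in log_lines:
        match = FAILURE.search(line)
        if match:
            user = match.group(1)
            counts[user] = counts.get(user, 0) + 1
            if counts[user] == threshold:
                print(f"ALERT: {user} reached {threshold} failed logins")

scan([
    "FAILED LOGIN from 10.0.0.7 user=asmith",
    "FAILED LOGIN from 10.0.0.7 user=asmith",
    "FAILED LOGIN from 10.0.0.7 user=asmith",
])
```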

Summary   Images

The automation of access control techniques has evolved well beyond tight-fisted control of a USB Flash Drive, incorporating a rich variety of methods that support authentication, authorization, and accounting. The basic attributes of AAA can accompany a well-designed and well-implemented access control system. The essence of access control is to prevent information from falling into the wrong hands by controlling access to the data through techniques that provide accountability and verification of changes, or that prevent tampering with our most precious information files.

Possession of a USB Flash Drive seems to be the ultimate form of access control. In this case, the security policy is known and invoked by the owner of the drive. The physical handoff of the drive in a face-to-face exchange is a strong form of authentication and authorization, and subsequent retrieval of the drive enables verification that the data is intact. Modern access controls, by contrast, provide techniques that allow information to be shared on an unprecedented scale. Implementations of strong cryptography associated with authentication mechanisms and protocols seem sufficient to provide a high level of confidence that access to the information is secured. Yet, given all of the potential techniques and implementations available to achieve AAA, assurances remain tenuous. Insider threats, malware, and other abuses can circumvent the best access control mechanisms, so a loss of data confidentiality cannot be entirely prevented in most systems.

In this light, it may seem that controlling information on a USB Flash Drive is the best form of access control. It is not. The user borrowing the drive could just as easily make inappropriate copies of sensitive files, and Trojan horse applications on a shared machine could capture sensitive files. The problem has always been with us. Access control is helpful, but knowing where information flows is a highly relevant aspect of a system’s use that is insufficiently monitored. Indeed, access control can be harnessed by the tenacity and creativity of a security architect dedicated to the cause of defending organizational information against exposure. Wherever information resides, access controls must be used to control information flows.

Images   References

Aboba, B., Blunk, L., Vollbrecht, J., Carlson, J., and Levkowetz, H. (2004). RFC3748—Extensible authentication protocol (EAP). Retrieved from http://www.faqs.org/rfcs/rfc3748.html.

Bertino, E., and Sandhu, R. (2005). Database security—concepts, approaches, and challenges. IEEE Transactions on Dependable and Secure Computing, 2(1), 2–19.

Bishop, M. (2003). Computer Security: Art and Science. Boston, MA: Pearson Education.

Chirillo, J., and Blaul, S. (2003). Implementing Biometric Security. Indianapolis, IN: Wiley.

Finseth, C. (1993). RFC1492—An access control protocol, sometimes called TACACS. Retrieved from http://www.faqs.org/rfcs/rfc1492.html.

McCollum, C. J., Messing, J. R., and Notargiacomo, L. (1990). Beyond the Pale of MAC and DAC: Defining new forms of access control. Proceedings of the 1990 IEEE Symposium on Research in Security and Privacy, 190–200.

Melnikov, A., and Zeilenga, K. (2006). RFC4422—Simple authentication and security layer (SASL). Retrieved from http://www.faqs.org/rfcs/rfc4422.html.

Park, J., and Sandhu, R. (2004). The UCON_ABC usage control model. ACM Transactions on Information and System Security, 7(1), 128–174.

Price, S. M. (2007). Supporting resource-constrained collaboration environments. Computer, 40(6), 108, 106–107.

Popescu, B. C., Crispo, B., and Tanenbaum, A. S. (2004). Support for multi-level security policies in DRM architectures. Proceedings of the 2004 Workshop on New Security Paradigms, 3–9.

Relyea, Harold C. (2008), Security Classified and Controlled Information: History, Status, and Emerging Management Issues. Congressional Research Service Report for Congress, Order Code RL 33494.

Rigney, C., Willens, S., Rubens, A., and Simpson, W. (2000). RFC2865—Remote authentication dial in user service (RADIUS). Retrieved from http://www.faqs.org/rfcs/rfc2865.html.

Sermersheim, J. (2006). RFC4511—Lightweight directory access protocol (LDAP): The protocol. Retrieved from http://www.faqs.org/rfcs/rfc4511.html.

Stokes, E., Byrne, D., Blakley, B., and Behera, P. (2000). RFC2820—Access control requirements for LDAP. Retrieved from http://www.faqs.org/rfcs/rfc2820.html.

Zeilenga, K. (2006). RFC4510—Lightweight directory access protocol (LDAP): Technical specification road map. Retrieved from http://www.faqs.org/rfcs/rfc4510.html.

Images   Review Questions

1. Which of the following represents the type of access given to a user?

  1. Permissions

  2. Subjects

  3. Objects

  4. Rights

2. The most widely adopted access control method is

  1. Discretionary access control.

  2. Mandatory access control.

  3. Rule-based access control.

  4. Role-based access control.

3. No read up and no write down are properties of

  1. Discretionary access control.

  2. Mandatory access control.

  3. Rule-based access control.

  4. Role-based access control.

4. Access control for proprietary distributable content is best protected using

  1. Discretionary access control.

  2. Digital rights management.

  3. Distributed access control.

  4. Originator controlled.

5. When designing a system that uses least privilege, a security architect should focus on

  1. Business requirements.

  2. Organizational mission.

  3. Affected usability.

  4. Disaster recovery.

6. Separation of duties is BEST implemented using

  1. roles.

  2. permissions.

  3. rights.

  4. workflows.

7. Which of the following is the BEST supplemental control for weak separation of duties?

  1. Intrusion detection

  2. Biometrics

  3. Auditing

  4. Training

8. Centralized access control

  1. Is only implemented in network equipment.

  2. Implements authentication, authorization, and accounting.

  3. Is implemented closest to the resources it is designed to protect.

  4. Is designed to consider and accept business partner authentication tokens.

9. Firewalls typically employ

  1. Centralized access control.

  2. Decentralized access control.

  3. Federated access control.

  4. Role-based access control.

10. A feature that distinguishes decentralized from centralized access control is its

  1. audit logging.

  2. proxy capability.

  3. security kernel.

  4. shared database.

11. Federated access control

  1. is implemented with RADIUS.

  2. is designed to be mutually exclusive with single sign-on.

  3. is implemented closest to the resources it is designed to protect.

  4. is designed to consider and accept business partner authentication tokens.

12. The Lightweight Directory Access Protocol (LDAP) is specified in

  1. X.509

  2. X.500

  3. RFC 4510

  4. RFC 4422

13. This technique is commonly used to collect audit logs:

  1. Polling

  2. Triggers

  3. Workflows

  4. Aggregation

14. A word processing application, governed by Discretionary Access Control (DAC), executes in the security context of the

  1. end user.

  2. process itself.

  3. administrator.

  4. system kernel.

15. Peer-to-peer applications are problematic primarily because they

  1. are prohibited by policy.

  2. may be able to access all the user’s files.

  3. are a new technology that is difficult to evaluate.

  4. may be derived from untrustworthy open source projects.

16. Business rules can BEST be enforced within a database through the use of

  1. A proxy.

  2. redundancy.

  3. views.

  4. authentication.

17. A well-designed demilitarized zone (DMZ) prevents

  1. direct access to the DMZ from the protected network.

  2. access to assets within the DMZ to unauthenticated users.

  3. insiders on the protected network from conducting attacks.

  4. uncontrolled access to the protected network from the DMZ.

18. Dual control is primarily implemented to

  1. complement resource-constrained separation of duties.

  2. distribute trust using a rigid protocol.

  3. support internal workflows.

  4. supplement least privilege.

19. A well-designed security test

  1. requires penetration testing.

  2. is documented and repeatable.

  3. relies exclusively on automated tools.

  4. foregoes the need for analysis of the results.

 

1   See the following for information on:

  1. Tripwire Open Source for Linux based systems: http://www.tripwire.org/

  2. Tripwire Enterprise for Windows based systems: http://www.tripwire.com/it-security-software/

  3. Cimtrak: http://www.fileintegritymonitoring.com/cimtrak/security

  4. splunk: http://www.splunk.com/

2   The main reasons for ORCON classification codes include the following:

  1. Alert the holder that the item requires protection

  2. Advise the holder of the level of protection

  3. Show what is classified and what is not

  4. Show how long the information requires protection

  5. Give information about the origin of the classification

  6. Provide warnings about any special security requirements

3   For a historical review of the entire program of Information Classification by the United States Government, please see the following: http://www.fas.org/sgp/crs/secrecy/RL33494.pdf (Relyea, Harold C. (2008), Security Classified and Controlled Information: History, Status, and Emerging Management Issues. Congressional Research Service Report for Congress, Order Code RL 33494.)

4   UCON based access control systems are being proposed for a variety of areas such as social networks and cloud based computing systems as well. See the following for some examples of thinking in these areas:

  1. M.G. Jaatun, G. Zhao, and C. Rong (Eds.): CloudCom 2009, LNCS 5931, pp. 559–564, 2009. © Springer-Verlag Berlin Heidelberg 2009

  2. http://www.w3.org/2010/policy-ws/papers/17-Park-Sandhu-UTSA.pdf

5   See the following for more information on SELinux:

  1. SELinux overview:

    http://selinuxproject.org/page/FAQ

    http://www.nsa.gov/research/selinux/index.shtml

    http://www.nsa.gov/research/selinux/related.shtml

  2. Research overview of proposed implementation for a MAC system using SELinux in the UK:

    http://publib.boulder.ibm.com/infocenter/lnxinfo/v3r0m0/topic/liaax/SELinux_White_Paper.pdf

6   See the following for more information on Mandatory Integrity Control (MIC) in Microsoft operating systems:

http://www.symantec.com/avcenter/reference/Windows_Vista_Security_Model_Analysis.pdf

http://blogs.technet.com/b/steriley/archive/2006/07/21/442870.aspx

http://blogs.adobe.com/asset/2010/07/introducing-adobe-reader-protected-mode.html

7   See the following for more info on Electronic Health Records and MAC systems:

http://dig.csail.mit.edu/2012/WWW-DUMW/papers/dumw2012_submission_2.pdf

http://eprints.qut.edu.au/14080/1/14080.pdf

http://eprints.qut.edu.au/10894/1/10894.pdf

8   For a discussion of the concepts inherent in a secure system architecture, and the associated parts that make it possible, see the following: http://csrc.nist.gov/publications/history/ande72.pdf

9   See the complete RFC here: http://tools.ietf.org/html/rfc2904

10   See the IETF TACACS+ memo here: http://tools.ietf.org/html/draft-grant-tacacs-02

11   See the latest IETF listing of RADIUS attribute types here: http://www.ietf.org/assignments/radius-types/radius-types.xml

12   You can see an animated explanation of Federated ID management, as described by the UK Access Management Federation for UK higher (HE) and further (FE) education here: http://www.youtube.com/watch?feature=player_embedded&v=wBHiASr-pwk#!

13   eXtensible Access Control Markup Language (XACML) defines a declarative access control policy language implemented in XML and a processing model describing how to evaluate authorization requests according to the rules defined in policies.

XACML is primarily an Attribute Based Access Control system (ABAC), where attributes (bits of data) associated with a user or action or resource are inputs into the decision of whether a given user may access a given resource in a particular way. Role-based access control (RBAC) can also be implemented in XACML as a specialization of ABAC.

See the following for the Version 3 Standard Draft for XACML: http://docs.oasis-open.org/xacml/3.0/xacml-3.0-core-spec-cs-01-en.pdf

14   Security Assertion Markup Language 2.0 is a version of the SAML OASIS standard for exchanging authentication and authorization data between security domains. SAML 2.0 is an XML-based protocol that uses security tokens containing assertions to pass information about a principal (usually an end user) between a SAML authority, that is an identity provider, and a web service, that is a service provider. SAML 2.0 enables web-based authentication and authorization scenarios including single sign-on (SSO).

See the following for the Version 2 Standard Draft for SAML: https://www.oasis-open.org/committees/download.php/27819/sstc-saml-tech-overview-2.0-cd-02.pdf

15   Find a list of the most common Network Scanning tools and explanations about them here: http://sectools.org/

16   For an overview discussion of the conceptual framework of TBAC, please see the following: http://profsandhu.com/confrnc/ifip/i97tbac.pdf

17   For an overview of 802.1X standards support and implementation guidance on Microsoft networks see the following: http://technet.microsoft.com/en-us/network/bb545365.aspx

18   For background information on the mechanisms behind typing dynamics see the following:

Umphress, D., and Williams, G. (1985). Identity verification through keyboard characteristics. International Journal of Man-Machine Studies, 23, 263–273.

Monrose, F., and Rubin, A. (1997). Authentication via keystroke dynamics. Proceedings of the 4th ACM Conference on Computer and Communications Security, 48–56.

19   For background on Biometric system security and issues see the following: http://www.csee.wvu.edu/~ross/pubs/RossTemplateSecurity_EUSIPCO2005.pdf

20   For background on the different aspects of minutiae based matching see the following: http://www.griaulebiometrics.com/en-us/book/understanding-biometrics/types/matching/minutiae
