Chapter 1. Securing Windows Server 2003

Many challenges face IT administrators. One of today’s biggest tasks is securing the environment. Companies are more permissive about allowing partners to access data on their networks. At the same time, companies are stricter when it comes to securing that data and those communications. The challenge for the IT professional is to strike a balance between usability and security. Previously, Microsoft wasn’t much help in this arena. Early versions of Windows suffered from numerous security flaws that the industry was happy to advertise. With the huge number of Windows machines in use all over the world, Windows became the favorite target of hackers and griefers who knew their work would have the biggest impact if they attacked Windows.

Microsoft has made great strides to improve the security of its operating systems and applications. All software must pass rigorous tests to check for known flaws, buffer overrun susceptibility, and other potential security issues before it is released to consumers. Windows 2003 was built during the beginning of this security focus and reaped the benefits of Microsoft’s increased awareness of the need to produce secure software.

Improved Default Security in Windows 2003

To improve security in Windows Server 2003, Microsoft reduced the attack surface area of the operating system. This was done by

  • Creating stronger default policies for the file system Access Control Lists (ACLs)

  • Redesigning IIS

  • Providing a systemic way to configure a server based on predefined roles

  • Reducing the total number of services

  • Reducing the number of services running by default

  • Reducing the number of services running as system

More specifically, in Windows Server 2003, Microsoft disabled 19 services and modified several services to run under lower privileges. For example, installing Windows Server 2003 does not install IIS 6 by default. You must explicitly select and install it or choose Web Server as the system role via the Configure Your Server Wizard. When a server is upgraded to Windows Server 2003, IIS 6 will be disabled by default. If IIS 6 is installed, it will default to a locked-down state. After installation, IIS 6 will accept requests only for static files. It must be intentionally configured to serve dynamic content. All time-outs and settings are set to aggressive security defaults. IIS 6 can also be disabled using Windows Server 2003 group policies to prevent rogue administrators from opening unauthorized Web servers.

Windows 2003 has stronger default ACLs on the file system. This, in turn, results in stronger default ACLs on file shares. For example, the Everyone group has been removed from default ACLs.

Two new user accounts have been created to run services with lower privilege levels. This helps to prevent vulnerabilities in services from being exploited to take over systems. DNS Client and all IIS Worker Processes now run under the new Network Service account. Telnet now runs under the new Local Service account.

Right out of the box, Windows 2003 is built as a secure system. The system installs only the components it needs to operate rather than installing additional services by default. Windows 2003 defaults to settings that eliminate a large number of potential security holes by not supporting legacy operating systems that are known to be less than secure. During the installation of Windows 2003 the system will warn you that it will be unable to authenticate Windows 9x clients and Windows NT 4.0 clients prior to Service Pack 3. This is because Windows 2003 sets two specific settings in the Domain Controller Security Policy:

  • Microsoft network server: Digitally sign communications (always)—Enabled

  • Network security: LAN Manager Authentication level—Send NTLM response only

Although these settings can be altered to allow the legacy operating systems to authenticate, doing so is not recommended because it would reopen the security holes this policy is designed to close. Many administrators will remember the days when Web sites could issue LanMan (LM) requests of a host and the host would offer up the username and the LM hash of the password. The LM hash is produced by a very weak algorithm and can be broken quite rapidly via a brute-force attack. Although the hash itself is not reversible, the hashing algorithm is commonly known. By having a program generate a password and apply the algorithm to it, the result can be compared to the stolen hash to see if they match. If they do, the source password is known and the system is compromised. This is exceptionally fast if the password exists in a dictionary. Going beyond the scope of Windows 2003, it is a very good idea to disable the local storage of LM hashes on all systems in the network via a Group Policy Object (GPO). To define the group policy setting that limits the storage of the LM hash value, follow these steps:

  1. For the Group Policy object, choose Computer Configuration, Windows Settings, Security Settings, Local Policies, and then click Security Options.

  2. In the list of available policies, double-click Network Security: Do Not Store LAN Manager Hash Value on Next Password Change.

  3. Click Define This Policy Setting, choose Enabled, and then click OK.
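The brute-force comparison described earlier (generate a candidate password, apply the known algorithm, compare the result to the stolen hash) can be sketched as follows. This is a conceptual sketch only: SHA-256 stands in for the real LM algorithm, which DES-encrypts the fixed string "KGS!@#$%" with two 7-byte halves of the uppercased, padded password.

```python
import hashlib

def toy_hash(password):
    # Stand-in for the LM algorithm. Uppercasing mirrors LM's
    # case-insensitivity, one of the reasons it is so weak; SHA-256 is
    # used here only to keep the sketch self-contained and runnable.
    return hashlib.sha256(password.upper().encode()).hexdigest()

def dictionary_attack(stolen_hash, wordlist):
    # Apply the known algorithm to each candidate and compare results;
    # a match reveals the source password.
    for candidate in wordlist:
        if toy_hash(candidate) == stolen_hash:
            return candidate
    return None

stolen = toy_hash("letmein")
print(dictionary_attack(stolen, ["password", "letmein", "qwerty"]))  # letmein
```

Because the comparison is so cheap, any password that appears in a wordlist falls almost immediately, which is exactly why disabling LM hash storage is worthwhile.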

Improvements over Windows 2000

Perhaps the single greatest improvement in security over Windows 2000 is not a technology but a procedure. Windows 2000 installed Internet Information Server by default, it installed the OS/2 and POSIX subsystems, and it offered little insight into the implications of installing various services and applications. Windows 2003, on the other hand, introduces the Configure Your Server Wizard. This wizard launches by default when a Windows 2003 server is first built. It asks the installer what the intended role of the server is and makes the appropriate changes on the system. Files are installed, service security settings are configured, and the administrator can feel comfortable that the system hasn’t installed unnecessary services. This alone eliminates the largest cause of system insecurity—misconfiguration.

New Security Technologies Introduced in Windows 2003

One of the new technologies introduced in Windows 2003 is Internet Information Services 6. IIS was redesigned in Windows Server 2003 to further improve security for Web-based transactions. IIS 6 enables you to isolate an individual Web application into a self-contained Web service process. This prevents one application from disrupting other applications running on the same Web server. IIS also provides built-in monitoring capabilities to find, fix, and avoid Web application failures. In IIS 6, third-party application code runs in isolated worker processes, which now use the lower-privileged Network Service logon account. Worker process isolation offers the capability to confine a Web site or application to its root directory through Access Control Lists (ACLs). This further shields the system from exploits that walk the file system to try to execute scripts or other built-in code.

Windows 2003 has also improved network communication security through the support of strong authentication protocols such as 802.1X (widely used for wireless LAN access control) and Protected Extensible Authentication Protocol (PEAP). Internet Protocol Security (IPSec) support has been enhanced and further integrated into the operating system to improve LAN and WAN data encryption.

Microsoft introduced the Common Language Runtime (CLR) software engine in Windows Server 2003 to improve reliability and create a safer computing environment. CLR verifies that applications can run without error and checks security permissions to ensure that code does not perform illegal operations. CLR reduces the number of bugs and security holes caused by common programming mistakes. This results in less vulnerability for hackers to exploit.

Another technology introduction in Windows 2003 is the concept of Forest Trusts. Windows Server 2003 supports cross-forest trusts, allowing companies to better integrate with other companies that use Active Directory. Setting up a cross-forest trust with a partner’s Active Directory enables users to securely access resources without losing the convenience of single sign-on. This feature enables you to apply ACLs to resources using users or groups from the partner’s Active Directory. This technology is a great boon in situations where one company has acquired another. Establishing a cross-forest trust allows the two companies to immediately start sharing resources in a secured manner.

The idea of single sign-on is further improved by the introduction of Credential Manager. This technology provides a secure store for usernames and passwords as well as links to certificates and keys. This enables a consistent single sign-on experience for users. Single sign-on enables users to access resources over the network without having to repeatedly supply their security credentials.

Windows Server 2003 supports Constrained Delegation. Delegation in this context means allowing a service to impersonate a user or computer account to access resources on the network. This new feature in Windows Server 2003 enables you to limit this type of delegation to specific services or resources. For example, a service that uses delegation to access a system on behalf of a user could now be constrained such that it could only impersonate the user to connect to a single specific system and not to other machines or services on the network. This is similar in concept to the ability to limit a user to logging on to a restricted list of systems.

Protocol Transition is a technology that allows a service to convert to a Kerberos-based identity for a user without knowing the user’s password or requiring the user to authenticate via Kerberos. This enables an Internet user to authenticate using a custom authentication method and receive a Windows identity. This technology is now available in Windows 2003. This can be very useful for companies that are planning to heavily leverage Kerberos as a centralized point of authentication for both Windows and Linux systems.

Windows Server 2003 now offers .NET Passport Integration with Active Directory. This enables the use of Passport–based authentication to provide partners and customers with a single sign-on experience to Windows–based resources and applications. By leveraging .NET Passport services, companies can often reduce their cost of managing user IDs and passwords for applications with large numbers of external users. Microsoft has gone to great lengths to ensure that .NET Passport information is stored as securely as possible to foster confidence in the industry and help grow the technology.

Although Windows 2000 supported encrypted folders, Windows Server 2003 now allows offline files and folders to be encrypted using EFS as well. Offline Files, or client-side caching, was introduced in Windows 2000 and allows mobile users to work with a local copy of a file while disconnected from the network. When the user reconnects to the server, the system reconciles the changes with the older versions of the documents on the server. This allows files to continue to be protected when cached locally on a mobile computer.

Stronger encryption technologies for EFS are available in Windows 2003. Windows Server 2003 now supports encryption for EFS that is stronger than the Expanded Data Encryption Standard (DESX) algorithm that earlier versions used by default. By default, EFS in Windows Server 2003 uses the Advanced Encryption Standard (AES-256) for all encrypted files. Clients can also use Federal Information Processing Standards (FIPS) 140-1 compliant algorithms, such as the 3DES algorithm, which is also included with Windows XP Professional.

Securing the Hatches

Today, the whole world is looking at security. As the world becomes more information connected, issues of information privacy are on everyone’s mind. Several government mandates have been issued in the area of securing identity information for medical histories and for information regarding children.

Commerce across networks, in the arenas of business to business and business to consumer, has raised questions about whether credit card information is being stored securely and whether online transactions are safe. These issues have spawned many technologies, and the corporate world has adopted many of them for internal security. As access to information becomes easier and easier, it is more and more critical to ensure that data and data transmissions are protected.

Security and Company Reputation

In today’s market, if a company is to keep its customers, the customers must have faith in the company. Large online retailers are dependent on their customers’ confidence in their security in order to continue doing business. As long as customers feel secure that their financial information is being transmitted and stored securely, they will continue to do business with a company online.

If one of these online retailers were to be compromised and information such as credit card numbers stolen, it could potentially destroy the company. The reputation of the company is tied directly to its security. Failure to be diligent in securing the hatches of a company can quickly lead to its downfall.

Implementing Transport Layer Security

The concept of Transport Layer Security (TLS) is that conversations between networked devices should be held in a manner such that any other device that might have intercepted the communications will be unable to use the information. TLS, which is similar to SSL, is based on an X.509 certificate that must be issued by a trusted Certificate Authority (CA). TLS can do the following:

  • Detect message tampering

  • Detect message interception

  • Detect message forgery

To use TLS for client/server communication, the following steps are used:

  1. Handshake and cipher suite negotiation.

  2. Authentication of parties.

  3. Key-related information exchange.

  4. Application data exchange.
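Step 1, cipher suite negotiation, can be sketched as follows. This is a simplification (a real TLS server may also apply its own preference order), and the suite names are examples only:

```python
def negotiate_cipher(client_suites, server_suites):
    # Walk the client's preference-ordered offer and return the first
    # suite the server also supports; no overlap means the handshake fails.
    for suite in client_suites:
        if suite in server_suites:
            return suite
    raise ValueError("handshake failure: no cipher suite in common")

client_offer = ["TLS_RSA_WITH_AES_256_CBC_SHA",
                "TLS_RSA_WITH_3DES_EDE_CBC_SHA"]
server_supported = {"TLS_RSA_WITH_3DES_EDE_CBC_SHA",
                    "TLS_RSA_WITH_AES_256_CBC_SHA"}
print(negotiate_cipher(client_offer, server_supported))
# TLS_RSA_WITH_AES_256_CBC_SHA
```

Restricting the server’s supported ciphers, as described next, simply shrinks the set from which this negotiation can pick.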

By default TLS will accept any cipher; this can be locked down further by GPO to limit the cipher choices through modification of the following Registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers

You will find multiple cipher choices listed and can enable or disable them as appropriate.

The TLS Handshake Protocol involves the following steps:

  1. A “client hello” is sent from the client machine to the server, along with a random value and a list of supported cipher suites.

  2. A “server hello” is sent in reply to the client along with the server’s random value.

  3. The server sends its certificate to the client to be authenticated and it might request a certificate from the client as well. This results in a “Server hello done” message. The client sends the certificate if it was requested by the server.

  4. The client then creates a random Pre-Master Secret and encrypts it via the public key from the server’s certificate. This encrypted Pre-Master Secret is then sent to the server.

  5. Upon receipt of the Pre-Master Secret, the server and client each generate the session keys and Master Secret based on the Pre-Master Secret.

  6. The client sends a “Change cipher spec” message to the server to indicate that the client will begin using the new session keys for encrypting and hashing messages. The client also sends a “Client finished” message.

  7. The server receives the “Change cipher spec” message and switches its record layer security state to use symmetric encryption based on the session keys. The server sends a “Server finished” message to the client.

  8. The client and server can now exchange data over the secured channel that they have established. All data and communications sent from client to server and from server to client are encrypted using the session key.
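Steps 4 and 5 rest on the fact that the same inputs yield the same keys on both ends, so the session keys themselves never cross the wire. A conceptual sketch (real TLS uses its own pseudorandom function, not this HMAC construction):

```python
import hashlib, hmac, os

def derive_session_key(pre_master, client_random, server_random):
    # Mix the Pre-Master Secret with both random values to form the
    # Master Secret, then expand that into session key material.
    master = hmac.new(pre_master,
                      b"master secret" + client_random + server_random,
                      hashlib.sha256).digest()
    return hmac.new(master, b"key expansion", hashlib.sha256).digest()

client_random = os.urandom(32)   # sent in the client hello (step 1)
server_random = os.urandom(32)   # sent in the server hello (step 2)
pre_master = os.urandom(48)      # generated by the client (step 4)

# Each side computes independently, yet both arrive at the same key.
client_key = derive_session_key(pre_master, client_random, server_random)
server_key = derive_session_key(pre_master, client_random, server_random)
print(client_key == server_key)  # True
```

An eavesdropper sees both random values but not the Pre-Master Secret, which travels encrypted under the server’s public key, so it cannot reproduce the derivation.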

Requiring Digital Signing

Older implementations of Server Message Block (SMB) communications were susceptible to what is known as a man-in-the-middle attack. A man-in-the-middle attack occurs when an attacker masquerading as one of the legitimate parties inserts messages into the communications channel. This allows the attacker to send its own credentials and causes the other host to accept its connection. By placing a digital signature into each SMB, which is verified by both the server and the client, there is a mutual authentication that verifies the validity of both the server and the client. If this security setting is enabled on a server, the clients must support digital signing of communications or they will be unable to communicate with the server.

This can be configured in the Default Domain Controller Security Settings under Security Settings/Local Policies/Security Options/Microsoft Network Server: Digitally Sign Communications (always)—Enabled.
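The mutual verification can be sketched as a keyed digest appended to each message; HMAC-SHA256 stands in here for the actual SMB signing algorithm, and the key and message contents are illustrative:

```python
import hashlib, hmac

SIG_LEN = 32  # bytes of SHA-256 output

def sign(key, message):
    # Append a keyed digest so the receiver can verify both the origin
    # and the integrity of the message.
    return message + hmac.new(key, message, hashlib.sha256).digest()

def verify(key, signed):
    message, sig = signed[:-SIG_LEN], signed[-SIG_LEN:]
    expected = hmac.new(key, message, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        # A man in the middle who alters or forges the message cannot
        # produce a valid signature without the shared key.
        raise ValueError("signature mismatch: message tampered or forged")
    return message

key = b"session key negotiated earlier"   # illustrative value
packet = sign(key, b"read request for \\\\SERVER\\share")
print(verify(key, packet) == b"read request for \\\\SERVER\\share")  # True
```

Flipping even one byte of the signed packet makes verification fail, which is how both ends detect injected or modified SMBs.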

Leveraging PKI

Not surprisingly, certificate-based technologies require access to certificates, specifically certificates that have been issued by a trusted Certificate Authority (CA). Companies have the option of using an external trusted Certificate Authority such as Verisign, SecureNet, or Globalsign. One advantage of using one of these external Certificate Authorities is that Internet Explorer comes preloaded with these as trusted root authorities. This means that clients won’t have to contact those root CAs and prompt the user to accept the certificate. The other option is for a company to establish its own Certificate Authority. This could be a standalone root CA or an Enterprise CA built from a certificate issued by another root CA.

If a company is going to issue its own certificates, client machines can be preloaded with the certificate via GPO settings. For example, if a company will be using digital certificates in their intranet, they might push a server certificate to the clients to define the server as an Intermediate Certification Authority. To push a certificate to clients, do the following:

  1. Launch the GPO editor.

  2. Choose User Configuration, Windows Settings, Internet Explorer Maintenance, Security, Authenticode Settings.

  3. Choose Import Existing Authenticode Settings.

  4. Click Modify Settings.

  5. Click Import. This launches the Import Certificates Wizard.

  6. Click Next.

  7. Click Browse, and then browse to the certificate file. Choose Open and then click Next.

  8. Choose Browse and select the appropriate certificate store.

  9. Choose Next and then click Finish.

The Wizard will inform you that you are about to install a certificate claiming to be from a particular source. If this information is valid, click Yes. The Wizard will then inform you that the certificate was successfully installed.

Installing Certificate Services

Installing certificate services in Windows Server 2003 is a matter of adding the Certificate Services component to a Windows 2003 server. The process is as follows:

  1. Choose Start, Control Panel, Add or Remove Programs.

  2. Click Add/Remove Windows Components.

  3. Check the Certificate Services box.

  4. A warning dialog box will be displayed, as illustrated in Figure 1.1, indicating that the computer name or domain name cannot be changed after you install Certificate Services. Click Yes to proceed with the installation.


    Figure 1.1. Certificate Services warning.

  5. Click Next to continue.

  6. The following screen, shown in Figure 1.2, enables you to create the type of CA required. In this example, choose Enterprise Root CA and click Next to continue.


    Figure 1.2. Selecting the type of CA server to install.

  7. Enter a common name for the CA—for example, CompanyABC Enterprise Root CA.

  8. Enter the validity period for the Certificate Authority and click Next to continue. The cryptographic key will then be created.

  9. Enter a location for the certificate database and then the database logs. The location you choose should be secure, to prevent unauthorized tampering with the CA. Click Next to continue. Setup will then install the CA components.

  10. If IIS is not installed, a prompt will be displayed, shown in Figure 1.3, indicating that Web Enrollment will be disabled until you install IIS. If this box is displayed, click OK to continue.


    Figure 1.3. IIS warning in the CA installation procedure.

  11. Click Finish after installation to complete the process.

If IIS Is Installed on the Server, a Dialog Box Will Appear

If IIS is installed on the server, a dialog box will appear noting that the IIS services will be temporarily stopped. When prompted whether it is okay to stop and restart the IIS service, choose Yes unless the service is actively in use at the time of certificate services installation.

Importance of Physical Security

Network security is essentially useless if the servers involved aren’t physically secured. Computers don’t know the difference between a local break-in and a legitimate password recovery. Although security information is stored in the Active Directory, the Active Directory still consists of a database stored on servers. This information is laid out in a specific structure, and a person with physical access to a hard drive containing an NTDS.DIT file, armed with a sector editor and sufficient knowledge, can compromise the security database.

Servers should always be located in locked data centers. Access to these data centers should be limited and audited. Security logs (%systemroot%\System32\config\SecEvent.Evt) should be duplicated in a separate location to prevent tampering. Applications like Microsoft Operations Manager, which centralize management of event logs, are useful for this task. Implementation of a syslog server will also work well for this.

Access to secured data centers should require multiple forms of authentication. For example, rather than just rely on a badge reader, the lock might consist of a combination of a badge reader and a PIN code that must be entered. This way, theft of a badge would not be enough to compromise the data center.

Know Who is Connected Using Two-factor Authentication

Usernames and passwords have long been the standard for user authentication. Windows NT improved on this concept by adding a machine account that was needed to log into a domain. Although this was good for domain logins, it could be bypassed to attach to network resources via pass-through authentication. Many companies need stronger methods of authentication. This is especially critical when dealing with remote users. Modem pools and VPN devices are relatively easy to find, and malicious hackers can spend time trying to get through these devices with relative impunity. This concern is addressed by two-factor authentication methods such as smartcards and biometric authentication.

Utilizing Smartcards

A smartcard is a portable programmable device containing an integrated circuit that stores and processes information. Smartcards traditionally take the form of a device the size of a credit card that is placed into a reader but they can also be USB-based devices or integrated into employee badges. Windows 2003 and Windows XP have native support for smartcards as an authentication method. Smartcards are combined with a PIN, which can be thought of as a password, to provide two-factor authentication. Physical possession of the smartcard and knowledge of the PIN must be combined to successfully authenticate.
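The two-factor requirement can be sketched as a check in which both factors must independently pass; the secrets and PIN here are illustrative values, not anything Windows actually stores in this form:

```python
import hashlib, hmac

def authenticate(card_secret, presented_secret, pin_hash, presented_pin):
    # Factor 1: possession -- the reader must present the secret stored
    # on the physical card.
    has_card = hmac.compare_digest(card_secret, presented_secret)
    # Factor 2: knowledge -- the PIN the user types must hash to the
    # stored value.
    knows_pin = hmac.compare_digest(
        pin_hash, hashlib.sha256(presented_pin.encode()).digest())
    # Either factor alone is insufficient.
    return has_card and knows_pin

card_secret = b"secret burned into card"      # illustrative only
pin_hash = hashlib.sha256(b"4321").digest()
print(authenticate(card_secret, card_secret, pin_hash, "4321"))   # True
print(authenticate(card_secret, b"no card", pin_hash, "4321"))    # False
```

A stolen card fails the PIN check and a shoulder-surfed PIN fails the possession check, which is the whole point of combining the two factors.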

To use a smartcard, a domain user must have a smart card certificate, and the administrator must prepare a Certificate Authority (CA) before it can issue smart card certificates. The CA needs both the Smart Card Logon and Enrollment Agent certificate templates installed. If smart card certificates are to be used for secure e-mail messages, the administrator must also install the Smart Card User certificate template.

To configure a Windows-based Enterprise CA to Issue Smart Card Certificates, follow these steps:

  1. Log on to an Enterprise CA. Be sure to use a domain administrator account.

  2. From the Start menu choose Programs, Administrative Tools, Certification Authority.

  3. In the Certification Authority console, expand your domain, right-click the Certificate Template container, and select New, Certificate Template to Issue.

  4. In the Enable Certificate Template dialog box, select Smartcard User, and then click OK.

  5. Right-click on the Certificate Template container, and click Manage. This will open up the Certificate Templates MMC.

  6. In Select Certificate Template MMC, right-click on the Smartcard User and select Properties.

  7. Click on the Security tab. Click on the Add button and choose the group for which you want to add smartcard access (in this example, a Smartcard Users group whose members are employees with smartcards was added to Active Directory).

  8. Select Read and Enroll for Permissions as shown in Figure 1.4, and then click OK.


    Figure 1.4. Adding a group for smartcard logon authentication.

Leveraging Biometrics to Enhance Security

Biometrics refers to unique biological information that can be used to determine the identity of a user. This, combined with name/password authentication, provides a two-factor authentication that is extremely difficult to forge. Thumbprints, bone density, and retinal patterns are all commonly used with biometric security.

Third-party biometric solutions leverage proprietary authentication mechanisms to work in tandem with existing authentication protocols in network operating systems. Technologies like retinal scanners are usually standalone devices whereas items like fingerprint readers can integrate into the user’s keyboard.

Using Templates to Improve Usage and Management

One of the biggest keys to effective security is the standardization of the application of security policies across the environment. Windows 2003 continues to support this concept with the use of the Security Configuration and Analysis MMC plug-in. This plug-in enables you to convert your own security policies into a template file that can be applied to other servers. This ensures that servers are configured identically. This can be exceptionally useful for systems configured to sit outside a firewall that are not members of an Active Directory domain and thus aren’t managed by Group Policy Objects.

Using the Security Configuration and Analysis Tool

The Security Configuration and Analysis tool, which is available in Windows 2003 from the MMC Snap-in, is designed to read specific security information from a server and compare it to a template file. This enables you to create standard templates and see whether the servers in your environment conform to those settings.

To perform an analysis of a system, do the following:

  1. Select Start, Run, mmc.exe and then click OK to launch the MMC snap-in.

  2. Add the Security Configuration and Analysis snap-in.

  3. Right-click the Security Configuration and Analysis scope item, and choose Open Database.

  4. Choose a database name and then click Open.

  5. Pick a security template, and then open it.

  6. Right-click the Security Configuration and Analysis scope item and choose Analyze Computer Now, then click OK.

The system will display all local security settings and show the template recommendation from the database. By comparing local settings to a standard template created by the administrator, settings can be made consistent without steamrollering any required local security settings.
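The comparison at the heart of the analysis can be sketched as a dictionary diff; the setting names and values below are illustrative, not the tool’s actual data format:

```python
def analyze(local_settings, template):
    # Report every template setting the local configuration does not
    # meet, mirroring what Analyze Computer Now displays side by side.
    findings = {}
    for setting, expected in template.items():
        actual = local_settings.get(setting, "<not defined>")
        if actual != expected:
            findings[setting] = {"expected": expected, "actual": actual}
    return findings

template = {"MinimumPasswordLength": 8, "LockoutThreshold": 5}
local = {"MinimumPasswordLength": 6, "LockoutThreshold": 5}
print(analyze(local, template))
# {'MinimumPasswordLength': {'expected': 8, 'actual': 6}}
```

A server that matches the template produces an empty report, which is the desired steady state across a standardized environment.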

Leveraging Secure Templates

Groups such as the National Security Agency or the National Institute of Standards and Technology have built what they consider to be secure templates for such roles as Domain Controller, Web Server, Application Server, and others. By using these templates as a starting point, you can build customized templates that take NIST or NSA guidelines into account. This makes it much easier to build a secure template, as these groups specialize in knowing and understanding computer security.

Patrolling the Configuration

After you have gone through a server and locked it down to your satisfaction, it is important to audit those settings against third-party tools to ensure that nothing was missed. It’s also valuable to know that your network meets the standards of a well-recognized security entity such as the NSA or NIST.

With requirements like the Health Insurance Portability and Accountability Act (HIPAA) or the Gramm-Leach-Bliley Act (GLBA), many companies are now required to provide documentation to prove that they have taken the necessary steps to secure the sensitive information on their networks. Third-party analysis tools provide an objective and impartial assessment of network security. Although some assessment technologies are very thorough, they are no replacement for an audit by a reputable company that specializes in security audits.

Auditing the System Security

The event log is an excellent way to track activity on a server. The local security policy allows you to enable or disable various auditing events, which are explained in the following list:

  • Audit account logon events. This setting audits each instance of a user logging on to or off of another computer in which this computer was used to validate the account. Account logon events are generated when a domain controller authenticates a domain user; the event is logged in the domain controller’s security log. Similarly, logon events are generated when a local computer authenticates a local user; in this case, the event is logged in the local security log.

  • Audit account management. This setting audits each instance that a user account or group is created, changed, or deleted. It also generates events when a user account is renamed, disabled, or enabled. Password setting or changing is audited as well.

  • Audit directory service access. This security setting audits each event of a user accessing an Active Directory object that has its own system access control list (SACL) specified.

  • Audit object access. This security setting audits the event of a user accessing an object, such as a file, folder, Registry key, or printer that has its own system access control list (SACL) specified.

  • Audit policy change. This setting audits incidences of a change to user rights assignment policies, audit policies, or trust policies.

  • Audit privilege use. This setting audits each instance of a user exercising a user right.

  • Audit process tracking. This setting audits detailed tracking information for events such as program activation, process exit, handle duplication, and indirect object access.

  • Audit system events. This setting audits when a user restarts or shuts down the computer or when an event occurs that affects either the system security or the security log.

Using the Microsoft Baseline Security Analyzer

The Microsoft Baseline Security Analyzer (MBSA) is a tool designed to determine which critical security updates are currently applied to a system. MBSA performs this task by referring to an XML (Extensible Markup Language) file called mssecure.xml, which Microsoft updates continuously to account for all current critical fixes. By leveraging the HFNetChk tool technology, MBSA is able to audit the target system against the current list of fixes. The XML file holds information about which security updates are available for particular Microsoft products beyond just the operating system. It holds the names and titles of security bulletins as well as detailed information about product-specific security updates, including the following:

  • Files in each update package

  • Versions and checksums

  • Registry keys that were applied by the update installation packages

  • Information about which updates supersede others

  • Related Microsoft Knowledge Base article numbers

MBSA and the Latest Copy of the mssecure.xml File

MBSA attaches to the Microsoft Web site to pull the latest copy of the mssecure.xml file. If a machine that will run MBSA does not have Internet access, you can download the XML file and place it on the machine running MBSA. Remember to update this file regularly to be aware of critical updates on the target systems.
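The update catalog described above can be sketched in miniature. The following Python snippet parses a toy XML fragment shaped like the kind of data mssecure.xml carries (bulletin IDs, per-update files, and supersedence) and computes which bulletins are still relevant; the element and attribute names are invented for illustration and do not match the real mssecure.xml schema.

```python
# Illustrative sketch only -- the real mssecure.xml schema is far richer.
# This toy XML mimics the kinds of facts MBSA consumes: bulletin IDs,
# per-update file versions, and which updates supersede others.
import xml.etree.ElementTree as ET

SAMPLE = """<Updates>
  <Bulletin id="MS03-026" kb="823980">
    <File name="rpcss.dll" version="5.2.3790.59"/>
  </Bulletin>
  <Bulletin id="MS03-039" kb="824146" supersedes="MS03-026">
    <File name="rpcss.dll" version="5.2.3790.76"/>
  </Bulletin>
</Updates>"""

def needed_bulletins(xml_text):
    """Return bulletin IDs that are not superseded by another bulletin."""
    root = ET.fromstring(xml_text)
    all_ids = {b.get("id") for b in root.findall("Bulletin")}
    superseded = {b.get("supersedes") for b in root.findall("Bulletin")
                  if b.get("supersedes")}
    return sorted(all_ids - superseded)
```

Applying supersedence like this is why the tool can report only the updates a machine actually still needs rather than every bulletin ever published.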

Using Vulnerability Scanners

One of the most valuable methods for checking the security of a server is through the use of a vulnerability scanner. Vulnerability scanners are based on a constantly updated database of known security flaws in operating systems and applications. The scanner attaches to the target system on various ports and sends requests to the system. Based on the responses, the scanner checks its database to determine whether the conditions for an exploit exist on the system. These potential exploits are then compiled into a report and are presented to the administrator. Most vulnerability scanners have the option to do intrusive testing as well. Great care should be taken when performing intrusive testing. Intrusive testing is the only way to truly validate the results of the vulnerability testing. For example, a report item might list that a particular script is marked as executable via the Web services and that it could be exploited to cause the Web server to crash if it is passed a parameter containing a nonstandard character. You might believe that this is a false positive because you have assigned NTFS permissions to the script to only allow a single account to access the script; that account is one that you control and it can never send a nonstandard character. The only way to be sure would be to have the scanner attempt to exploit that apparent flaw. This would need to be done during a maintenance window in case the exploit succeeded. Only after performing this validation could you be certain that the vulnerability was not actually present.

Some popular vulnerability scanners are as follows:

  • Microsoft Baseline Security Analyzer (MBSA)

  • Internet Security Systems: RealSecure

  • Nessus

  • GFI LANguard Network Security Scanner

  • Cerberus Internet Scanner (CIS)
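The first step every scanner in the list above performs is the same: probe the target's TCP ports and note which ones respond. The sketch below shows just that step in Python; real scanners go on to match banners and responses against their flaw databases.

```python
# Minimal sketch of a vulnerability scanner's first phase: probing TCP
# ports and recording which ones accept a connection. Real scanners then
# compare the service's responses against a database of known flaws.
import socket

def probe_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)  # something answered on this port
        except OSError:
            pass  # closed, filtered, or unreachable -- skip it
    return open_ports
```

Run only against systems you are authorized to test; even this passive connect probe will appear in the target's logs.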

Auditing the File System

Windows 2003 possesses built-in mechanisms to audit file system access. This enables you to see who is accessing or attempting to access specific files and when the access occurs. This type of auditing is supported only on NTFS drives. To audit this type of access, you must first enable Object Access Auditing. This feature can be enabled via the Default Domain Security Settings, under Local Policies/Audit Policy/Audit Object Access. As shown in Figure 1.5, set this to audit both Success and Failure in order to track both types of events. Follow these steps to audit access of a particular file or folder:


Figure 1.5. Setting the security auditing function.

  1. Navigate via Explorer to the file or folder in question.

  2. Right-click the item to be audited, click Properties, and then Security.

  3. Click Advanced, and then click the Auditing button. Click Add. Enter the name of the group or user whose actions you would like to track and click OK.

  4. In the Apply Onto drop-down box, choose the location where you want the auditing to take place.

  5. In the Access box, indicate what successes and failures you want to audit by selecting the appropriate check boxes.

  6. Click OK, OK, and OK to exit when done.

Managing the Auditing on a Server

The role of managing the auditing on a server can be delegated to a non–administrator account by granting the Manage Auditing and Security Log rights via Group Policy. This enables you to delegate this role without giving full administrator rights. This can be especially useful for remote site administrators who only manage a subset of servers and users.

Securing the File System

Windows 2003 stores all its data in the file system. User data, application data, and operating system files all live in the file system. To secure Windows 2003, these files need to be secured. Threats from outside the network, accidental deletion of system files, or access from an unauthorized internal group can all result in the loss of data or the compromising of confidential data. Windows 2003 supports many mechanisms to secure the file system.

Locking Down the File System via NTFS

Way back in Windows NT 3.1, Microsoft introduced the NT File System (NTFS). NTFS was a great breakthrough over the FAT file system in many areas. Support for larger drives, support for nonstandard block allocation sizes, and the ability to define security at the file or folder level all gave NTFS a big advantage over FAT. The ability to secure files and folders individually via NTFS permissions is the basis of Windows 2003 as a securable file server.

Windows 2003 has made great strides in the area of default NTFS permissions on the file system. Windows no longer defaults to listing the Everyone group for all resources. Instead, it defaults to allowing authenticated users the ability to read and list files and folders. By default, Windows 2003 allows authenticated users to bypass traverse checking. This works hand in hand with the upgrades to client operating systems like Windows 2000 Professional and Windows XP Professional that now allow drives to be mapped at a point below the share point. So although a share might exist that looks like \\Server\Users$ with departmental directories holding hundreds of user directories below them, a user can now be mapped directly to his own directory without having to share the user directory explicitly and without having to grant the user rights to anything other than his own directory. Although the user might not be able to read or list the departmental directories, that is unnecessary if the goal is only to give him access to his own home directory. This greatly simplifies the application of NTFS permissions.
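The effect of Bypass Traverse Checking can be illustrated with a toy access-check model. This is not the real Windows ACL algorithm; it only shows the shape of the idea: with the privilege, only the target's own ACL matters, while without it the user would also need rights on every parent directory. The share and account names are invented.

```python
# Toy model of directory access checks, illustrating Bypass Traverse
# Checking. Not the real Windows security-descriptor algorithm.

def can_open(path_acls, target, user, bypass_traverse=True):
    """path_acls maps each path to the set of users allowed there.
    With bypass_traverse, only the target's own ACL is consulted;
    without it, the user also needs rights on every parent directory."""
    parts = target.strip("\\").split("\\")
    if not bypass_traverse:
        for i in range(1, len(parts)):
            parent = "\\".join(parts[:i])
            if user not in path_acls.get(parent, set()):
                return False  # blocked at an intermediate directory
    return user in path_acls.get("\\".join(parts), set())

acls = {
    "users$": {"admins"},                     # departmental root: admins only
    "users$\\sales": {"admins"},
    "users$\\sales\\bob": {"admins", "bob"},  # Bob's own home directory
}
```

With the default privilege in place, Bob reaches his home directory even though he holds no rights on the directories above it.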

Locking Down Group Membership

One of the most important ways to keep a network secured is to ensure that users are not granted membership to groups that provide more rights than they really need. Similarly it is critical not to fall into the trap of simply making all administrators domain administrators just to ensure that they have sufficient rights to perform their daily duties. Windows 2003 has continued to make great strides in the area of granularity when it comes to assigning rights to administrators.

One area of group membership that is often overlooked by administrators is the local administrative groups on member servers and workstations. Because these groups aren't centrally managed, it is easy to forget that they are out there. Administrators often add users' domain accounts to the local Administrators group so that they can install a new software package but often forget to remove that membership after the project is finished. This results in a number of users having elevated rights on their own workstations. This puts them at risk of unwittingly installing spyware or other applications that could put the network at risk.

One way to control membership of these local groups is through the application of Group Policy Objects. By defining the Administrators group as a Restricted Group, you can define which accounts are allowed to be present in that group. If a local administrator adds an additional account, the change will not be persistent. This enables you to easily control group memberships across the network. This parameter is found in Computer Configuration/Windows Settings/Security Settings/Restricted Groups. Simply add the group you want to restrict and add the members that are allowed to be present.
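At each policy refresh, a Restricted Groups setting effectively reconciles the local group back to the allowed list. The sketch below models that reconciliation in Python; the group and account names are invented for illustration.

```python
# Sketch of the reconciliation a Restricted Groups policy performs at each
# policy refresh: the local group is forced back to exactly the allowed
# member list. Account names below are invented examples.

def enforce_restricted_group(current_members, allowed_members):
    """Return the enforced membership plus what was removed and added."""
    current, allowed = set(current_members), set(allowed_members)
    removed = current - allowed   # accounts a local admin slipped in
    added = allowed - current     # required accounts that were missing
    return sorted(allowed), sorted(removed), sorted(added)

enforced, removed, added = enforce_restricted_group(
    current_members={"DOMAIN\\HelpDesk", "WS01\\LocalUser", "DOMAIN\\Alice"},
    allowed_members={"DOMAIN\\HelpDesk", "DOMAIN\\DesktopAdmins"},
)
```

This is why a locally added account "will not be persistent": the next refresh simply recomputes the difference and strips it out again.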

Keeping Users Out of Critical File Areas

Operating system files are the lifeblood of the operating system. Corruption or deletion of these files can quickly cripple a server. Aside from applying security patches, there is no reason for an administrator to need write access to system files; having it only makes the administrator a threat to the stability of the system. Accidental file deletion or renaming, through either operator error or malicious scripts, can be prevented by locking down access to the system files. Allow the Administrator account to retain Full Control of the files in the %systemroot% directory, but don't allow general administrative groups to have rights to these files. Discourage administrators from logging on to systems as Administrator. Instead, have them use their normal accounts and use the "run as" feature if they need to run a program with elevated rights.

Securing Web Services

Web servers are one of the most common implementations of Windows 2003, and because they serve users outside the domain, they are especially vulnerable and need to be well secured. New Web-related exploits are found practically daily, and if Web servers are to remain secure, they must be kept up-to-date on available patches for the operating system as well as for the Web services.

Using SSL

One of the biggest concerns with Web servers is making sure that secure conversations are not intercepted via packet sniffing. Because the Internet is a nebulous cloud with questionable security, it is up to you to ensure that end-to-end communications with end users are secure. One of the most common ways to do this is with Secure Sockets Layer (SSL) communications. SSL runs above TCP/IP and below HTTP. SSL performs three primary functions:

Bandwidth Usage and SSL

The use of SSL does not affect bandwidth usage. It does, however, place an additional CPU load on both the client and the server. If an existing Web application is going to be switched to SSL communications, the overall capacity of the system will be reduced. This overhead can be mitigated on the server via the use of hardware-based SSL accelerators.

  • SSL server authentication. This allows a client to validate a server’s identity. SSL-enabled client software can use public-key cryptography to check to see if a server’s certificate and public ID are valid. It can also check to see if the certificate has been issued by a certificate authority (CA) listed in the client’s list of trusted CAs. If, for example, a user were sending a credit card number over the Internet to make a purchase, he would want to verify the receiving server’s identity.

  • SSL client authentication. This allows a server to validate a user’s identity. Using a similar technique as that used for server authentication, SSL-enabled server software can validate that a client’s certificate and public ID are valid. It can also check to see that a trusted certificate authority issued them. If, for example, an online retailer wanted to send confidential information to a customer, it would want to verify the recipient’s identity.

  • Encrypting SSL connections. SSL requires that all information sent between a server and a client be encrypted by the sending software and decrypted by the receiving software. This provides a high degree of confidentiality and security. SSL includes a mechanism for detecting data that was tampered with. This further protects transactions performed over SSL connections.
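The first of the three functions above, server authentication, is what modern TLS client libraries enforce by default. As a concrete illustration, Python's standard-library `ssl` module builds client contexts that refuse to exchange application data unless the server presents a certificate chained to a trusted CA with a matching hostname, which is exactly the validation described above.

```python
# A default client-side SSL/TLS context in Python's standard library
# enforces the server-authentication function described above.
import ssl

ctx = ssl.create_default_context()  # loads the system's trusted CA list

# Server authentication: the peer must present a certificate that chains
# to a trusted certificate authority...
assert ctx.verify_mode == ssl.CERT_REQUIRED
# ...and whose subject matches the hostname we connected to.
assert ctx.check_hostname is True

# Client authentication (the second SSL function above) would be set up on
# the client with ctx.load_cert_chain("client.pem") -- the file name here
# is a placeholder, not a real certificate.
```

Weakening either setting (for example, setting `verify_mode` to `CERT_NONE`) reintroduces exactly the interception risk SSL exists to prevent.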

Scanning the Web Servers for Vulnerabilities

Web servers are especially vulnerable to attack by hackers and griefers. By their very nature Web servers are often open to anonymous access and are located in lightly secured networks. Web servers are very popular targets and as such, vulnerabilities are regularly found in Web services. In order for you to secure systems against these vulnerabilities, you must be aware of them. The easiest way to do this is to scan the Web servers for vulnerabilities regularly.

Many companies offer services specifically designed to regularly scan Web servers for other companies and provide them with reports on discovered vulnerabilities. This is an excellent option for companies that lack the resources or expertise to perform these scans in-house.

Keeping up with Patches

Keeping up with patches is absolutely critical for the security of Web servers. The vast majority of the critical fixes produced for Windows are based on flaws discovered on Web servers. This isn't so much because Web services are inherently insecure but because there are so many Windows-based Web servers on the Internet that they can't help but provide a tempting target for hackers. Microsoft has entire teams of engineers and software developers dedicated to solving these vulnerabilities as soon as they are discovered and getting the resulting hotfixes out to administrators. The easiest way to manage these patches is with Software Update Services (SUS). The SUS server allows you to point all of your Web servers to a single server for downloading patches. You need only check the logs on the SUS server to see if new patches are available. You can test these patches in a lab environment and then approve a patch for distribution. At that point, the Web servers that are configured to point to the SUS server will automatically install the patches and optionally reboot themselves.

Patches and Automatic Rebooting

If servers are configured to reboot automatically after installing patches that request a reboot, you could face a situation where all of the load-balanced Web servers reboot themselves at the same time. This could result in several minutes of downtime for the site, depending on how long the servers take to reboot.

Locking Down IIS

Windows 2003 IIS (version 6.0) surpasses its predecessor by integrating many of the features of the old IIS Lockdown Tool. The IIS Lockdown Tool worked by disabling unnecessary features within IIS based on the planned role of the server. This served to reduce the potential points of attack available to hackers. It was layered with URLScan, a utility that intercepted input from client machines and ran it through an internal check to determine whether it was trying to send malicious data such as out-of-band characters or scripts. By default, IIS 6 installs with just the features needed to fill its defined role. It is able to specify exactly which ISAPI and CGI code is allowed to run on the server and has default behaviors for handling HTTP verbs and headers, including those used by WebDAV. IIS 6 maintains a UrlScan.ini file with a specific section for DenyUrlSequences. This replaces some of the features of URLScan. Similarly, IIS 6 has a mechanism for limiting the length of fields and requests. This plugs many of the older IIS vulnerabilities. If these settings are too restrictive for a specific Web application, the parameters can be modified via Registry settings:

  • HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\HTTP\Parameters\AllowRestrictedChars

  • HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\HTTP\Parameters\MaxFieldLength

  • HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\HTTP\Parameters\UrlSegmentMaxLength

  • HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\HTTP\Parameters\UrlSegmentMaxCount

Although URLScan 2.5 will run on IIS 6, most administrators will find it unnecessary because most of the security features in IIS 6 are better than those in URLScan 2.5. URLScan 2.5 is highly recommended for use with older IIS 5.0 servers.
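The DenyUrlSequences mechanism described above amounts to a substring blocklist applied to each incoming URL. The sketch below parses a small INI fragment and applies the check in Python; the INI content is a plausible example, not a copy of any shipped UrlScan.ini.

```python
# Sketch of a DenyUrlSequences-style check. The INI fragment below is an
# invented example of the format, not a real UrlScan.ini.
import configparser

URLSCAN_INI = """
[DenyUrlSequences]
.. =    ; directory traversal attempts
./ =
%% =    ; double-escaped characters
"""

def load_deny_sequences(ini_text):
    """Read the blocked substrings from the [DenyUrlSequences] section."""
    cp = configparser.ConfigParser(allow_no_value=True, delimiters=("=",),
                                   inline_comment_prefixes=(";",))
    cp.read_string(ini_text)
    return [seq.strip() for seq in cp["DenyUrlSequences"]]

def url_allowed(url, deny_sequences):
    """Reject any request URL containing a denied substring."""
    return not any(seq in url for seq in deny_sequences)
```

A request such as `/scripts/../../winnt/system32/cmd.exe` fails the check on the `..` sequence before it ever reaches a script handler, which is how this filter plugs the classic directory-traversal exploits.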

Keeping Files Confidential with EFS

Windows 2003 supports the Encrypting File System (EFS) on NTFS volumes. EFS enables a user to encrypt a file such that only he can access it. When a user first uses EFS to encrypt a file, the user is assigned a key pair (public key and private key), either generated by Certificate Services or self-signed by EFS, depending on whether a CA is present. The public key is used for encryption and the private key is used for decryption.

When the user encrypts a file, a random number called the File Encryption Key (FEK) is assigned to the file. The DES or DESX algorithm is used to encrypt the file with the FEK as the secret key. The FEK is also encrypted with the public key using the RSA algorithm. In this way, a large file can be encrypted using relatively fast secret key cryptography while the FEK is encrypted with slower but more secure public key cryptography. This provides a high level of security with less impact on overall performance.
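The hybrid design described above can be sketched in a few lines. In this toy Python model, a SHA-256 counter keystream stands in for DES/DESX, and a simple keyed wrap stands in for the RSA encryption of the FEK; it demonstrates the structure only and must not be used to protect real data.

```python
# Toy illustration of EFS's hybrid design: a random File Encryption Key
# (FEK) encrypts the bulk data quickly, and the FEK itself is wrapped
# with the user's key. A SHA-256 counter keystream stands in for DES/DESX
# and a keyed wrap stands in for RSA -- NOT real cryptography.
import hashlib
import os

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data against a deterministic keystream derived from `key`."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def efs_encrypt(plaintext: bytes, user_key: bytes):
    fek = os.urandom(32)                        # random FEK per file
    ciphertext = _keystream_xor(fek, plaintext)  # fast bulk encryption
    wrapped_fek = _keystream_xor(user_key, fek)  # stand-in for RSA wrap
    return ciphertext, wrapped_fek

def efs_decrypt(ciphertext: bytes, wrapped_fek: bytes, user_key: bytes):
    fek = _keystream_xor(user_key, wrapped_fek)  # recover the FEK first
    return _keystream_xor(fek, ciphertext)
```

Because only the short FEK is wrapped with the (slow) public-key operation, a multi-gigabyte file pays the public-key cost exactly once, which is the performance benefit the paragraph above describes.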

Leveraging Standalone EFS

Windows 2000, Windows Server 2003, and Windows XP all support EFS. They are capable of generating their own key pairs for EFS if a domain-level certificate is unavailable to them. EFS on a machine that does not belong to a domain differs in a number of ways from EFS on a domain member. For example, Windows 2000 has a default setting that allows any local Administrator account to decrypt any user's encrypted files on that machine. This makes machines vulnerable to sector-editor attacks because the local password store can be compromised. Windows XP and Windows Server 2003 do not have this behavior.

If a user encrypts a file and the certificate stores of both the user and the local DRA are lost, it will be impossible to decrypt the files. Similarly, because non-domain EFS users lack a central key database, a user could intentionally delete the DRA certificate and the certificate store and render the files unrecoverable.

When using EFS in a non–Active Directory environment, some key best practices should be followed to simplify management and improve local security:

  • Windows 2000 computers should have the default DRA private key removed and stored separately from the system.

  • Use a SYSKEY mode with a boot floppy or master password that must be entered prior to system boot. The floppy or master password should be stored separately from the system. This helps protect the local account store from attack.

  • Build machines using sysprep and custom scripts to configure a central recovery agent. This can be achieved via a run-once Registry key that removes the existing local DRA and inserts a centralized DRA. This change must be performed after the sysprep mini-setup that generates the default DRA. The preferred practice is to use a Microsoft CA to issue a DRA certificate for the central recovery agent.

Existing Encrypted Files and Utilizing the Default Domain Recovery Agent

If a machine or user with standalone EFS is migrated to an Active Directory environment with an enterprise CA, clients will continue to use the self-signed certificates. They will not automatically enroll for a new certificate once they are joined. However, the default domain recovery agent will take effect for all new files. Existing encrypted files will utilize the default domain recovery agent once they are modified.

Common Pitfalls with Encrypted File System Implementations

One of the easiest traps to fall into with the Encrypting File System (EFS) is allowing users to enable EFS on their own machines without access to a domain-wide recovery agent. Clients will create their own key pairs, and administrators might not have the capability to recover encrypted files if a user loses the local key pair. Active Directory allows you to prevent users from enabling EFS until a proper CA has been put in place to enable managed EFS on the clients. The easiest way to do this is via the GPMC:

  1. Open the Group Policy Management Console MMC snap-in.

  2. Navigate to the appropriate container where the GPO should be applied.

  3. Right-click on the GPO and select Edit as shown in Figure 1.6.


    Figure 1.6. Editing the GPO in Group Policy Management Console.

  4. Navigate to Computer Configuration\Windows Settings\Security Settings\Public Key Policies.

  5. Right-click the folder named Encrypting File System.

  6. Click Properties.

  7. Uncheck the box marked Allow Users to Encrypt Files Using Encrypting File System (EFS).

  8. Click OK.

Bulletproof Scenario

CompanyABC is a small software company with offices all over the world. CompanyABC supports roaming salespeople who travel from office to office. CompanyABC prides itself on its ability to make resources available to the end users. CompanyABC has security policies in place that require encrypting all data on databases and file servers as well as all communications between computers. CompanyABC requires strong authentication for access to any and all systems.

Bob is an employee at CompanyABC. Bob works in sales and travels often. He needs near-constant access to contact databases and e-mail and has a fancy new notebook with a wireless network interface and Windows XP.

This section will take a look at a typical day for Bob and highlight the security features that enable Bob to perform his daily tasks in a secure manner.

Bob has just arrived in a remote office and needs to access a document that he has stored on a file server back at the corporate headquarters. Bob has been given access to a conference room to use as a temporary office. Bob boots up his notebook and is prompted to enter his smartcard. Bob places his smartcard-enabled employee badge into the smartcard reader in his notebook and is asked for a PIN. Bob enters his PIN and is authenticated to the notebook. As Bob’s notebook launches Windows XP, it sends a DHCP request via the wireless network interface. Along with this DHCP request, Bob’s system sends a ClassID that was configured on his system when it was first imaged. Luckily for Bob, the MAC address of his wireless card was entered into a RADIUS server that all of the wireless access points use to authenticate users at a hardware level. This allows the access point to process Bob’s DHCP request. The request reaches a DHCP server located on an isolated network in the office. This network sits behind a firewall that only allows VPN traffic to reach a specific pair of load-balanced VPN servers. Because the ClassID on Bob’s machine matches the ClassID on a scope on the DHCP server, Bob’s machine is given a valid IP address.

Bob launches his VPN connection and attaches to the local VPN server. Bob now has an L2TP connection secured with IPSec to the office network. At this point, a domain login prompt appears and Bob authenticates himself to the network via his Active Directory login. Pleased with his progress, Bob decides to reward himself with a nice cup of coffee. Knowing that the kitchen requires badge access to enter, Bob removes his employee badge from the notebook and walks to the kitchen. By removing his badge, the smartcard driver tells the system to lock itself.

This is a behavior that is configured on the notebook. While Bob is away, other users cannot gain access to his notebook. Bob returns shortly and unlocks his notebook via the smartcard and PIN combination. Because Bob has access to the corporate network, he decides to access his document on the server back at HQ.

When Bob’s notebook requests the file from the server, the server informs Bob’s notebook that it requires Transport Layer Security to access the resources. Bob’s notebook and the server exchange certificates and random values and create a pre-master secret. This secret is used to generate their session keys. These session keys are used to encrypt the communications. When Bob’s notebook told the server which ciphers it supported it informed the server that it only supports Microsoft Enhanced DSS and Diffie-Hellman SChannel Cryptographic Provider, which is the way the notebook was configured when it was first imaged. The server accepts this cipher and the channel is established.

The document Bob wants is sitting in his personal folder. This folder is encrypted via EFS based on a certificate that was issued to Bob by the corporate Certificate Authority. Because Bob’s notebook possesses the correct key, he is able to decrypt the file to view it. Bob has also enabled several coworkers to decrypt the file so that it can be shared.

Bob, being just computer savvy enough to be dangerous, decides that this is just too much effort to get to a single file. Knowing that he is going to need to access this file again the next day at another office, Bob decides that he is going to create a local cached copy of the file through offline folders. Luckily, a Windows XP client with a Windows 2003 backend allows Bob’s offline copy to remain encrypted. Now when Bob loses his notebook at the airport again, the company doesn’t have to worry about a loss of intellectual property.

Summary

In this chapter, you learned that Microsoft has taken great steps toward making communications and storage more secure. Windows 2003 represents the latest efforts of Microsoft in this area. You saw the importance of taking a layered approach to security. No single technology in Windows 2003 is the end-all, be-all of security. These technologies work together to help secure the intellectual property of the company both locally and abroad.

We’ve seen that although securing a system is very important, it’s equally important to audit and test that security constantly. Monitoring activities in a network and recognizing signs of attack are critical in protecting a network.

Technologies like smartcards and biometrics help to strengthen the authentication process of Windows. Technologies like TLS and SSL help ensure that once authenticated, transmissions are still performed in a secure manner.

Files can be encrypted and still be shared among controlled lists of users. By storing files in an encrypted manner, systems become less vulnerable to data theft as a result of hardware theft. This enables companies to allow more access to their data without reducing the overall security of the environment.

It is always important to realize that today’s “unbreakable” security will become tomorrow’s plaything. Encryption that once took years to crack can be broken in a fraction of that time by today’s more powerful computers. Security is a commitment, an ongoing process that must be constantly monitored and maintained. It must grow as a company grows in order to remain useful.
