Chapter 1. Introducing Windows Server 2012 R2

Getting to know Windows Server 2012 R2

Windows 8.1 and Windows Server 2012 R2

Planning for Windows Server 2012 R2

Thinking about server roles and Active Directory

Planning for availability, scalability, and manageability

Windows Server 2012 R2 is the most powerful, versatile, and fully featured server operating system from Microsoft yet. If you’ve been using Windows Server operating systems for a while, I think you’ll be impressed. Why? For starters, Windows Server 2012 R2 includes a significantly enhanced operating system kernel, the NT 6.3 kernel. Because Windows 8.1 uses this kernel also, the two operating systems share a common code base and many common features, enabling you to apply readily what you know about Windows 8.1 to Windows Server 2012 R2.

In Windows Server 2012 R2, Microsoft delivers a server operating system that is something more than the sum of its parts. It isn’t just a server operating system or a network operating system. It is a best-of-class operating system with the foundation technologies necessary to provide networking, application, web, and cloud-based services that can be used anywhere within your organization. From top to bottom, Windows Server 2012 R2 is dramatically different from earlier releases of Windows Server operating systems—so much so that it also has an entirely new interface.

The way you approach Windows Server 2012 R2 will depend on your background and your implementation plans. If you are moving to Windows Server 2012 R2 from an earlier Windows Server operating system or switching from UNIX, you’ll find that Windows Server 2012 R2 is a significant change that requires a whole new way of thinking about networking, application services, and interoperation between clients and servers. The learning curve will be steep, but you will find clear transition paths to Windows Server 2012 R2. You will also find that Windows Server 2012 R2 has an extensive command-line interface that makes it easier to manage servers, workstations, and, indeed, the entire network, using both graphical and command-line administration tools.

If you are moving from Windows Server 2008 or Windows Server 2008 R2 to Windows Server 2012 R2, you’ll find the changes are no less significant but are easier to understand. You are already familiar with the core technologies and administration techniques. Your learning curve might still be steep, but in only some areas, not all of them.

You can also adopt Windows Server 2012 R2 incrementally. For example, you might add Windows Server 2012 R2 Print And Document Services and Windows Server 2012 R2 File And Storage Services to enable the organization to take advantage of the latest enhancements and capabilities without implementing a full transition of existing servers. In most but not all cases, incremental adoption has little or no impact on the network while allowing the organization to test new technologies and roll out features incrementally to users as part of a standard maintenance or upgrade process.

Regardless of your deployment plans and whether you are reading this book to prepare for implementation of Windows Server 2012 R2 or to manage existing implementations, my mission in this book is to help you take full advantage of all the features in Windows Server 2012 R2. You will find the detailed inside information you need to get up to speed quickly with Windows Server 2012 R2 changes and technologies; to make the right setup and configuration choices the first time; and to work around the rough edges, annoyances, and faults of this complex operating system. If the default settings are less than optimal, I show you how to fix them so that things work the way you want them to work. If something doesn’t function like it should, I let you know, and I show you the fastest, surest way to work around the issue. You’ll find plenty of hacks and secrets, too.

To pack as much information as possible into this book, I am assuming that you have basic networking skills and some experience managing Windows-based networks and don’t need me to explain the basic structure and architecture of an operating system. Therefore, I won’t waste your time answering such questions as, “What’s the point of networks?” or “Why use Windows Server 2012 R2?” or “What’s the difference between the GUI and the command line?” Instead, I start with a discussion of what Windows Server 2012 R2 has to offer so that you can learn about changes that will most affect you, and then I follow this discussion with a comprehensive, informative look at Windows Server 2012 R2 planning and installation.

Getting to know Windows Server 2012 R2

A primary purpose of Windows Server 2012 R2 is to ensure that the operating system can be optimized for use in small, medium, and large enterprises. An edition of the server operating system is available to meet your organization’s needs whether you want to deploy a basic server for hosting applications, a network server for hosting domain services, a robust enterprise server for hosting essential applications, or a highly available data center server for hosting critical business solutions.

Windows Server 2012 R2 is available for production use only on 64-bit hardware. Sixty-four-bit computing has changed substantially since it was first introduced for Windows operating systems. Computers running 64-bit versions of Windows not only perform better and run faster than their 32-bit counterparts but also are more scalable because they can process more data per clock cycle, address more memory, and perform numeric calculations faster. The primary 64-bit architecture Windows Server 2012 R2 supports is based on 64-bit extensions to the x86 instruction set, which is implemented in AMD64 processors, Intel Xeon processors with 64-bit extension technology, and other processors. This architecture offers native 32-bit processing and 64-bit extension processing, allowing simultaneous 32-bit and 64-bit computing.

Sixty-four-bit computing is designed for performing operations that are memory-intensive and require extensive numeric calculations. With 64-bit processing, applications can load large data sets entirely into physical memory (that is, random access memory [RAM]), which reduces the need to page to disk and increases performance substantially.
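The scalability difference is easy to quantify: the theoretical address-space ceiling grows with pointer width. The following is a minimal illustration only (a hypothetical helper, not anything from Windows itself; real systems impose physical-memory limits well below the 64-bit theoretical maximum):

```python
# Illustrative sketch: theoretical address space by pointer width.
# Actual usable memory is limited by the OS edition, chipset, and hardware.

def addressable_bytes(pointer_bits: int) -> int:
    """Theoretical number of distinct byte addresses for a given pointer width."""
    return 2 ** pointer_bits

GIB = 2 ** 30

# A 32-bit pointer can address at most 4 GiB; a 64-bit pointer's
# theoretical ceiling is about four billion times larger.
print(addressable_bytes(32) // GIB)   # 4 (GiB)
print(addressable_bytes(64) // addressable_bytes(32))
```

This is why 64-bit applications can keep large data sets entirely in RAM instead of paging to disk.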

Note

In this text, I typically refer to 32-bit systems designed for x86 architecture as 32-bit systems and 64-bit systems designed for x64 architecture as 64-bit systems. Support for Itanium 64-bit (IA-64) processors is no longer standard in Windows operating systems.

Instances of Windows Server 2012 R2 can run in either a physical operating system environment or a virtual operating system environment. To support mixed environments better, Microsoft introduced a new licensing model based on the number of processors, users, and virtual operating system environments. Thus, the four main product editions can be used as follows:

  • Windows Server 2012 R2 Foundation. Has limited features and is available only from original equipment manufacturers (OEMs). This edition supports one physical processor, up to 15 users, and one physical environment, but it does not support virtualized environments. Although there is a specific user limit, a separate client access license (CAL) is not required for every user or device accessing the server.

  • Windows Server 2012 R2 Essentials. Has limited features. This edition supports up to two physical processors, up to 25 users, and one physical environment, but it does not support virtualized environments. Although there is a specific user limit, a separate CAL is not required for every user or device accessing the server.

  • Windows Server 2012 R2 Standard. Has all the key features. It supports up to 64 physical processors, one physical environment, and up to two virtual instances. Two incremental virtual instances and two incremental physical processors are added for each Standard license. Thus, a server with four processors, one physical environment, and four virtual instances would need two Standard licenses, and the same server with eight virtual environments would need four Standard licenses. CALs are required for every user or device accessing the server.

  • Windows Server 2012 R2 Datacenter. Has all the key features. It supports up to 64 physical processors, one physical environment, and unlimited virtual instances. Two incremental physical processors are added for each Datacenter license. Thus, a server with two processors, one physical environment, and 32 virtual instances would need only one Datacenter license, but the same server with four processors would need two Datacenter licenses. CALs are required for every user or device accessing the server.
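The per-license arithmetic above can be expressed as a small calculation. This is a rough sketch based on the rules as stated here (each Standard license covers two physical processors and two virtual instances; each Datacenter license covers two physical processors with unlimited virtual instances), not an official licensing tool; the function names are illustrative:

```python
# Sketch: estimating Windows Server 2012 R2 license counts for one server.
# Assumption (from the text): Standard covers 2 processors + 2 virtual
# instances per license; Datacenter covers 2 processors per license with
# unlimited virtual instances. Always confirm against current Microsoft
# licensing terms before purchasing.
import math

def standard_licenses(processors: int, virtual_instances: int) -> int:
    """Licenses needed so both processor and virtual-instance coverage are met."""
    by_processor = math.ceil(processors / 2)
    by_instance = math.ceil(virtual_instances / 2)
    return max(by_processor, by_instance)

def datacenter_licenses(processors: int) -> int:
    """Datacenter licensing depends only on the physical processor count."""
    return math.ceil(processors / 2)

# The examples from the text:
print(standard_licenses(4, 4))    # 2
print(standard_licenses(4, 8))    # 4
print(datacenter_licenses(2))     # 1 (even with 32 virtual instances)
print(datacenter_licenses(4))     # 2
```

Note how Datacenter becomes the economical choice as virtual-instance counts grow, since its cost scales only with processors.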

Note

Windows Server 2012 R2 Datacenter is not available for retail purchase. If you want to use the Datacenter edition, you need to purchase it through Volume Licensing, an OEM, or a Services Provider License Agreement (SPLA).

You implement virtual operating system environments by using Hyper-V, a virtual-machine technology that enables multiple guest operating systems to run concurrently on one computer and provides separate applications and services to client computers, as shown in Figure 1-1. The Hyper-V role can be installed on servers with x64-based processors that implement hardware-assisted virtualization and hardware-based data execution protection. As part of this role, the Windows hypervisor acts as the virtual machine engine, providing the layer of software necessary for installing guest operating systems. For example, you can use this technology to run Linux distributions such as Ubuntu and Windows Server 2012 R2 concurrently on the same computer.

Figure 1-1. A conceptual view of virtual machine technology.

Note

With Hyper-V enabled, Windows Server 2012 R2 Standard and Windows Server 2012 R2 Datacenter support up to 320 logical processors. Otherwise, these operating systems support up to 640 logical processors.

For traffic routing between virtual and physical networks, Windows Server 2012 R2 includes Windows Server Gateway, which is integrated with Hyper-V Network Virtualization. You can use Windows Server Gateway to route network traffic regardless of where resources are located, enabling you to support integration of public and private cloud services with your internal networks and integration of multitenant implementations with Network Address Translation (NAT) and virtual private networks (VPNs).

Hyper-V also is included as a feature of Windows 8.1 Pro and Windows 8.1 Enterprise. The number of virtual machines you can run on any individual computer depends on the computer’s hardware configuration and workload. During setup, you specify the amount of memory available to a virtual machine. Although that memory allocation can be changed, the amount of memory actively allocated to a virtual machine cannot be otherwise used. Virtualization can offer performance improvements, reduce the number of servers, and reduce the total cost of ownership (TCO).

Windows 8.1 and Windows Server 2012 R2

Like Windows Server 2012 R2, Windows 8.1 has several main editions. These editions include the following:

  • Windows 8.1. The entry-level operating system designed for home users

  • Windows 8.1 Pro. The basic operating system designed for use in Windows domains

  • Windows 8.1 Enterprise. The enhanced operating system designed for use in Windows domains with extended management features

Windows 8.1 Pro and Windows 8.1 Enterprise are the only editions intended for use in Active Directory domains. You can manage servers running Windows Server 2012 R2 from a computer running Windows 8.1 Pro or Windows 8.1 Enterprise by using the Remote Server Administration Tools (RSAT) for Windows 8.1. Download the tools from the Microsoft Download Center (http://download.microsoft.com).

Windows 8.1 uses the NT 6.3 kernel, the same kernel that Windows Server 2012 R2 uses. Sharing the same kernel means that Windows 8.1 and Windows Server 2012 R2 share the following components, among others:

  • Automatic Updates. Responsible for performing automatic updates to the operating system. This ensures that the operating system is up to date and has the most recent security updates. If you update a server from the standard Windows Update to Microsoft Update, you can get updates for additional products. By default, automatic updates are installed but not enabled on servers running Windows Server 2012 R2. You can configure automatic updates by using the Windows Update utility in Control Panel.

  • BitLocker Drive Encryption. Provides an extra layer of security for a server’s hard disks. This protects the disks from attackers who have physical access to the server. BitLocker encryption can be used on servers with or without a Trusted Platform Module (TPM). When you add this feature to a server by using the Add Roles And Features Wizard, you can manage it by using the BitLocker Drive Encryption utility in Control Panel.

  • Remote Assistance. Provides an assistance feature that enables an administrator to send a remote assistance invitation to a more senior administrator. The senior administrator can then accept the invitation to view the user’s desktop and temporarily take control of the computer to resolve a problem. When you add this feature to a server by using the Add Roles And Features Wizard, you can manage it by using options on the Remote tab of the System Properties dialog box.

  • Remote Desktop. Provides a remote connectivity feature that enables you to connect to and manage a server from another computer. By default, Remote Desktop is installed but not enabled on servers running Windows Server 2012 R2. You can manage the Remote Desktop configuration by using options on the Remote tab of the System Properties dialog box. You can establish remote connections by using the Remote Desktop Connection utility.

  • Task Scheduler. Enables you to schedule execution of one-time and recurring tasks, such as tasks used for performing routine maintenance. Like Windows 8.1, Windows Server 2012 R2 makes extensive use of the scheduled task facilities. You can view and work with scheduled tasks in Computer Management.

  • Desktop Experience. Installs additional Windows 8.1 desktop functionality on a server. You can use this feature when you use Windows Server 2012 R2 as your desktop operating system. When you add this feature by using the Add Roles And Features Wizard, the server’s desktop functionality is enhanced, and these programs are installed: Windows Media Player, desktop themes, Video for Windows (AVI support), Disk Cleanup, Sync Center, Sound Recorder, Character Map, and Snipping Tool.

  • Windows Firewall. Helps protect a computer from attack by unauthorized users. Windows Server 2012 R2 includes a basic firewall called Windows Firewall and an advanced firewall called Windows Firewall With Advanced Security. By default, the firewall is enabled on server installations.

  • Windows Time. Synchronizes the system time with world time to ensure that the system time is accurate. You can configure computers to synchronize with a specific time server. The way Windows Time works depends on whether a computer is a member of a domain or a workgroup. In a domain, domain controllers are used for time synchronization, and you can manage this feature through Group Policy. In a workgroup, you use Internet time servers for time synchronization, and you can manage this feature through the Date And Time utility.

  • Wireless LAN Service. Installs the Wireless LAN Service feature to enable wireless connections. Wireless networking with Windows Server 2012 R2 works the same as it does with Windows 8.1. If a server has a wireless adapter, you can enable this feature by using the Add Roles And Features Wizard.

In most instances, you can configure and manage these core components in exactly the same way on both Windows 8.1 and Windows Server 2012 R2. Windows 8.1 and Windows Server 2012 R2 have many enhancements to improve security, such as memory randomization and other enhancements to prevent malware from inserting itself into startup and running processes. Windows 8.1 and Windows Server 2012 R2 use address space layout randomization (ASLR) to determine randomly how and where important data is stored in memory, which makes it much more difficult for malware to find the specific locations in memory to attack.

Windows 8.1 and Windows Server 2012 R2 require a processor that includes hardware-based Data Execution Prevention (DEP) support. DEP uses the no-execute (NX) bit to mark blocks of memory as data that should never be run as code. DEP has two specific benefits: it reduces the range of memory that malicious code can use, and it prevents malware from running any code in memory addresses marked as no-execute.

If your organization doesn’t use an enterprise malware solution, you’ll also be interested to know that Windows Defender for Windows 8.1 and Windows Server 2012 R2 has been upgraded to a more fully featured program. Windows Defender now protects against viruses, spyware, rootkits, and other types of malware. Windows Defender is also available on Server Core installations of Windows Server 2012 R2, though without the user interface. If you add Windows Defender as an option on a Server Core installation, the program is enabled by default.

Planning for Windows Server 2012 R2

Deploying Windows Server 2012 R2 is a substantial undertaking, even on a small network. Just the task of planning a Windows Server 2012 R2 deployment can be a daunting process, especially in a large enterprise. The larger the business, however, the more important it is for the planning process to be thorough and fully account for the proposed project’s goals and to lay out exactly how those goals will be accomplished.

Accommodating the goals of all the business units in a company can be difficult, and it is best accomplished with a well-planned series of steps that includes checkpoints and plenty of opportunity for management participation. The organization as a whole will benefit from your thorough preparation, and so will the information technology (IT) department. Careful planning can also help you avoid common obstacles by helping you identify potential pitfalls and then determine how best to avoid them or at least be ready for any unavoidable complications.

Your plan: The big picture

A clear road map can help with any complex project, and deploying Windows Server 2012 R2 in the enterprise is certainly a complex project. A number of firms have developed models to describe IT processes such as planning and systems management. For our purposes, I break down the deployment process into a roughly sequential set of tasks:

  1. Identify the team. For all but the smallest rollouts of a new operating system, a team of people will be involved in both the planning and deployment processes. The actual size and composition of this team will be different in each situation. Collecting the right mixture of skills and expertise will help ensure the success of your project.

  2. Assess your goals. Any business undertaking the move to Windows Server 2012 R2 has many reasons for doing so, only some of which are obvious to the IT department. You need to identify the goals of the entire company carefully before determining the scope of the project to ensure that all critical goals are met.

  3. Analyze the existing environment. Examine the current network environment, even if you think you know exactly how everything works—you will often find you are only partially correct. Gather hardware and software inventories, network maps, and lists of which servers are providing which services. Also, identify critical business processes and examine the administrative and security approaches that are currently in place. Windows Server 2012 R2 offers a number of improvements, and you’ll find it useful to know which ones are particularly important in your environment.

  4. Define the project scope. Project scope is often one of the more difficult areas to pin down and one that deserves particular attention in the planning process. Defining scope requires prioritizing the goals of the various groups within the organization and then realistically assessing what can be accomplished within an acceptable budget and time frame. It’s not often that the wish list of features and capabilities from the entire company can be fulfilled in the initial, or even a later, deployment.

  5. Design the new network environment. After you have pinned down the project scope, you must develop a detailed design for the new operating system deployment and the affected portions of the network. During this time, you should create documentation describing the end state of the network and the process of getting there. This design document serves as a road map for the people building the testing environment and, with refinements during the testing process, for the IT department later.

  6. Test the design. Thorough testing in the lab is an often overlooked but critically important phase of deploying a new network operating system. By building a test lab and putting a prototype environment through its paces, you can identify and solve many problems in a controlled environment rather than in the field.

  7. Install Windows Server 2012 R2. After you have validated your design in the lab and management has approved the deployment, you can begin to install Windows Server 2012 R2 in your production environment. The installation process has two phases:

    • Pilot phase. During the pilot phase, you deploy and test a small group of servers running Windows Server 2012 R2 (and perhaps clients running Windows 8.1) in a production environment. You should pick a pilot group that is comfortable working with new technology and for which minor interruptions will not pose significant problems. In other words, this is not a good thing to do to the president of the company or the finance department just before taxes are due.

    • Rollout. After you have determined that the pilot phase was a success, you can begin the rollout to the rest of the company. Make sure you schedule adequate downtime and allow for ongoing minor interruptions and increased support demands as users encounter changed functionality.

As mentioned, these steps are generally sequential but not exclusively so. You are likely to find that as you work through one phase of planning, you must return to activities that are technically part of an earlier phase. This is actually a good thing because it means you are refining your plan dynamically as you discover new factors and contingencies.

Identifying your organizational teams

A project like this requires a lot of time and effort and a broad range of knowledge, expertise, and experience. Unless you are managing a very small network, this project is likely to require more than one person to plan and implement it. Team members are assigned to various roles, each of which is concerned with a different aspect of the project.

Each of these roles can be filled by one or more persons, devoting all or part of their workday—and beyond in some cases—to the project. No direct correlation exists between a team role and a single individual who performs it. In a large organization, a team of individuals might fulfill each of these roles, whereas in a small organization, one person can fill more than one role.

As with IT processes, a number of vendors and consultants have put together team models, which you can use in designing your own team. Specific teams you might want to use include:

  • Architecture team. In increasingly complex IT environments, someone needs to be responsible for overall project architecture and providing guidance for integrating the project into existing architecture. This role is filled by the architecture team. Specific deliverables include the architecture design and guidance for the integration solution.

  • Program management team. Program management’s primary responsibility is ensuring that project goals are met within the constraints set forth at the beginning of the project. Program management handles the functional design, budget, schedule, and reporting. Specific deliverables include a vision or scope document, functional specifications, a master project plan, a master project schedule, and status reports.

  • Product management team. This team is responsible for identifying the business and user needs of the project and ensuring that the final plan meets those needs. Specific deliverables include the project charter, team orientation guidance, and documents for project structure and initial risk assessment.

  • User experience team. This team manages the transition of users to the new environment. This includes developing and delivering user training and conducting an analysis of user feedback during testing and the pilot deployment. Specific deliverables include user reference manuals, usability test scenarios, and user interface graphical elements.

  • Development team. The development team is responsible for defining the physical design and feature set of the project and estimating the budget and time needed for project completion. Specific deliverables include any necessary source code or binaries and necessary integrated-solution components.

  • Testing team. The testing team is critical in ensuring that the final deployment is successful. It designs and builds the test environment, develops a testing plan, and then performs the tests and resolves any issues it discovers before the pilot deployment occurs. Specific deliverables include test specifications, test cases with expected results, test metrics, test scripts, test data, and test reports.

  • Release management team. The release management team designs the test deployment and then performs that deployment as a means of verifying the reliability of the deployment before widespread adoption. Specific deliverables include deployment processes and procedures, installation scripts and configuration settings for deployment, operations guides, help desk and support procedures, knowledge base, help and training materials, operations documentation, and troubleshooting documentation.

Working together, these teams cover the various aspects of a significant project such as rolling out Windows Server 2012 R2. Although all IT projects have some things in common, and therefore need someone to handle those areas of the project, that’s where the commonality stops. Each company has IT needs related to its specific business activities. This might mean additional team members are needed to manage those aspects of the project. For example, if external clients, the public, or both also access some of your IT systems as users, you have a set of user acceptance and testing requirements different from many other businesses.

The project team needs business managers who understand and can represent the needs of the various business units. This requires knowledge of the business operations and a clear picture of the daily tasks staff performs.

Representatives of the IT department bring their technical expertise to the table not only to detail the inner workings of the network but also to help business managers realistically assess how technology can help their departments and separate the impractical goals from the realistic ones.

Make sure that all critical aspects of business operations are covered—include representatives from all departments that have critical IT needs and be sure the team takes the needs of the entire company into account. This means that people on the project team must collect information from line-of-business managers and the people actually doing the work. (Surprisingly enough, the latter escapes many a project team.)

After you have gathered a team, management must ensure that team members have adequate time and resources to fulfill the tasks required of them for the project. This can mean shifting all or part of their usual workload to others for the project duration or providing resources such as Internet access, project-related software, and so on. Any project is easier—and more likely to be successful—with this critical real-time support from management.

Assessing project goals

Carefully identifying the goals behind moving to Windows Server 2012 R2 is an important part of the planning process. Without a clear list of objectives, you are unlikely to achieve them. Even with a clear set of goals in mind, it is unlikely you will accomplish them all. Most large business projects involve some compromises, and the process of deploying Windows Server 2012 R2 is unlikely to be an exception.

Although deploying a new operating system is ultimately an IT task, most of the reasons behind the deployment won’t be coming from the IT department. Computers are, after all, tools business uses to increase productivity, enhance communications, facilitate business tasks, and so on; the IT department is concerned with making sure that the computer environment the business needs is implemented.

The business perspective

Many discussions of the business reasons for new software deployments echo common themes: enhance productivity, eliminate downtime, reduce costs, and the like. Translating these often somewhat vague (and occasionally lofty) aspirations into concrete goals sometimes takes a bit of effort. It is well worth taking the time, however, to refine the big picture into specific objectives before moving on. An IT department should serve the needs of the business, not the other way around; if you don’t understand those needs clearly, you’ll have a hard time fulfilling them.

Be sure to ask for the input of people close to where the work is being done—department managers from each business area should be asked about what they need from IT, what works now, and what doesn’t. These people care about the day-to-day operations of their computing environment. Will the changes help their staff members do their work? Ask about work patterns, both static and burst—the finance department’s workflow is not the same in July as it is in April. Make sure to include all departments and any significant subsets—human resources (HR), finance, sales, business units, executive management, and so on.

You should also identify risks that lie at the business level, such as resistance to change, lack of commitment (frequently expressed as inadequate resources: budget, staff, time, and so on), or even the occasional bit of overt opposition. At the same time, look for positives to exploit; enthusiastic staff can help energize others, and having a manager in your corner can smooth many bumps along the way. By getting people involved, you can gain allies who are vested in the success of the project.

Identifying IT goals

IT goals are often obvious: improve network reliability, provide better security, deliver enhanced administration, and maybe even implement a particular new feature. They are also easier to identify than those of other departments—after all, they are directly related to technology.

When you define your goals, make sure that you are specific. It is easy to say you will improve security, but how will you know when you have done so? What’s improved and by how much? In many cases, IT goals map to the implementation of features or procedures; for example, to improve security, you will implement Internet Protocol Security (IPsec) and encrypt all traffic to remote networks.

Don’t overpromise, either—eliminating downtime is a laudable goal but not one you are likely to achieve on your network and certainly not one on which you want your next review based.

Examining the interaction between IT and business units

A number of aspects of your organization’s business should be considered when evaluating your overall IT requirements and the business environment in which you operate. Consider things such as the following:

  • Business organization. How large is the business? Are there offices in more than one location? Does the business operate across international, legal, or other boundaries? What sorts of departmental or functional boundaries exist?

  • Stability. Does the business undergo a lot of change? Are there frequent reorganizations, acquisitions, changes, and the like in business partnerships? What is the expected growth rate of the organization? Conversely, are substantial downsizings planned in the future?

  • External relationships. Do you need to provide access to vendors, partners, and so on? Are there external networks that people operating on your network must access?

  • Impact of Windows Server 2012 R2 deployment. How will this deployment affect the various departments in your company? Are any areas of the company particularly intolerant of disruption? Are there upcoming events that must be considered in scheduling?

  • Adaptability. Is management easily adaptable to change? If not, make sure you get every aspect of your plan right the first time. Having an idea of how staff might respond to new technologies and processes can help you plan for education and support.

Predicting network change

Part of planning is projecting into the future and predicting how future business needs will influence the activities of the IT department. Managing complicated systems is easier when it’s done from a proactive stance rather than a reactive one. Predicting network change is an art, not a science, but it behooves you to hone your skills at it.

This is primarily a business assessment, based on things such as expected growth, changes in business focus, or possible downsizing and outsourcing—each of which provides its own challenges to the IT department. Being able to predict what will happen in the business and what those changes will mean to the IT department enables you to include room for expansion in your network design.

When attempting to predict what will happen, look at the history of the company. Are mergers, acquisitions, spin-offs, and so on common? If so, this indicates a considerable need for flexibility from the IT department and the need to keep in close contact with people on the business side to avoid being blindsided by a change in the future.

As people meet to discuss the deployment, talk about what is coming up for the business units. Cultivate contacts in other parts of the company and talk with those people regularly about what’s going on in their departments, such as upcoming projects and what’s happening with other companies in the same business sector. Reading the company’s news releases and articles in outside sources can also provide valuable hints of what’s to come. By keeping your ear to the ground, doing a little research, and thinking through the potential impact of what you learn, you can be much better prepared for whatever is coming up next.

Analyzing the existing network

Before you can determine the path to your new network environment, you must determine where you are right now in terms of your existing network infrastructure. This requires determining a baseline for network and system hardware, software installation and configuration, operations, management, and security. Don’t rely on what you think is the case; actually verify what is in place.

Evaluating the network infrastructure

You should get an idea of what the current network looks like before moving to a new operating system. You will require configuration information while designing the modifications to the network and deploying the servers. In addition, some aspects of Windows Server 2012 R2, such as the sites used in Active Directory replication, are based on your physical network configuration. (A site is a segment of the network with good connectivity, consisting of one or more Internet Protocol [IP] subnets.)

For reasons such as this, you want to assess a number of aspects related to your physical network environment. Consider such characteristics as the following:

  • Network topology. Document the systems and devices on your network, including link speeds, wide area network (WAN) connections, sites using dial-up connections, and so on. Include devices such as routers, switches, servers, and clients, noting all forms of addressing such as computer names and IP addresses for Windows systems.

  • Network addressing. Are you currently employing Internet Protocol version 4 (IPv4) and Internet Protocol version 6 (IPv6)? What parts of the address space are private, and what parts are public? Which IP subnets are in use at each location?

  • Remote locations. How many physical locations does the organization have? Are they all using broadband connections, or are there remote offices that connect sporadically by dial-up? What is the speed of those links?

  • Traffic patterns. Monitoring network traffic can provide insights into current performance and help you identify potential bottlenecks and other problems before they occur. Examine usage statistics, paying attention to both regularly occurring patterns and anomalous spikes or lulls, which might indicate a problem.

  • Special cases. Do any portions of the network have out-of-the-ordinary configuration needs such as test labs that are isolated from the rest of the network?

Assessing systems

As part of planning, you should inventory the existing network servers, identifying each system’s operating system version, IP address, Domain Name System (DNS) names, and the services provided by that system. Collect such information by performing the following tasks:

  • Inventory hardware. Conduct a hardware inventory of the servers on your network, noting central processing unit (CPU), RAM, disk space, and so on. Pay particular attention to older machines that might present compatibility issues if upgraded. You can use the Microsoft Assessment and Planning (MAP) Toolkit, Microsoft System Center Configuration Manager (SCCM), or other tools to help you with the hardware inventory.

  • Identify operating systems. Determine the current operating system on each computer, including the entire version number (even if it runs to many digits), in addition to service packs, hot fixes, and other post-release additions.

  • Assess your current Windows domains. Do you have only Windows domains on the network? Are all domains using Active Directory? Do you have multiple Active Directory forests? If you have multiple forests, detail the trust relationships. List the name of each domain, what it contains (users, resources, or both), and which servers are acting as domain controllers.

  • Identify localization factors. If your organization crosses international boundaries, language boundaries, or both, identify the localized versions of Windows Server in use and the locations in which they are used. This is critical when upgrading to Windows Server 2012 R2 because attempting an upgrade using a different localized version of Windows Server 2012 R2 might fail.

  • Assess software licenses. Evaluate licenses for servers and client access. This helps you select the most appropriate licensing program.

  • Identify file storage. Review the contents and configuration of existing file servers, identifying partitions and volumes on each system. Identify existing distributed file system (DFS) servers and the contents of DFS shares. Don’t forget shares used to store user data.

You can gather hardware and software inventories of computers that run the Windows operating system by using a tool such as SCCM. Review the types of clients that must be supported so that you can configure servers appropriately. This is also a good time to determine any client systems that must be upgraded (or replaced) to use Windows Server 2012 R2 functionality. You can also gather this information with scripts or a software management program.
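Much of this inventory can be scripted. The following PowerShell sketch pulls basic hardware and operating-system details from a list of servers over CIM; the server names are placeholders, and it assumes remote management (WinRM) is enabled on the target systems.

```powershell
# Sketch: collect basic hardware and OS details from a list of servers.
# SERVER01/SERVER02 are placeholders; substitute your own names.
$servers = 'SERVER01', 'SERVER02'

$inventory = foreach ($name in $servers) {
    $cs = Get-CimInstance -ComputerName $name -ClassName Win32_ComputerSystem
    $os = Get-CimInstance -ComputerName $name -ClassName Win32_OperatingSystem

    [PSCustomObject]@{
        Name    = $name
        CPUs    = $cs.NumberOfLogicalProcessors
        RAMGB   = [math]::Round($cs.TotalPhysicalMemory / 1GB, 1)
        OS      = $os.Caption
        Version = $os.Version    # the entire version number, e.g. 6.3.9600
    }
}

# Save the results for the planning documentation.
$inventory | Export-Csv -Path .\ServerInventory.csv -NoTypeInformation
```

A CSV like this is easy to merge with the software and licensing inventories gathered by SCCM or the MAP Toolkit.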

Identifying network services and applications

Look at your current network services, noting which services are running on which servers and the dependencies of these services. Do this for all domain controllers and member servers that you’ll be upgrading. You’ll use this information later to plan for server placement and service hosting on the upgraded network configuration. Some examples of services to document are as follows:

  • DNS services. You must assess your current DNS configuration. If you’re currently using a non-Microsoft DNS server, you want to plan DNS support carefully because Active Directory relies on Windows Server 2012 R2 DNS. If you’re using Microsoft DNS but are not using Active Directory–integrated zones, you might want to plan a move to Active Directory–integrated zones.

  • WINS services. You should assess the use of Network Basic Input/Output System (NetBIOS) by older applications and computers running early versions of the Windows operating system to determine whether NetBIOS support (such as Windows Internet Naming Service [WINS]) will be needed in the new network configuration. If you’ve removed older applications and computers running early versions of the Windows operating system from your organization, support for WINS is no longer needed. You can remove the WINS Server feature from your servers by using the Remove Roles And Features Wizard. When you remove this feature, the WINS Server service also is removed because it is no longer needed.

  • File shares. Standard file shares use Server Message Block (SMB), a client-server technology for distributing files over networks. Windows desktop operating systems have an SMB client, and Windows Server operating systems also have SMB server technology. Current Windows operating systems support SMB 3.0, which supports end-to-end encryption and eliminates the need for IPsec to protect SMB data in transit. If you’ve removed all computers running Windows XP and Windows Server 2003 from your organization, neither support for SMB 1.0 nor the Computer Browser service that SMB 1.0 used is needed. You can remove the SMB 1.0/CIFS File Sharing Support feature from your servers by using the Remove Roles And Features Wizard. When you remove this feature, the Computer Browser service also is removed because it is no longer needed.

  • Print services. List printers and the print server assigned to each one. Consider who is assigned to the various administrative tasks and whether the printer will be published in Active Directory. Also, determine whether all the print servers will be upgraded in place or whether some will be consolidated.

  • Network applications. Inventory your applications, creating a list of the applications that are currently on the network, including the version number (and post-release updates and such), which server hosts it, and how important each application is to your business. Use this information to determine whether upgrades or modifications are needed. Watch for software that is never used and thus need not be purchased or supported—every unneeded application you can remove represents savings of both time and money.

This list is only the beginning. Your network will undoubtedly have many more services that you must take into account.
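Where the preceding list describes using the Remove Roles And Features Wizard, the same removals can be done from the command line. A sketch, assuming the standard Windows Server 2012 R2 feature names WINS and FS-SMB1, and that you have already verified that no clients or applications depend on these features:

```powershell
# Sketch: remove legacy name-resolution and file-sharing support once
# nothing depends on it. Confirm the feature names in your environment
# with Get-WindowsFeature before removing anything.
Get-WindowsFeature -Name WINS, FS-SMB1 |
    Where-Object Installed |
    ForEach-Object { Uninstall-WindowsFeature -Name $_.Name }
```

As with the wizard, dependent services such as the Computer Browser service are removed along with the features they depend on.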

Caution

Make sure that you determine any dependencies in your network configuration. Discovering after the fact that a critical process relied on the server that you just decommissioned will not make your job any easier. You can find out which Microsoft and third-party applications are certified to be compatible with Windows Server 2012 R2 in the Windows Server Catalog (http://www.windowsservercatalog.com/).

Identifying security infrastructure

When you document your network infrastructure, you will need to review many aspects of your network security. In addition to security concerns that are specific to your network environment, the following factors should be addressed:

  • Consider exactly who has access to what and why. Identify network resources, security groups, and assignment of access permissions.

  • Determine which security protocols and services are in place. Are adequate virus protection, firewall protection, email filtering, and so on in place? Do any applications or services require older NTLM authentication? Have you implemented a public key infrastructure (PKI) on your network?

  • Examine auditing methods and identify the range of tracked access and objects.

  • Determine which staff members have access to the Internet and which sorts of access they have. Look at the business case for access that crosses the corporate firewall—does everyone who has Internet access actually need it, or has it been provided across the board because it was easier to provide blanket access than to provide access selectively? Such access might be simpler to implement, but when you look at Internet access from the security perspective, it presents many potential problems.

  • Consider inbound access, too; for example, can employees access their information from home? If so, examine the security that is in place for this type of access.

Important

Security is one area in which well-established methods matter—pay particular attention to established policies and procedures, both what has been officially documented and what hasn’t.

Depending on your existing network security mechanisms, the underlying security methods can change upon deployment of Windows Server 2012 R2. Windows Server 2003 is the minimum forest and domain functional level Windows Server 2012 R2 supports. When the forest and domain functional levels are raised to this level or higher from a lower level, Kerberos is the default authentication mechanism used between computer systems. This also means that although the Windows NT 4 security model (using NTLM authentication) continues to be supported, it is no longer the default authentication mechanism.
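Before planning a functional-level raise, you can confirm the current levels from PowerShell. A minimal sketch, assuming the ActiveDirectory module (included with the Remote Server Administration Tools) is available:

```powershell
# Sketch: check the current forest and domain functional levels.
# Windows Server 2003 is the minimum level Windows Server 2012 R2 supports.
Import-Module ActiveDirectory

(Get-ADForest).ForestMode    # e.g. Windows2003Forest or higher
(Get-ADDomain).DomainMode    # e.g. Windows2003Domain or higher
```

If either level is below Windows Server 2003, you must address that before the deployment, and Kerberos will become the default authentication mechanism once the levels are raised.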

Reviewing network administration

Examining the administrative methods currently in use on your network provides you with a lot of information about what you are doing right and identifies areas that could use some improvement. Use this information to tweak network procedures where needed to optimize the administration of the new environment.

Network administrative model

Each company has its own approach to network administration—some are very centralized, with the IT department making even the smallest changes, whereas others are partially managed by the business units, which control aspects such as user management. Administrative models fit into these categories:

  • Centralized. Administration of the entire network is handled by one group, perhaps in one location, although not necessarily. This provides a high degree of control at the cost of requiring IT staff for every change to the network, no matter how small.

  • Decentralized. This administrative model delegates more of the control of day-to-day operations to local administrators of some sort, often departmental. A central IT department might still manage certain aspects of network management in that a network with decentralized administration often has well-defined procedures controlling exactly how each administrative task is performed.

  • Hybrid. On many networks, a blend of these two methods is used. A centralized IT department performs many tasks (generally the more difficult, delicate operations and those with the broadest impact on the network) but delegates simpler tasks (such as user management) to departmental or group administrators.

Disaster recovery

The costs of downtime caused by service interruption or data loss can be substantial, especially in large enterprise networks. As part of your overall planning, determine whether a comprehensive IT disaster recovery plan is in place. If one is in place, this is the time to determine its scope and effectiveness and verify that it is being followed. If one isn’t in place, this is the time to create and implement one.

Document the various data sets being archived, schedules, backup validation routine, staff assignments, and so on. Make sure there are provisions for offsite data storage to protect your data in the case of a catastrophic event such as a fire, earthquake, or flood.

Examine the following:

  • Systems and servers. Are all critical servers backed up regularly? Are secondary servers, backup servers, or both available in case of system failure?

  • Enterprise data. Are regular backups made of core enterprise data stores such as databases, Active Directory, and the like?

  • User information. Where is user data stored? Is it routinely archived? Does the backup routine get all the information that is important to individuals, or is some of it stored on users’ personal machines and thus not archived?
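For servers protected with Windows Server Backup, a quick check of the most recent backup can be scripted. A sketch, assuming the Windows Server Backup feature and its PowerShell cmdlets are installed on the server being checked (third-party backup products need their own verification steps):

```powershell
# Sketch: verify that backups are actually completing on this server,
# not just scheduled. Requires the Windows Server Backup feature.
Import-Module WindowsServerBackup

$summary = Get-WBSummary
$summary.LastSuccessfulBackupTime    # when the last good backup finished
$summary.NextBackupTime              # when the next one is scheduled
```

Running a check like this across your critical servers turns “are all critical servers backed up regularly?” from an assumption into a verified fact.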

Caution

Whatever your current disaster recovery plan is, make sure it is being followed before you start making major changes to your network. Although moving to Windows Server 2012 R2 should not present any major problems on the network, it’s always better to have your backups and not need them than the other way around.

Network management tools

This is an excellent time to assess your current suite of network management tools. Pay particular attention to those that are unnecessary, incompatible, redundant, inefficient, or otherwise not terribly useful. You might find that some of the functionality of those tools is present natively in Windows Server 2012 R2. Assess the following aspects of your management tools:

  • Identify the tools currently in use, which tasks they perform, who uses them, and so on. Make note of administrative tasks that could be eased with additional tools.

  • Decide whether the tools you identified are actually used. A lot of software ends up sitting on a shelf (or on your hard disk drive) and never being used. Identifying which tools are truly needed and eliminating those that aren’t can save you money and simplify the learning curve for network administrators.

  • Disk-management and backup tools deserve special attention because of file-system changes in Windows Server 2012 R2. These tools are likely to require upgrading to function correctly under Windows Server 2012 R2.

Defining objectives and scope

A key aspect of planning any large-scale IT deployment of an operating system is determining the overall objectives for the deployment and the scope of users, computers, networks, and organization divisions that are affected. The fundamental question of scope is this: What can you realistically expect to accomplish in the given time within existing project constraints such as staffing and budget?

Some of the objectives that you identified in the early stages of the project are likely to change as constraints become more apparent and new needs and requirements emerge. To start, you must identify who will be affected—which organizational subdivisions and which personnel—and who will be doing what. These are questions that map to the business goals that must be accomplished.

You also must identify the systems that will be affected—the WANs, local area networks (LANs), subnets, servers, and client systems. In addition, you must determine the software that will be changed—the server software, client software, and applications.

Specifying organizational objectives

Many goals of the various business units and IT are only loosely related, whereas others are universal—everyone wants security, for example. Take advantage of where goals converge to engage others in the project. If people can see that their needs are met, they are more likely to support others’ goals and the project in general.

You have business objectives at this point; now they must be prioritized. You should make lists of various critical aspects of projects and dependencies within the project plan as part of the process of winnowing the big picture into a set of realistic objectives. Determine what you can reasonably accomplish within the constraints of the current project. Also, decide what is outside the practical scope of this Windows Server 2012 R2 deployment but is still important to implement later.

The objectives that are directly related to the IT department will probably be clearer—and more numerous—after completing the analysis of the current network. These objectives should be organized to conform to existing change-management procedures within your enterprise network.

When setting goals, be careful not to promise too much. Although it’s tempting and sometimes easier in the short run to try to do everything, you can’t. It’s unlikely that you will implement every single item on every person’s wish list during the first stage of this project, if at all. Knowing what you can’t do is as important as knowing what you can.

Setting the schedule

You should create a project schedule, laying out the timeline, tasks, and staff assignments. Including projected completion dates for milestones helps you keep on top of significant portions of the project and ensures that dependencies are managed.

You must be realistic when considering timelines—not just a little bit realistic, but really realistic. This is, after all, your time you are allocating. Estimate too short a time, and you are likely to spend evenings and weekends at the office with some of your closest coworkers.

A number of tasks will be repeated many times during the rollout of Windows Server 2012 R2, which should make estimating the time needed for some things fairly simple: a 1-hour process repeated 25 times takes 25 hours (unless it’s automated). If, for example, you are building 25 new servers in-house, determine the actual time needed to build one and then do the math.

When you have a rough idea of the time required, do the following:

  • Assign staff members to the various tasks to make sure you have adequate staff assistance to complete the project.

  • Add some time to your estimates—IT projects always seem to take more time than you thought they would. This is the only buffer you are likely to get, so make sure you build in some extra time from the start.

  • As much as possible, verify how long individual tasks take. You might be surprised at how much time you spend doing a seemingly simple task, and if your initial estimate is significantly off, you could end up running significantly short on time.

  • Develop a schedule that clearly shows who is doing what and when they are doing it.

  • Get drop-dead dates, which should be later than the initial target date.

  • Post the schedule in a place where the team, and perhaps other staff, can view it. Keep this schedule updated with milestones reached, changes to deliverable dates, and so on.

Note

You might want to use a project-management tool, such as Microsoft Project, to develop the schedule. This sort of tool is especially useful when managing a project with a number of staff members working on a set of interdependent tasks.

Shaping the budget

Determining the budget is a process constrained by many factors, including but not limited to IT-related costs for hardware and licensing. In addition to fixed IT costs, you also must consider the project scope and the non-IT costs that can come from the requirements of other departments within the organization. Thus, to come up with the budget, you need information and assistance from all departments within the organization, and you must consider all aspects of the project.

Many projects end up costing more than is initially budgeted. Sometimes this is predictable and preventable with proper research and a bit of attention to ongoing expenses. As with timelines, pad your estimates a little bit to allow for the unexpected. Even so, it helps if you can find out how much of a buffer you have for any cost overruns.

In planning the budget, also keep in mind fiscal periods. If your project is crossing budget periods, find out whether next year’s budget for the project is allocated and approved.

Allowing for contingencies

No matter how carefully you plan any project, it is unlikely that everything will go exactly as planned. Accordingly, you should plan for contingencies. By having a number of possible responses to unforeseen events ready, you can manage the vagaries of the project better.

Start with perhaps the most common issue encountered during projects: problems getting the assigned people to do the work. This all-too-common problem can derail any project or at least cause the project manager a great deal of stress. After all, the ultimate success of any project depends on people doing their assigned tasks. Many of these people are already stretched pretty thin, however, and you might encounter times when they aren’t quite getting everything done. Your plan should include what to do in this circumstance—is the person’s manager brought in, or is a backup person automatically assigned to complete the job?

Another possibility to plan for is a change in the feature set being implemented. If such shifts occur, you must decide how to adjust to compensate for the reallocated time and money required. To make this easier, identify and prioritize the following:

  • Objectives that could slip off this project and be placed in a later one if the need arises

  • Objectives that you want to slip into the project if the opportunity presents itself

Items on both of these lists should be relatively small and independent of other processes and services. Avoid incurring additional expenses; you are more likely to be given extra time than extra funding during your deployment.

In general, ask yourself what could happen to cause significant problems along the way. Then, more important, consider what you would do in response. By thinking through potential problems ahead of time and planning what you might do in response, you can be prepared for many of the inevitable bumps along the way.

Finalizing project scope

You have goals, know the timeline, and have a budget pinned down—now it’s time to get serious. Starting with the highest-priority aspects of the project, estimate the time and budget needed to complete each portion. Work your way through the planned scope, assessing the time and costs associated with each portion of the project. This helps ensure that the time and budget are sufficient to complete the project successfully as designed.

As you finalize the project plan, each team member should review the final project scope, noting any concerns or questions they have about the proposal. Encourage the team to look for weak spots, unmet dependencies, and other places where the plan might break down. Although it is tempting to ignore potential problems that are noted this late in the game, you do so at your own peril. Avoiding known risks is much easier than recovering from unforeseen ones.

Defining the new network environment

When you have determined the overall scope of your Windows deployment project and the associated network changes, you must develop the technical specifications for the project, detailing server configuration, changes to the network infrastructure, and so on. As much as possible, describe the process of transitioning to the new configuration. Care should be taken while developing this document because it will serve as the road map for the actual transition, much of which is likely to be done by staff members who were not in the planning meetings.

In defining the new (updated) network environment, you must review the current and projected infrastructure for your network. Analyze the domains in use on your network and evaluate the implications for security operations and network performance.

If you are implementing Active Directory for the first time, designing the domain architecture will probably take a substantial amount of work. Businesses already using Microsoft Windows Server 2008 or Windows Server 2008 R2 to manage their network, however, will probably not have to change much, if they change anything at all. Also, consider whether you will be changing the number of domains you currently have. Will you be getting rid of any domains through consolidation?

Impact on network operations

You also must assess the impact of the projected changes on your current network operations. Consider issues such as the following:

  • Will network traffic change in ways that require modifications to the network infrastructure? Assess additional loads on each network segment and across WAN links.

  • Do you need to make changes to network naming or addressing schemes? Are new DNS namespaces needed and, if so, have the DNS names been registered?

  • Will you use read-only domain controllers (RODCs) in remote offices? If so, will you also use read-only DNS (RO DNS) zones?

  • Can you phase out NetBIOS and WINS reliance completely? If so, will you use Link Local Multicast Name Resolution (LLMNR) and DNS global names?

Identifying security requirements

This is a good time to review seriously the security measures implemented on your network. Scrutinize the security devices, services, protocols, and administrative procedures to ensure that they are adequate, appropriate, well documented, and adhered to rigorously.

Security in Windows Server 2012 RTM and Windows Server 2012 R2 is not the same as in early versions of Windows Server operating systems—the security settings for the default (new) installation of Windows Server 2012 RTM and Windows Server 2012 R2 are much tighter than in those early versions. This might mean that services that were functioning perfectly prior to an upgrade don’t work the same way afterward. Some services that were previously started by default are now disabled when first installed.

Assign staff members to be responsible for each aspect of your security plan and have them document the completion of tasks. Among the tasks that should be assigned are the following:

  • Applying regular updates of antivirus software. Antivirus software is only as good as its virus definition files, so make sure yours are current. This means checking the vendor site every day, even on weekends if possible. Many antivirus packages can perform automatic updates, but you should verify that the updates are occurring.

  • Reviewing security alerts. Someone should read the various sites that post security alerts on a regular basis, receive their newsletters and alerts, or do both. The sites should include Microsoft (http://technet.microsoft.com/security/), vendors of your other security software (for example, http://www.symantec.com/), network device vendors (for example, http://www.cisco.com/), and at least one nonvendor site (such as http://www.SANS.org/).

  • Checking for system software updates. IT staff should consider implementing the Windows Server Update Services (WSUS) to help keep up to date on security updates, service packs, and other critical updates for both servers and clients. Administrators can use WSUS to scan and download updates automatically to a centralized server and then configure Group Policy so that client computers get automatic updates from WSUS.

  • Checking for hardware firmware updates. It is important for the various devices on the network, especially security-related ones such as firewalls, to have up-to-date firmware.
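If you decide to deploy WSUS, the role can be added from the command line as well as from Server Manager. A sketch, assuming Windows Server 2012 R2 and a placeholder content path that you would replace with your own:

```powershell
# Sketch: install the WSUS role so updates can be approved and
# distributed centrally, then run post-install configuration.
# D:\WSUS is a placeholder for your update-content location.
Install-WindowsFeature -Name UpdateServices -IncludeManagementTools

& "$env:ProgramFiles\Update Services\Tools\wsusutil.exe" postinstall CONTENT_DIR=D:\WSUS
```

After the role is in place, Group Policy can point client computers at the WSUS server for automatic updates, as described above.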

Changing the administrative approach

The rollout of Windows Server 2012 R2 is an excellent time to fine-tune your administration methods and to deal with any issues introduced by growth and change in the project scope. Well-designed administrative methods with clearly documented procedures can make a huge difference in streamlining both the initial rollout and ongoing operations.

Active Directory provides the framework for flexible, secure network management so you can implement the administration method that works best in your environment. There are mechanisms that support both centralized and distributed administration; Group Policy options offer centralized control, and selected administrative capabilities can be securely delegated at a highly granular level. The combination of these methods allows administration to be handled by using the method that works best for each business in its unique circumstances.

Important

Make sure that all administrative tasks and processes are clearly defined and that each task has a person assigned to it.

Some administrative changes will be required because of the way Windows Server 2012 R2 works. You might find that existing administration tools no longer work or are no longer needed, so be sure to question the following:

  • Whether your existing tools work under the new operating system. A number of older tools are incompatible with Windows Server 2012 R2—management utilities must be Active Directory–aware, work with NTFS, and so on.

  • Whether current tools will be needed after you move to Windows Server 2012 R2. If a utility such as PKZIP, for example, is in use now, it might not be required for operations under Windows Server 2012 R2, which has incorporated the functionality of ZIP into the operating system. Eliminating unneeded tools could well be one goal of the Windows Server 2012 R2 deployment project, and it will have a definite payoff for the IT department in terms of simplified management, lower costs, and so forth.

Select and implement standards

You will also want to select and implement standards. If your IT department has not implemented standards for naming and administration procedures, this is a good time to do so. You’ll be gathering information about your current configuration, which will show you the places where standardization is in place and the places where it would be useful.

Make sure that any standards you adopt allow for likely future growth and changes in the business. Using an individual’s first name and last initial is a very simple scheme for creating user names and works well in a small business. Small businesses, however, don’t necessarily stay small forever—even Microsoft initially used this naming scheme, although it has been modified greatly over the years.
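A sketch of this naming scheme, with a simple fallback when two people's names collide, might look like the following. The collision-handling rule (append more letters of the last name) is an illustrative assumption, not a Microsoft convention:

```python
def make_username(first, last, existing):
    """Build a user name from first name + last initial, the simple
    small-business scheme described above. If that name is taken,
    fall back to appending more letters of the last name."""
    candidate = (first + last[0]).lower()
    i = 1
    while candidate in existing and i < len(last):
        i += 1
        candidate = (first + last[:i]).lower()
    existing.add(candidate)
    return candidate

taken = set()
print(make_username("Maria", "Nelson", taken))   # marian
print(make_username("Mark", "Nelson", taken))    # markn
print(make_username("Maria", "Nichols", taken))  # mariani (marian is taken)
```

Even this toy version shows why a growing organization eventually needs a richer standard: the fallback names quickly become inconsistent and hard to predict.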

You can also benefit from standardization of system hardware and software configuration. Supporting 100 servers (or clients) is much easier if they share a common set of hardware, are similarly configured, and have largely the same software installed. This is possible, of course, only to a limited degree and depends on the services and applications that are required from each system. Still, it’s worth considering.

When standardizing server hardware, keep in mind that the minimum functional hardware differs for various types of servers—that is, application servers have very different requirements than file servers. Also, consider the impact of the IT department's decisions on other parts of the company and on individual employees. There are some obvious things to watch for, such as unnecessarily exposing anyone's personal data—a surprising number of businesses and agencies still do this.

Change management

Formalized change-management processes are very useful, especially for large organizations and those with distributed administrative models. By creating structured change-control processes and implementing appropriate auditing, you can control the ongoing management of critical IT processes. This makes it easier to manage the network and reduces the opportunity for error.

Although this is particularly important when dealing with big-picture issues such as domain creation or Group Policy implementation, some organizations define change-control mechanisms for every possible change, no matter how small. You have to determine for which IT processes you must define change-management processes and find a balance between managing changes effectively and overregulating network management.

Even if you’re not planning to implement a formal change-control process, make sure that the information about the initial configuration is collected in one spot. By doing this, and by collecting brief notes about any changes that are made, you will at least have data about the configuration and the changes that have been made to it. This will also help later, if you decide to put more stringent change-control mechanisms in place, by providing at least rudimentary documentation of the current network state.

Final considerations for planning and deployment

If you are doing a new installation—perhaps for a new business or a new location of an existing one—you have a substantial amount of additional planning to do. This extends well beyond your Windows Server 2012 R2 systems to additional computers (clients, for a start), devices, services, applications, and so on.

The details of such a project are far beyond the scope of this book; indeed, entire books have been written on the topic. If you have to implement a network from the ground up, you might want to pick up one of those books.

You must plan the entire network, including areas such as the following:

  • Infrastructure architecture (including network topology, addressing, DNS, and so on)

  • Active Directory design

  • Servers and services

  • Administration methods

  • Network applications

  • Clients

  • Client applications

  • Client devices (printers, scanners, and the like)

This is a considerable undertaking and requires educated, dedicated staff, adequate time, and other resources.

Thinking about server roles and Active Directory

When planning for server usage, consider the workload of each server: which services it is providing, the expected user load, and so on. In small network environments, it is common for a single server to act as a domain controller and provide DNS and Dynamic Host Configuration Protocol (DHCP) services and possibly even additional services. In larger network environments, one or more standalone servers might provide each of these services rather than aggregating them on a single system.

Active Directory is an extremely complicated and critical portion of Windows Server 2012 R2, and you should plan for it with appropriate care. The following section discusses, in abbreviated form, some high-level aspects of server usage and Active Directory that you must consider. The section is meant to offer a perspective on how various server roles, including domain controllers, fit in the overall planning picture, not to explain how to plan for a new Active Directory installation.

Planning for server usage

Windows Server 2012 R2 employs a number of server roles, each of which corresponds to one or more services. Your plan should detail which roles (and additional services) are needed and the number and placement of servers and should define the configuration for each service. When planning server usage, be sure to keep the expected client load in mind and account for remote sites that might require additional servers to support local operations.

Key Windows Server 2012 R2 server roles are as follows:

  • Domain controller. Active Directory domain controllers are perhaps the most important type of network server on a Windows network. Domain controllers are also one of the most intensively used servers on a Windows network, so it is important to assess the operational requirements and server performance realistically for each one. Remember to take into account any secondary Active Directory–related roles the server will be performing (such as global catalog, operations master, and so on). Keep the following questions in mind:

    • How many domain controllers are required, and which ones will fulfill which roles?

    • Which domains must be present at which sites?

    • Where should global catalogs be placed?

    • What remote offices (if any) will use read-only domain controllers (RODCs)?

  • DNS server. DNS is an integral part of a Windows network, with many important features (such as Active Directory) relying on it. Accordingly, DNS servers are now a required element of your suite of network services. Plan for enough DNS servers to service client requests, with adequate redundancy for fault tolerance and performance, and plan to have them distributed throughout your network to be available to all clients. Factor in remote sites with slow links to the main corporate network and those that might be only intermittently connected by dial-up. Be sure to do the following:

    • Define both internal and external namespaces.

    • Plan the name-resolution path (forwarders and so on).

    • Determine the storage of DNS information (zone files, Active Directory–integrated application partitions).

    • Determine whether you need read-only DNS services at remote offices with RODCs.

    • Determine whether you need DNS Security Extensions (DNSSEC).

    Note

    Microsoft DNS is the recommended method of providing domain name services on a network with Active Directory deployed, although some other DNS servers provide the required functionality. In practice, however, the intertwining of Active Directory and DNS, along with the complexity of the DNS records Active Directory uses, has meant that Microsoft DNS is the one most often used with Active Directory.

    Note

    DNS information can be stored in traditional zone files, Active Directory–integrated zones, or application partitions. An application partition contains a subset of directory information a single application uses. In the case of DNS, this partition is replicated only to domain controllers that are also providing DNS services, minimizing network traffic for DNS replication. There is one application partition for the forest (ForestDnsZones) and another for each domain (DomainDnsZones).

  • DHCP server. DHCP simplifies the management of the IP address pool both server and client systems use. A number of operational factors regarding the use of DHCP should be considered:

    • Determine whether DNS servers will act as DHCP servers also and, if so, whether all of them or only a subset of them will be used in this way.

    • Define server configuration factors such as DHCP scopes and the assignment of scopes to servers in addition to client settings such as the DHCP lease length.

    • Determine whether failover scopes are needed to increase fault tolerance and provide redundancy.

  • WINS server. First, determine whether you still need WINS on your network. If you have legacy applications in your network environment, WINS might be required to translate NetBIOS names to IP addresses. If so, consider the following questions:

    • Which clients need to access the WINS servers?

    • What WINS replication configuration is required?

  • Network Policy And Access Services. Network Policy And Access Services provides integrated protection, routing, and remote-access services that facilitate secure, protected access by remote users. Consider the following questions:

    • Do you need protection policies?

    • Do you need to provide routing between networks?

    • Do you want to replace existing routers?

    • Do you have external users who need access to the internal network?

  • Hyper-V server. Hyper-V provides infrastructure for virtualizing applications and workloads. Use Hyper-V to:

    • Consolidate servers and workloads onto fewer, more powerful servers and reduce requirements for power and space.

    • Implement a centralized desktop strategy by combining Hyper-V technology with Remote Desktop Virtualization Host technology.

    • Create a private cloud for shared resources that you can adjust as demand changes.

  • Application server. A Windows Server 2012 R2 application server hosts distributed applications built using ASP.NET, Enterprise Services, and .NET Framework.

  • File and storage services. The File And Storage Services role provides essential services for managing files and the way they are made available and replicated on the network. A number of server roles require some type of file service.

  • Print and document services. The Print And Document Services role manages printer operations on the network. Windows Server 2012 R2 enables publishing printers in Active Directory and connecting to network printers by using a Uniform Resource Locator (URL), and it provides enhanced printer control through Group Policy.

  • Remote desktop services. The Remote Desktop Services role supports virtual desktops, enabling a single server to pool virtual desktops centrally and, in this way, host network access for many users. A client with a web browser, a Windows thin client, or a Remote Desktop client can access the Remote Desktop server to gain access to network resources.

  • Web server. Web servers host websites and web-based applications. Websites hosted on a web server can have both static content and dynamic content. You can build web applications hosted on a web server by using ASP.NET and .NET Framework.
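As a trivial example of validating the DNS planning described above, a basic client-side resolution check can be scripted. This sketch only confirms that the client's configured resolver answers; real verification would also test each DNS server directly for redundancy and response time:

```python
import socket

def can_resolve(name):
    """Return True if name resolution succeeds for the given host name.
    A crude client-side check, not a substitute for testing each
    DNS server's availability and latency."""
    try:
        return len(socket.getaddrinfo(name, None)) > 0
    except socket.gaierror:
        return False

# "localhost" should resolve anywhere; internal names are site-specific.
print(can_resolve("localhost"))
```

Running a check like this from each site, against each planned DNS server, helps confirm that the name-resolution path (forwarders and so on) behaves as designed before clients depend on it.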

Designing the Active Directory namespace

The Active Directory tree is based on a DNS domain structure, which must be implemented prior to, or as part of, installing the first Active Directory server in the forest. Each domain in the Active Directory tree is both a DNS and a Windows domain, with the associated security and administrative functionality. DNS is thoroughly integrated with Active Directory, providing location services (also called name resolution services) for domains, servers, sites, and services and constraining the structure of the Active Directory tree. It is wise to keep Active Directory in mind as you are designing the DNS namespace, and vice versa, because they are inextricably linked.

Note

Active Directory trees exist within a forest, which is a collection of one or more domain trees. The first domain installed in an Active Directory forest functions as the forest root.

The interdependence of Active Directory and DNS brings some special factors into play. For example, if your organization has outward-facing DNS servers, you must decide whether you will be using your external DNS name or another DNS domain name for Active Directory. Many organizations choose not to use their external DNS name for Active Directory unless they want to expose the directory to the Internet for a business reason, such as an Internet service provider (ISP) that uses Active Directory logon servers.

Within a domain, another sort of hierarchy exists in the form of container objects called organizational units (OUs), which are used to organize and manage users, network resources, and security. An OU can contain related users, groups, or computers and other OUs.

Important

Designing the Active Directory namespace requires the participation of multiple levels of business and IT management, so be sure to provide adequate time for a comprehensive review and sign-off on domain architecture.

Managing domain trusts

Domain trusts allow automatic authentication and access to resources across domains. Active Directory automatically configures trust relationships so that each domain in an Active Directory forest trusts every other domain within that forest.

Active Directory domains are linked by a series of such transitive trust relationships between all domains in a domain tree and between all domain trees in the forest. By using Windows Server 2012 R2, you can also configure transitive trust relationships between forests.
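The effect of transitive trusts can be sketched as reachability in a graph of trust relationships. The domain names below are hypothetical, and Active Directory's actual trust-path evaluation is more involved than this; the sketch only shows why every domain in a forest ends up trusting every other:

```python
from collections import deque

def trusted_domains(start, trusts):
    """Compute all domains reachable from `start` through transitive
    two-way trusts, modeled as an undirected graph."""
    seen = {start}
    queue = deque([start])
    while queue:
        domain = queue.popleft()
        for other in trusts.get(domain, ()):
            if other not in seen:
                seen.add(other)
                queue.append(other)
    return seen - {start}

# Hypothetical forest: a root domain with two child domains. Each child
# has a direct trust only with the root.
trusts = {
    "example.com": ["corp.example.com", "dev.example.com"],
    "corp.example.com": ["example.com"],
    "dev.example.com": ["example.com"],
}
print(sorted(trusted_domains("corp.example.com", trusts)))
```

Although corp.example.com has a direct trust only with the root, transitivity means it also trusts dev.example.com through that root.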

Identifying the domain and forest functional level

Active Directory now has multiple domain and forest functional levels, each constraining the types of domain controllers that can be in use and the available feature set.

The domain functional levels are as follows:

  • Windows Server 2003 mode. When the domain is operating in Windows Server 2003 mode, the directory supports domain controllers running Windows Server 2003 and later. A domain operating in Windows Server 2003 mode can use universal groups, group nesting, group type conversion, easy domain controller renaming, update logon time stamps, and Kerberos KDC key version numbers. This functional level also supports passwords for InetOrgPerson users. (InetOrgPerson is an LDAP user class, defined in RFC 2798, provided for interoperability with other directory services.)

  • Windows Server 2008 mode. When the domain is operating in Windows Server 2008 mode, the directory supports Windows Server 2008 and later domain controllers. Windows Server 2003 domain controllers are no longer supported. A domain operating in Windows Server 2008 mode can use additional Active Directory features, including the DFS replication service for enhanced intersite and intrasite replication and Advanced Encryption Services (AES) 128-bit or AES 256-bit encryption for the Kerberos protocol. This level also supports the display of the last interactive logon details for users and fine-grained password policies for applying separate password and account lockout policies to users and groups.

  • Windows Server 2008 R2 mode. When the domain is operating in Windows Server 2008 R2 mode, the directory supports Windows Server 2008 R2 and later domain controllers. Windows Server 2003 and Windows Server 2008 domain controllers are no longer supported. A domain operating in Windows Server 2008 R2 mode can use Active Directory Recycle Bin, managed service accounts, Authentication Mechanism Assurance, and other important Active Directory enhancements.

  • Windows Server 2012 mode. When the domain is operating in Windows Server 2012 mode, the directory supports Windows Server 2012 and Windows Server 2012 R2 domain controllers. Windows Server 2003, Windows Server 2008, and Windows Server 2008 R2 domain controllers are no longer supported. Active Directory schema for Windows Server 2012 includes many enhancements, but only the Kerberos with Armoring feature requires this mode.

  • Windows Server 2012 R2 mode. When the domain is operating in Windows Server 2012 R2 mode, the directory supports only Windows Server 2012 R2 domain controllers. Domain controllers running earlier versions of Windows Server are no longer supported. Active Directory schema for Windows Server 2012 R2 includes incremental enhancements.

The forest functional levels are as follows:

  • Windows Server 2003. Supports domain controllers running Windows Server 2003 and later

  • Windows Server 2008. Supports domain controllers running Windows Server 2008 and later

  • Windows Server 2008 R2. Supports domain controllers running Windows Server 2008 R2 and later

  • Windows Server 2012. Supports domain controllers running Windows Server 2012 and Windows Server 2012 R2

  • Windows Server 2012 R2. Supports domain controllers running Windows Server 2012 R2

When a forest is operating at the Windows Server 2003 or higher functional level, key Active Directory features, including the following, are enabled:

  • Replication enhancements. Each changed value of a multivalued attribute is now replicated separately—eliminating the possibility for data conflict and reducing replication traffic. Additional changes include enhanced global catalog replication and application partitions (which segregate data and, thus, the replication of that data).

  • Schema. Schema objects can be deactivated, and dynamic auxiliary classes are supported.

  • Management. Forest trusts allow multiple forests to share resources easily. Active Directory domains can be renamed; thus, the Active Directory tree can be reorganized.

  • User management. Last logon time is now tracked, and enhancements to InetOrgPerson password handling are enabled.

However, to take advantage of the latest Active Directory features, your forests must operate at the Windows Server 2008 R2 or higher functional level. Selecting your domain and forest functional levels is generally straightforward. Ultimately, the decision regarding the domain and forest functional levels at which to operate mostly comes down to choosing the one that supports the domain controllers you have in place now and expect to have in the future. In most circumstances, you will want to operate at the highest possible level because it enables more functionality. Also, keep in mind that all changes to the functional level are one-way and cannot be reversed.
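The selection rule described above, choosing the highest level that every domain controller supports, reduces to finding the oldest domain controller in the domain. A sketch (level names abbreviated; this is not an Active Directory API):

```python
# Ordered from oldest to newest; later levels require newer domain controllers.
LEVELS = ["2003", "2008", "2008 R2", "2012", "2012 R2"]

def highest_functional_level(dc_versions):
    """Return the highest functional level every domain controller supports:
    the level matching the oldest DC in the domain."""
    return min(dc_versions, key=LEVELS.index)

# A mixed domain is held back by its oldest domain controller.
print(highest_functional_level(["2012 R2", "2008 R2", "2012"]))  # 2008 R2
print(highest_functional_level(["2012 R2", "2012 R2"]))          # 2012 R2
```

The sketch also shows why upgrading or retiring the oldest domain controllers is the prerequisite for raising the functional level and unlocking newer features.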

Defining Active Directory server roles

In addition to serving as domain controllers, a number of domain controllers fulfill special roles within Active Directory. Some of these roles provide a service to the entire forest, and others are specific to a domain or site. The Active Directory setup routine assigns and configures these roles, although you can change them later.

The Active Directory server roles are as follows:

  • Operations masters. A number of Active Directory operations must be carefully controlled to maintain the integrity of the directory structure and data. A specific domain controller serves as the operations master for each of these functions. That server is the only one that can perform certain operations related to that area. For example, you can make schema changes only on the domain controller serving as the schema master; if that server is unavailable, no changes can be made to the schema. There are two categories of operations masters:

    • Forest-level operations masters. The schema master manages the schema and enforces schema consistency throughout the directory.

    The domain-naming master controls domain creation and deletion, guaranteeing that each domain is unique within the forest.

    • Domain-level operations masters. The RID master manages the pool of relative identifiers (RIDs). (A RID is a numeric string used to construct security identifiers [SIDs] for security principals.)

    The infrastructure master handles user-to-group mappings, changes in group membership, and replication of those changes to other domain controllers.

    The PDC emulator is responsible for processing password changes and replicating password changes to other domain controllers. The PDC emulator must be available to reset and verify external trusts.

  • Global catalogs. A global catalog server provides a quick index of Active Directory objects, which a variety of network clients and processes use to locate directory objects. Global catalog servers can be heavily used, yet they must be highly available to clients, especially for user logons because the global catalog provides membership information for universal groups. Accordingly, each site in the network should have at least one global catalog server, or you should have a Windows Server 2003 or later domain controller with universal group caching enabled.

  • Bridgehead servers. Bridgehead servers manage intersite replication over low-bandwidth WAN links. Each site replicating with other sites usually has at least one bridgehead server, although a single site can have more than one if that’s required for performance reasons.

Note

Active Directory replication depends on the concept of sites, which are defined as collections of subnets with good interconnectivity. Replication differs depending on whether it occurs within a site or between sites. Intrasite replication occurs automatically every 15 seconds; intersite replication is scheduled and usually quite a bit slower.
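To illustrate why RID allocation must be centrally coordinated by the RID master, the following sketch composes SIDs from a domain SID and a pool of RIDs. The domain SID and pool values here are illustrative, and real RID pools are allocated in larger blocks:

```python
def make_sid(domain_sid, rid):
    """Compose a security identifier from the domain SID and a RID.
    Two security principals issued the same RID in one domain would
    share a SID, which is why RID issuance is centrally controlled."""
    return f"{domain_sid}-{rid}"

def allocate_rids(next_rid, count):
    """Hand out a contiguous pool of RIDs, as the RID master does for each
    domain controller. Returns the pool and the next unallocated RID."""
    pool = list(range(next_rid, next_rid + count))
    return pool, next_rid + count

domain_sid = "S-1-5-21-1004336348-1177238915-682003330"  # example domain SID
pool, next_rid = allocate_rids(1100, 3)
print([make_sid(domain_sid, rid) for rid in pool])
```

Because each domain controller consumes RIDs from its own pre-allocated pool, it can create security principals even when the RID master is temporarily unreachable; only pool replenishment requires the operations master.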

Planning for availability, scalability, and manageability

The enterprise depends on highly available, scalable, and manageable systems. High availability refers to the ability of a system to continue providing service despite hardware, application, or service outages. High scalability refers to the ability of the system to expand processor and memory capacity as business needs demand. High manageability refers to the ability of the system to be managed locally and remotely and the ease with which components, services, and applications can be administered.

Planning for high availability, high scalability, and high manageability is critical to the success of using Windows Server 2012 R2 in the enterprise, and you need a solid understanding of the recommendations and operating principles for deploying and maintaining high-availability servers before you deploy servers running these editions. You should also understand the types of hardware, software, and support facilities needed for enterprise computing. These concepts are all covered in this chapter.

Note

The discussion that follows focuses on achieving high availability, high scalability, and high manageability in the enterprise. Smaller organizations or business units can adopt similar approaches to meet business objectives, but they should determine the appropriate scope with budgets and available resources in mind.

Planning for software needs

Software should be chosen for its ability to support the high-availability needs of the business system. Not all software is compatible with high-availability solutions such as clustering or load balancing. Not all software must be compatible, either. Instead of making an arbitrary decision, you should let the uptime needs of the application determine the level of availability required.

An availability goal of 99 percent uptime is usual for most noncritical business systems. If an application must have 99 percent uptime, it might not need to support clustering or load balancing. To achieve 99 percent uptime, the application can have about 88 hours of downtime in an entire year or, put another way, 100 minutes of downtime a week.

An availability goal of 99.9 percent uptime is typical for highly available business systems. If an application must have 99.9 percent uptime, it must support some type of high-availability solution such as clustering or load balancing. To achieve 99.9 percent uptime, the application can have less than 9 hours of downtime in an entire year or, put another way, about 10 minutes of downtime a week.
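The downtime figures quoted above follow directly from the uptime percentages; the numbers in the text are rounded. A quick check:

```python
def downtime_budget(uptime_pct):
    """Convert an uptime goal into allowable downtime per year and per week."""
    down_fraction = 1 - uptime_pct / 100
    hours_per_year = down_fraction * 365 * 24      # 8,760 hours in a year
    minutes_per_week = down_fraction * 7 * 24 * 60 # 10,080 minutes in a week
    return hours_per_year, minutes_per_week

for goal in (99.0, 99.9):
    hours, minutes = downtime_budget(goal)
    print(f"{goal}% uptime: about {hours:.1f} hours/year, {minutes:.1f} minutes/week")
```

Running the numbers this way for any proposed uptime goal makes it easy to see whether the resulting downtime budget is realistic for planned maintenance windows, let alone unplanned outages.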

To evaluate the real-world environment prior to deployment, you should perform integration testing on applications that will be used together. The purpose of integration testing is to ensure that disparate applications interact as expected and to uncover problem areas if they don’t. During integration testing, testers should look at system performance, overall system utilization, and compatibility. Testing should be repeated prior to releasing system or application changes to a production environment.

You should standardize the software components needed to provide system services. The goal of standardization is to set guidelines for software components and technologies that will be used in the enterprise. Standardization accomplishes the following:

  • Reduces the total cost of maintaining and updating software

  • Reduces the amount of integration and compatibility testing needed for upgrades

  • Improves recovery time because problems are easier to troubleshoot

  • Reduces the amount of training needed for administration support

Software standardization isn’t meant to limit the organization to a single specification. Over the life of a data center, new application versions, software components, and technologies will be introduced, and the organization can implement new standards and specifications as necessary. The key to success lies in ensuring that there is a standard process for deploying software updates and new technologies. The standard process must include the following:

  • Software compatibility and integration testing

  • Software support training for personnel

  • Predeployment planning

  • Step-by-step software deployment checklists

  • Postdeployment monitoring and maintenance

The following checklist summarizes the recommendations for designing and planning software for high availability:

  • Choose software that meets the availability needs of the solution or service.

  • Choose software that supports online backups.

  • Test software for compatibility with other applications.

  • Test software integration with other applications.

  • Repeat testing prior to releasing updates.

  • Create and enforce software standards.

  • Define a standard process for deploying software updates.

Planning for hardware needs

Sound hardware strategy helps increase system availability while reducing total cost of ownership and improving recovery times. Windows Server 2012 R2 is designed and tested for use with high-performance hardware, applications, and services. To ensure that hardware components are compatible, choose only components that are certified as compatible, such as those that are listed as certified for Windows Server 2012 R2 in the Windows Server Catalog (http://www.windowsservercatalog.com/).

Note

All certified components undergo rigorous testing, with a retest for firmware revisions, service pack updates, and other minor revisions. After a component is certified through testing, hardware vendors must maintain the configuration through updates and resubmit the component for testing and certification. The program requirements and the tight coordination with vendors greatly improve the reliability and availability of Windows Server 2012 R2. All hardware certified for Windows Server 2012 R2 also is fully supported in Hyper-V environments.

You should standardize on a hardware platform, and this platform should have standardized components. Standardization accomplishes the following:

  • Reduces the amount of training needed for support

  • Reduces the amount of testing needed for upgrades

  • Requires fewer spare parts because subcomponents are the same

  • Improves recovery time because problems are easier to troubleshoot

Standardization isn’t meant to restrict a data center to a single type of server. In an n-tier environment, standardization typically means choosing a standard server configuration for the front-end servers, a standard server configuration for middle-tier business logic, and a standard server configuration for back-end data services. The reason for this is that web servers, application servers, and database servers all have different resource needs. For example, although a web server might need to run on a dual-processor system with limited hardware redundant array of independent disks (RAID) control and 4 gigabytes (GBs) of RAM, a database server might need to run on an eight-way system with dual-channel RAID control and 64 GBs of RAM.

Standardization isn’t meant to limit the organization to a single hardware specification, either. Over the life of a data center, new equipment will be introduced and old equipment likely will become unavailable. To keep up with the pace of change, new standards and specifications should be implemented when necessary. These standards and specifications, like the previous standards and specifications, should be published and made available to you.

Redundancy and fault tolerance must be built into the hardware design at all levels to improve availability. You can improve hardware redundancy by using the following components:

  • Clusters. Clusters provide failover support for critical applications and services.

  • Standby systems. Standby systems provide backup systems in case of total failure of a primary system.

  • Spare parts. Spare parts ensure that replacement parts are available in case of failure.

  • Fault-tolerant components. Fault-tolerant components improve the internal redundancy of the system.

Storage devices, network components, cooling fans, and power supplies all can be configured for fault tolerance. For storage devices, you should be sure to use multiple disk controllers, hot-swappable drives, and redundant drive arrays. For network components, you should look well beyond the network adapter and consider whether fault tolerance is needed for routers, switches, firewalls, load balancers, and other network equipment.

A standard process for deploying hardware must be defined and distributed to all support personnel. The standard process must include the following:

  • Hardware compatibility and integration testing

  • Hardware support training for personnel

  • Predeployment planning

  • Step-by-step hardware deployment checklists

  • Postdeployment monitoring and maintenance

The following checklist summarizes the recommendations for designing and planning hardware for high availability:

  • Choose hardware that is listed on the Hardware Compatibility List (HCL).

  • Create and enforce hardware standards.

  • Use redundant hardware whenever possible.

  • Use fault-tolerant hardware whenever possible.

  • Provide a secure physical environment for hardware.

  • Define a standard process for deploying hardware.

If possible, add these recommendations to the preceding checklist:

  • Use fully redundant internal networks, from servers to border routers.

  • Use direct peering to major tier-1 telecommunications carriers.

  • Use redundant external connections for data and telephony.

  • Use a direct connection with high-speed lines.

Planning for support structures and facilities

The physical structures and facilities supporting your server room are critically important. Without adequate support structures and facilities, even well-designed systems will suffer avoidable failures and downtime. The primary considerations for support structures and facilities have to do with the physical environment of the servers. These considerations also extend to the physical security of the server environment.

Just as hardware and software have availability requirements, so should support structures and facilities. Factors that affect the physical environment are as follows:

  • Temperature and humidity

  • Dust and other contaminants

  • Physical wiring

  • Power supplies

  • Natural disasters

  • Physical security

Temperature and humidity should be carefully controlled at all times. Processors, memory, hard drives, and other pieces of physical equipment operate most efficiently when they are kept cool; between 65 and 70 degrees Fahrenheit (about 18 to 21 degrees Celsius) is the ideal temperature in most situations. Equipment that overheats can malfunction or cease to operate altogether. Servers should have multiple, redundant internal fans to ensure that these and other internal hardware devices are kept cool.

Important

You should pay particular attention to fast-running processors and hard drives. Fast-running processors and hard drives can overheat and might need additional cooling fans, even if the surrounding environment is cool.

Humidity should be kept low to prevent condensation, but the environment shouldn’t be dry. A dry climate can contribute to static electricity problems. Antistatic devices and static guards should be used in most environments.

Dust and other contaminants can cause hardware components to overheat or short out. Servers should be protected from these contaminants whenever possible. You should ensure that an air-filtration system is in place in the server room or hosting facility. The regular preventive maintenance cycle on the servers should include checking servers and their cabinets for dust and other contaminants. If dust is found, the servers and cabinets should be carefully cleaned.

Few things affect the physical environment more than wiring and cabling. All electrical wires and network cables should be tested and certified by qualified technicians. Electrical wiring should be configured to ensure that servers and other equipment have adequate power available for peak usage times. Ideally, multiple dedicated circuits should be used to provide power.

Improperly installed network cables are the cause of most communications problems. Network cables should be tested to ensure that their operation meets manufacturer specifications. Redundant cables should be installed to ensure the availability of the network. All wiring and cabling should be labeled and well maintained. Whenever possible, use cable management systems and tie wraps to prevent physical damage to wiring.

Ensuring that servers and their components have power is also important. Servers should have hot-swappable, redundant power supplies. Being hot swappable ensures that the power supply can be replaced without having to turn off the server. Redundancy ensures that if one power supply malfunctions, the other will still deliver power to the server. You should be aware that having multiple power supplies doesn’t mean that a server or hardware component has redundancy. Some hardware components require multiple power supplies to operate. In this case, an additional (third or fourth) power supply is needed to provide redundancy.
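The sizing rule described above can be sketched as a simple N+M calculation. The function name and parameters here are illustrative only, not part of any Windows Server tooling:

```python
def power_supplies_needed(required: int, spares: int = 1) -> int:
    """N+M power-supply sizing: a component that needs `required`
    supplies just to operate gets `spares` additional units so that
    a single power-supply failure does not take it down."""
    if required < 1:
        raise ValueError("a component needs at least one power supply")
    return required + spares

# A server that runs on one supply needs a second for redundancy;
# a component that requires two supplies to operate needs a third.
```

With this rule, "multiple power supplies" alone never proves redundancy; only supplies beyond the number required for operation count as spares.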

The redundant power supplies should be plugged into separate power strips, and these power strips should be plugged into separate local uninterruptible power supply (UPS) units if other backup power sources aren’t available. Some facilities have enterprise UPS units that provide power for an entire room or facility. If this is the case, redundant UPS systems should be installed. To protect against long-term outages, gas-powered or diesel-powered generators should be installed. Most hosting and collocation facilities have generators. Nevertheless, having a generator isn’t enough; the generator must be rated to support the peak power needs of all installed equipment. If the generator cannot support the installed equipment, brownouts (temporary drops in power) or outages will occur.

Caution

A fire-suppression system should be installed to protect against fire. Dual gas-based systems are preferred because these systems do not harm hardware when they go off. Water-based sprinkler systems, however, can destroy hardware.

In addition, access controls should be used to restrict physical access to the server room or facility. Use locks, key cards, access codes, or biometric scanners to ensure that only designated individuals can gain entry to the secure area. If possible, use surveillance cameras and maintain recorded tapes for at least a week. When the servers are deployed in a hosting or collocation facility, ensure that locked cages are used and that fencing extends from the floor to the ceiling.

The following checklist summarizes the recommendations for designing and planning structures and facilities:

  • Maintain the temperature at 65 to 70 degrees Fahrenheit.

  • Maintain low humidity (but not dry).

  • Install redundant internal cooling fans.

  • Use an air-filtration system.

  • Check for dust and other contaminants periodically.

  • Install hot-swappable, redundant power supplies.

  • Test and certify wiring and cabling.

  • Use wire management to protect cables from damage.

  • Label hardware and cables.

  • Install backup power sources such as UPS and generators.

  • Install seismic protection and bracing.

  • Install dual gas-based fire-suppression systems.

  • Restrict physical access by using locks, key cards, access codes, and so forth.

  • Use surveillance cameras and maintain recorded tapes (if possible).

  • Use locked cages, cabinets, and racks at offsite facilities.

  • Use floor-to-ceiling fencing with cages at offsite facilities.

Planning for day-to-day operations

Day-to-day operations and support procedures must be in place before you deploy mission-critical systems. The most critical procedures for day-to-day operations involve the following activities:

  • Monitoring and analysis

  • Resources, training, and documentation

  • Change control

  • Problem escalation procedures

  • Backup and recovery procedures

  • Postmortem after recovery

  • Auditing and intrusion detection

Monitoring is critical to the success of business system deployments. You must have the necessary equipment to monitor the status of the business systems. Monitoring enables you to be proactive in system support rather than reactive. Monitoring should extend to the hardware, software, and network components but shouldn’t interfere with normal systems operations—that is, the monitoring tools chosen should require only limited system and network resources to operate.

Note

Keep in mind that collecting too much data is just as bad as not collecting any data. The monitoring tools should gather only the data required for meaningful analysis.

Without careful analysis, the data collected from monitoring is useless. Procedures should be put in place to ensure that personnel know how to analyze the data they collect. The network infrastructure is a support area that is often overlooked. Be sure you allocate the appropriate resources for network monitoring.

Resources, training, and documentation are essential to ensuring that you can manage and maintain mission-critical systems. Many organizations cripple the operations team by staffing minimally. Minimally staffed teams will have slow response times and limited effectiveness. The organization must take the following steps:

  • Staff adequately for success.

  • Conduct training before deploying new technologies.

  • Keep the training up to date with what’s deployed.

  • Document essential operations procedures.

Every change to hardware, software, and the network must be planned and executed deliberately. To do this, you must have established change-control procedures and well-documented execution plans. Change-control procedures should be designed to ensure that everyone knows what changes have been made. Execution plans should be designed to ensure that everyone knows the exact steps that were or should be performed to make a change.

Change logs are a key part of change control. Each piece of physical hardware deployed in the operational environment should have a change log, which should be stored in a text document or spreadsheet or, ideally, a ticketing system that is readily accessible to support personnel. The change log should show the following information:

  • Who changed the hardware

  • What change was made

  • When the change was made

  • Why the change was made
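The four fields above map naturally onto a small record type. The names below are hypothetical; a real deployment would use whatever schema its ticketing system or spreadsheet provides:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ChangeLogEntry:
    """One record in a per-device hardware change log."""
    who: str   # who changed the hardware
    what: str  # what change was made
    when: str  # when the change was made (ISO 8601 timestamp)
    why: str   # why the change was made

def record_change(log: list, who: str, what: str, why: str) -> ChangeLogEntry:
    """Append a timestamped entry to the in-memory change log."""
    entry = ChangeLogEntry(
        who=who,
        what=what,
        when=datetime.now(timezone.utc).isoformat(),
        why=why,
    )
    log.append(entry)
    return entry

# Example: log a memory upgrade on a database server.
log = []
record_change(log, "j.smith", "Added 32 GB RAM to SRV-DB01",
              "Capacity planning for expected growth")
```

Capturing the timestamp automatically, rather than typing it in, keeps the "when" field consistent across entries.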

You should have well-defined backup and recovery plans. The backup plan should specifically state the following information:

  • When full, incremental, differential, and log backups are used

  • How often and at what time backups are performed

  • Whether the backups must be conducted online or offline

  • The amount of data being backed up and how critical the data is

  • The tools used to perform the backups

  • The maximum time allowed for backup and restore

  • How backup media is labeled, recorded, and rotated

Backups should be monitored daily to ensure that they are running correctly and that the media are good. Any problems with backups should be corrected immediately. Multiple media sets should be used for backups, and these media sets should be rotated on a specific schedule. With a four-set rotation, there is one set each for daily, weekly, monthly, and quarterly backups. By rotating one media set offsite, support staff can help ensure that the organization is protected in case of a disaster.
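The four-set rotation described above can be sketched as a mapping from a backup date to a media set. The policy encoded here (quarterly on the first day of a quarter, monthly on other first-of-month dates, weekly on Fridays, daily otherwise) is one illustrative choice, not a prescribed schedule:

```python
from datetime import date

def media_set_for(day: date) -> str:
    """Pick which of the four media sets a backup on `day` belongs to,
    under one hypothetical rotation policy."""
    if day.day == 1 and day.month in (1, 4, 7, 10):
        return "quarterly"   # first day of a quarter
    if day.day == 1:
        return "monthly"     # first day of any other month
    if day.weekday() == 4:   # Friday (Monday is 0)
        return "weekly"
    return "daily"
```

Whatever schedule is chosen, the set rotated offsite should come from this mapping on a fixed cadence so offsite coverage never lapses.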

The recovery plan should provide detailed, step-by-step procedures for recovering the system under various conditions, such as procedures for recovering from hard disk drive failure or troubleshooting problems with connectivity to the back-end database. The recovery plan should also include system design and architecture documentation that details the configuration of physical hardware, application-logic components, and back-end data. Along with this information, support staff should provide a media set containing all software, drivers, and operating system files needed to recover the system.

Note

One thing administrators often forget about is spare parts. Spare parts for key components—such as processors, drives, and memory—should be maintained as part of the recovery plan if budgeting allows.

You should practice restoring critical business systems by using the recovery plan. Practice shouldn’t be conducted on the production servers. Instead, the team should practice on test equipment with a configuration similar to the real production servers. Practicing once a quarter or semiannually is highly recommended.

You should have well-defined problem-escalation procedures that document how to handle problems and emergency changes that might be needed. Some organizations use a three-tiered help desk structure for handling problems:

  • Level 1 support staff forms the front line for handling basic problems. They typically have hands-on access to the hardware, software, and network components they manage. Their main job is to clarify and prioritize a problem. If the problem has occurred before and there is a documented resolution procedure, they can resolve the problem without escalation. If the problem is new or not recognized, they must understand how, when, and to whom to escalate it.

  • Level 2 support staff includes more specialized personnel who can diagnose a particular type of problem and work with others to resolve a problem, such as system administrators and network engineers. They usually have remote access to the hardware, software, and network components they manage. This enables them to troubleshoot problems remotely and send out technicians after they’ve pinpointed the problem.

  • Level 3 support staff includes highly technical personnel who are subject-matter experts, team leaders, or team supervisors. The Level 3 team can include support personnel from vendors and representatives from the user community. Together, they form the emergency-response or crisis-resolution team that is responsible for resolving crises and planning emergency changes.
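The three-tier flow above can be sketched as a small routing function. The Boolean parameters are hypothetical simplifications of the judgment calls Level 1 staff actually make:

```python
def escalation_level(documented_resolution: bool, crisis: bool) -> int:
    """Route a problem report through a three-tier support structure:
    Level 1 resolves issues that have a documented resolution
    procedure, Level 2 specialists diagnose new or unrecognized
    problems, and Level 3 handles crises and emergency changes."""
    if crisis:
        return 3
    if documented_resolution:
        return 1
    return 2
```

In practice the decision also depends on priority and scope, but the ordering is the same: known problems stay at the front line, and only genuine emergencies reach the crisis-resolution team.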

All crises and emergencies should be responded to decisively and resolved methodically. A single person on the emergency response team should be responsible for coordinating all changes and executing the recovery plan. This same person should be responsible for writing an after-action report that details the emergency response and resolution process used. The after-action report should analyze how the emergency was resolved and the root cause of the problem.

In addition, you should establish procedures for auditing system usage and detecting intrusion. In Windows Server 2012 R2, auditing policies are used to track the successful or failed execution of the following activities:

  • Account logon events. Tracks events related to user logon and logoff

  • Account management. Tracks tasks involved with handling user accounts such as creating or deleting accounts and resetting passwords

  • Directory service access. Tracks access to the Active Directory Domain Services (AD DS)

  • Object access. Tracks system resource usage for files, directories, and objects

  • Policy change. Tracks changes to user rights, auditing, and trust relationships

  • Privilege use. Tracks the use of user rights and privileges

  • Process tracking. Tracks system processes and resource usage

  • System events. Tracks system startup, shutdown, restart, and actions that affect system security or the security log

You should have an incident-response plan that includes priority escalation of suspected intrusion to senior team members and provides step-by-step details on how to handle the intrusion. The incident-response team should gather information from all network systems that might be affected. The information should include event logs, application logs, database logs, and any other pertinent files and data. The incident-response team should take immediate action to lock out accounts, change passwords, and physically disconnect the system if necessary. All team members participating in the response should write a postmortem report that details the following information:

  • What date and time they were notified and what immediate actions they took

  • Whom they notified and what the response was from the notified individual

  • Their assessment of the issue and the actions necessary to resolve and prevent similar incidents

The team leader should write an executive summary of the incident and forward this to senior management.

The following checklist summarizes the recommendations for operational support of high-availability systems:

  • Monitor hardware, software, and network components 24/7.

  • Ensure that monitoring doesn’t interfere with normal systems operations.

  • Gather only the data required for meaningful analysis.

  • Establish procedures that let personnel know what to look for in the data.

  • Use outside-in monitoring any time systems are externally accessible.

  • Provide adequate resources, training, and documentation.

  • Establish change-control procedures that include change logs.

  • Establish execution plans that detail the change implementation.

  • Create a solid backup plan that includes onsite and offsite tape rotation.

  • Monitor backups and test backup media.

  • Create a recovery plan for all critical systems.

  • Test the recovery plan on a routine basis.

  • Document how to handle problems and make emergency changes.

  • Use a three-tier support structure to coordinate problem escalation.

  • Form an emergency-response or crisis-resolution team.

  • Write after-action reports that detail the process used.

  • Establish procedures for auditing system usage and detecting intrusion.

  • Create an intrusion response plan with priority escalation.

  • Take immediate action to handle suspected or actual intrusion.

  • Write postmortem reports detailing team reactions to the intrusion.

Planning for deploying highly available servers

You should always create a plan before deploying a business system. The plan should show everything that must be done before the system is transitioned into the production environment. After a system is in the production environment, the system is deemed operational and should be handled as outlined in “Planning for day-to-day operations” earlier in this chapter.

The deployment plan should include the following items:

  • Checklists

  • Contact lists

  • Test plans

  • Deployment schedules

Checklists are a key part of the deployment plan. The purpose of a checklist is to ensure that the entire deployment team understands the steps the members need to perform. Checklists should list the tasks that must be performed and designate individuals to handle the tasks during each phase of the deployment—from planning to testing to installation. Prior to executing a checklist, the deployment team should meet to ensure that all items are covered and that the necessary interactions among team members are clearly understood. After deployment, the preliminary checklists should become part of the system documentation, and new checklists should be created any time the system is updated.

The deployment plan should include a contact list that provides the name, role, telephone number, and email address of all team members, vendors, and solution-provider representatives. Alternative numbers for cell phones and pagers should also be provided.

The deployment plan should include a test plan. An ideal test plan has several phases. In Phase I, the deployment team builds the business system and support structures in a test lab. Building the system means accomplishing the following tasks:

  • Creating a test network on which to run the system

  • Putting together the hardware and storage components

  • Installing the operating system and application software

  • Adjusting basic system settings to suit the test environment

  • Configuring clustering, network load balancing, or another high-availability solution if necessary

The deployment team can conduct any necessary testing and troubleshooting in the isolated lab environment. The entire system should undergo burn-in testing to guard against faulty components. If a component is flawed, it usually fails in the first few days of operation. Testing doesn’t stop with burn-in. Web and application servers should be stress tested. Database servers should be load tested. The results of the stress and load tests should be analyzed to ensure that the system meets the performance requirements and expectations of the customer. Adjustments to the configuration should be made to improve performance and optimize the configuration for the expected load.

In Phase II, the deployment team tests the business system and support equipment in the deployment location. Team members conduct similar tests as before, but in the real-world environment. Again, the results of these tests should be analyzed to ensure that the system meets the performance requirements and expectations of the customer. Afterward, adjustments should be made to improve performance and optimize as necessary. The team can then deploy the business system.

In Phase III, after deployment of the server or servers, the team should perform limited, nonintrusive testing to ensure that the system is operating normally. After this testing is completed, the team can use the operational plans for monitoring and maintenance.

The following checklist summarizes the recommendations for predeployment planning of mission-critical systems:

  • Create a plan that covers the entire testing-to-operations cycle.

  • Use checklists to ensure that the deployment team understands the procedures.

  • Provide a contact list for the team, vendors, and solution providers.

  • Conduct burn-in testing in the lab.

  • Conduct stress and load testing in the lab.

  • Use the test data to optimize and adjust the configuration.

  • Provide follow-on testing in the deployment location.

  • Follow a specific deployment schedule.

  • Use operational plans after final tests are completed.
