Chapter 12. Storage management essentials

Essential storage technologies

Installing and configuring file services

Data is stored throughout the enterprise on a variety of systems and storage devices, most commonly hard disk drives but also storage-management devices and removable media devices. Managing and maintaining these systems and storage devices is the responsibility of administrators. If a storage device fails, runs out of space, or encounters other problems, the consequences can be serious: servers could crash, applications could stop working, and users could lose data, all of which affects user productivity and the organization's bottom line. You can help prevent such problems and losses by implementing sound storage-management procedures that enable you to evaluate your current and future storage needs and meet your performance, capacity, and availability requirements. You then must configure storage appropriately for the requirements you've defined.

Essential storage technologies

One of the few constants in Microsoft Windows operating system administration is that data storage needs are ever increasing. It seems that only a few years ago a 2-terabyte (TB) hard disk was huge and something primarily reserved for Windows servers rather than Windows workstations. Now Windows workstations ship with large hard disks as standard equipment, and some even ship with striped or spanned drives that provide multi-terabyte volumes. All of that data must be backed up and stored somewhere other than on the workstations to protect it, which means that back-end storage solutions have had to scale dramatically as well. Server solutions that were once used for enterprise-wide implementations are now being used increasingly at the departmental level, and the underlying architecture for the related storage solutions has had to change dramatically to keep up.

Using internal and external storage devices

To help meet the increasing demand for data storage and changing requirements, organizations are deploying servers with a mix of internal and external storage. In internal-storage configurations, drives are connected inside the server chassis to a local disk controller and are said to be directly attached. You’ll sometimes see an internal storage device referred to as direct-attached storage (DAS).

In external-storage configurations, servers connect to external, separately managed collections of storage devices that are either network-attached or part of a storage area network. Although the terms network-attached storage (NAS) and storage-area network (SAN) are sometimes used as if they are one and the same, the technologies differ in how servers communicate with the external drives.

NAS devices are connected through a regular Transmission Control Protocol/Internet Protocol (TCP/IP) network. All server-storage communications go across the organization's local area network (LAN), as shown in Figure 12-1, and typically use file-based protocols for communications, which can include Server Message Block (SMB), distributed file system (DFS), and Network File System (NFS). This means the available bandwidth on the network can be shared by clients, servers, and NAS devices. For best performance, the network should be running at 1 gigabit per second (Gbps) or higher. Networks operating at slower speeds can experience a serious decrease in performance as clients, servers, and storage devices try to communicate using the limited bandwidth.


Figure 12-1. In a NAS, server-storage communications are on the LAN.

A SAN typically is physically separate from the LAN and is independently managed. As shown in Figure 12-2, this isolates the server-to-storage communications so that traffic doesn’t affect communications between clients and servers. Several SAN technologies are implemented, including Fibre Channel Protocol (FCP), a more traditional SAN technology that delivers high reliability and performance, and Internet SCSI (iSCSI), which delivers good reliability and performance at a lower cost than Fibre Channel. As the name implies, iSCSI uses TCP/IP networking technologies on the SAN so that servers can communicate with storage devices by using IP. The SAN is still isolated from the organization’s LAN.


Figure 12-2. In a SAN, server-storage communications don’t affect communications between clients and servers.

You should be aware that iSCSI uses traditional IP facilities to transfer data over LANs, wide area networks (WANs), or the Internet. Here, iSCSI clients (initiators) send Small Computer System Interface (SCSI) commands to targeted iSCSI storage devices (targets) on remote servers. iSCSI consolidates storage and allows hosts—which can include web, application, and database servers—to access the storage as if it were locally attached. Initiators can locate storage resources by using Internet Storage Name Service (iSNS). iSNS isn’t required for communications, but it does provide management services similar to those for Fibre Channel networks. iSNS emulates the fabric services of Fibre Channel and can manage both Fibre Channel and iSCSI devices.
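
To make this concrete, the following Windows PowerShell sketch shows an initiator connecting to a target by using the iSCSI module covered later in this chapter. The portal address and target IQN shown here are placeholders for values from your own environment:

# Start the iSCSI initiator service and have it start automatically
Start-Service MSiSCSI
Set-Service MSiSCSI -StartupType Automatic
# Register the target portal, list the targets it exposes, and connect persistently
New-IscsiTargetPortal -TargetPortalAddress 192.168.10.50
Get-IscsiTarget | Format-Table NodeAddress, IsConnected
Connect-IscsiTarget -NodeAddress "iqn.1991-05.com.microsoft:storage01-target1-target" -IsPersistent $true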

Although Fibre Channel requires special cabling, iSCSI uses standard Ethernet cabling and technically can operate over the same network as standard IP traffic. However, if iSCSI isn’t operated on a dedicated network or subnet, performance can be severely degraded.

With TCP/IP, TCP is the transport protocol for IP networks. With Fibre Channel, FCP is a transport protocol used to transport SCSI commands over the Fibre Channel network. Fibre Channel networks can use a variety of topologies, including the following:

  • Point-to-point (FC-PTP), where two devices are connected directly

  • Arbitrated loop (FC-AL), where all devices are in a ring, similar to token ring networking

  • Switched fabric (FC-SW), where all devices or device rings are connected to switches, similar to Ethernet

The standard model for Fibre Channel has five layers:

  • FC0, the physical layer, which includes cables and connectors

  • FC1, the data-link layer

  • FC2, the network layer

  • FC3, the common services layer

  • FC4, the protocol-mapping layer

Windows Server 2012 R2 includes support for Fibre Channel over Ethernet (FCoE), a technology that allows IP network and SAN data traffic to be consolidated on a single network. FCoE encapsulates Fibre Channel frames over Ethernet and supports 10 Gbps and higher networks. With FCoE, the FC0 and FC1 layers of the Fibre Channel model are replaced with Ethernet, and FCoE operates in the FC2, or network, layer. This is different from iSCSI, which runs on top of TCP and IP. In addition, although iSCSI is routable across IP networks, FCoE isn’t routable in the IP layer and won’t work across routed IP networks.

You should also note that although Fibre Channel has priority-based flow controls, these controls aren’t part of standard Ethernet. As a result, both FCoE and iSCSI needed enhancements to support priority-based flow controls and prevent the frame loss that might occur otherwise. These enhancements, provided in the Data Center Bridging suite of Institute of Electrical and Electronics Engineers (IEEE) standards, include the encapsulation of native frames, extensions to Ethernet to prevent frame loss, and mapping between ports/IDs and Ethernet media access control (MAC) addresses.

Several competing network protocols are available to provide fabric functionality to Fibre Channel devices over an IP network and to make the technology work over long distances. One is called Internet Fibre Channel Protocol (iFCP). iFCP uses gateways and routing to enable connectivity and TCP for error detection and correction and for congestion control. A similar technology, called Fibre Channel over IP (FCIP), also is available. FCIP uses storage tunneling, by which Fibre Channel frames are encapsulated and then forwarded over an IP network, using TCP.

Storage-management features and tools

Windows Server 2012 R2 includes many features for working with SANs and handling storage management in general. Volume Shadow Copy Service (VSS) enables administrators to create point-in-time copies of volumes and individual files called snapshots. This makes it possible to back up these items while files are open and applications are running and to restore them to a specific point in time. You can use VSS also to create point-in-time copies of documents on shared folders. These copies are called shadow copies.

Note

Users can recover their own files when VSS is enabled. After you configure shadow copy, point-in-time backups of documents contained in the designated shared folders are created automatically, and users can quickly recover files that have been deleted or unintentionally altered as long as the Shadow Copy Client has been installed on their computer.
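
For example, assuming the shared folders are on drive D:, you could allocate shadow copy storage and create a snapshot manually by using the Vssadmin tool described later in this chapter (a quick sketch; in practice, you typically configure a shadow copy schedule instead):

vssadmin add shadowstorage /for=D: /on=D: /maxsize=10%
vssadmin create shadow /for=D: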

The basic VSS functionality is built into the file and storage services and accessed through the File Server VSS provider. You can extend the basic functions in several ways. One of these ways is to add the File Server VSS Agent Service. You use this role service to create consistent snapshots of server application data such as virtual machine files from Hyper-V. You install the agent service on a file server when you want to back up applications that are storing data files on the file server. Here, you are backing up application data stored on file shares, which is different from user data stored on file shares (which is managed using the standard File Server VSS provider).

Windows Server 2012 R2 also includes storage providers. Storage providers make it possible for storage devices from multiple vendors to interoperate. To do this, Microsoft provides Storage Management application programming interfaces (APIs) that management tools and storage hardware can use, allowing for a unified interface for managing storage devices from multiple vendors and making it easier for administrators to manage a mixed-storage environment. Standard storage providers are built into the file and storage services.

Windows Server 2012 R2 also supports the Storage Management Initiative (SMI-S) standard and storage providers that comply with this standard. Add this support by adding the Windows Standards-Based Storage Management feature. This feature enables the discovery, management, and monitoring of storage devices, using management tools that support the SMI-S standard. It does this by installing related Windows Management Instrumentation (WMI) classes and cmdlets.

When your file servers are using iSCSI, Fibre Channel, or both storage device types, you might also want to install Multipath I/O, iSNS Server service, and Data Center Bridging—all of which are installable features.
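
You can add these features with the Add Roles And Features Wizard or from an elevated Windows PowerShell prompt. The following is a sketch; confirm the feature names (typically Multipath-IO, ISNS, and Data-Center-Bridging) with Get-WindowsFeature before installing:

Get-WindowsFeature Multipath-IO, ISNS, Data-Center-Bridging
Install-WindowsFeature Multipath-IO, ISNS, Data-Center-Bridging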

Multipath I/O supports SAN connectivity by establishing multiple sessions or connections to storage devices. Using Multipath I/O, you can configure as many as 32 physical paths to external storage devices that can be used simultaneously and load balanced if necessary. The purpose of having multiple paths is to have redundancy and possibly increased throughput. If you also have multiple host bus adapters, you improve the chances of recovery from a path failure. However, if a path failure occurs, there might be a short period of time when the drives on the SAN aren’t accessible. Microsoft Multipath I/O (MPIO) supports iSCSI, Fibre Channel, and Serial Attached SCSI (SAS).

iSNS Server service helps iSNS clients discover iSCSI storage devices on an Ethernet network and automates the management and configuration of iSCSI and Fibre Channel storage devices (as long as Fibre Channel devices use iFCP gateways). Data Center Bridging helps manage bandwidth allocation for offloaded storage traffic on converged network adapters, which is useful with iSCSI and FCoE.

Other file and storage features you might want to install on file servers include the following:

  • Enhanced Storage. Supports additional functions made available by devices that support hardware encryption and enhanced storage. Enhanced storage devices support IEEE standard 1667 to provide enhanced security, which can include authentication at the hardware level of the storage device.

  • Windows Search Service. Allows for faster file searches for resources on the server from clients that are compatible with this service. Keep in mind, however, that this feature is designed primarily for desktop and small office implementations (and not for large enterprises).

  • Windows Server Backup. The standard backup utility included with Windows Server 2012 R2.

Server Manager is your primary tool for managing storage. Windows Server 2012 R2 also has several command-line tools for managing local storage and storage-replication services. These tools include the following:

  • DiskPart. Used to manage basic and dynamic disks and the partitions and volumes on those disks. It is the command-line counterpart to the Disk Management tool and includes features not found in the graphical user interface (GUI) tool, such as the capability to extend partitions on basic disks.

    Note

    DiskPart cannot be used to manage Storage Spaces. Windows 8.1 and Windows Server 2012 R2 might be the last versions of Windows to support Disk Management, DiskPart, and DiskRaid. The Virtual Disk Service (VDS) COM interface is being superseded by the Storage Management API. You can continue to use Disk Management and DiskPart to manage basic and dynamic disks.

  • Dfsdiag. Used to perform troubleshooting and diagnostics for DFS.

  • Dfsradmin. Used to manage and monitor DFS replication throughout the enterprise. You also use this tool for troubleshooting and diagnosing problems. This tool replaces Health_Chk and the other tools it worked with.

  • Dfsutil. Used to configure DFS, back up and restore DFS directory trees (namespaces), copy directory trees, and troubleshoot DFS.

  • Fsutil. Used to get detailed drive information and perform advanced file system maintenance. You can manage sparse files and reparse points, disk quotas, and other advanced features of NTFS.

  • Mountvol. Used to manage volume automounting. By using volume mount points, administrators can mount volumes to empty NTFS folders, giving the volumes a drive path rather than a drive letter. This means it is easier to mount and unmount volumes, particularly with SANs.

  • Vssadmin. Used to view and manage the Volume Shadow Copy Service (VSS) and its configuration.

Many Windows PowerShell cmdlets are available for managing storage as well. These cmdlets are module-specific and correspond to the storage component you want to manage. Available modules include the following:

  • BitsTransfer. Used to manage the Background Intelligent Transfer Service (BITS).

  • BranchCache. Used to configure and check the status of Windows BranchCache.

  • DFSN. Used to manage DFS namespaces.

  • FileServerResourceManager. Used to manage File Server Resource Manager.

  • iSCSI. Used to manage iSCSI connections, sessions, targets, and ports.

  • IscsiTarget. Used to mount and manage iSCSI virtual disks.

  • SmbShare. Used to configure and check the status of standard file sharing.

  • Storage. Used to manage disks, partitions, and volumes in addition to storage pools and Storage Spaces. It cannot be used to manage dynamic disks.

You learn more about the technologies behind these modules later in this chapter. The easiest way to learn more about these Windows PowerShell modules is to examine how their associated cmdlets work. You list the cmdlets associated with a module by using:

get-command -module ModuleName

Here, ModuleName is the name of the module you want to examine, such as the following:

get-command -module iscsi

After you list the cmdlets associated with an imported module, you can get more information about a particular cmdlet by using:

get-help CmdletName -detailed

Here, CmdletName is the name of the cmdlet to examine in detail, such as the following:

get-help connect-iscsitarget -detailed

Storage-management role services

You use File And Storage Services to configure your file servers. Several file and storage services are installed by default with any installation of Windows Server 2012 R2. These include File Server, which you use to manage file shares that users can access over the network, and Storage Services, which you use to manage various types of storage, including storage pools and Storage Spaces. Storage pools group disks so that you can create virtual disks from the available capacity. Each virtual disk you create is a storage space. You learn how to work with storage pools and Storage Spaces in Chapter 17.

Windows Server 2012 R2 also supports thin provisioning of Storage Spaces. With thin provisioning, you can create large virtual disks without having the actual space available. This enables you to provision storage to meet future needs and grow storage as needed. You also can reclaim storage that is no longer needed by trimming storage. To see how thin provisioning works, consider the following scenarios:

  • Your file server is connected to a storage array with 2 TBs of actual storage but with the capability to grow to 10 TBs as needed (by installing additional hard disks). When you set up storage, you provision it as if additional storage were already available. One way to do this is to create a storage pool that has a total size of 10 TBs and then create five thin disks with 2 TBs of storage each.

  • Your eight file servers are connected to a SAN with 10 TBs of actual storage but with the capability to grow to 80 TBs as needed (by installing additional hard disks). When you set up storage, you provision it as if additional storage were already available. One way to do this is to create a storage pool on each file server that has a total size of 10 TBs. Next, within each storage pool, you create five thin disks with 2 TBs of storage each.

With thin-disk provisioning, volumes use space from the storage pool as needed, up to the volume size. Here, the actual storage utilization for a volume is based on the total size of the data stored on the volume. If a volume doesn’t grow, the storage space is never allocated and isn’t wasted.

Contrast this with fixed-disk provisioning, by which a volume has a fixed size and uses space from the storage pool equal to its volume size. Here, the storage utilization for a volume is fixed and based on the total size of the volume itself. Because the storage is pre-allocated with a fixed size, any unused space isn’t available for other volumes.
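
As a brief illustration of thin provisioning, the following sketch creates a 2-TB thin disk in an existing storage pool and then compares the provisioned size with the space actually allocated. The pool name Pool1 is a placeholder:

New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "ThinDisk1" -Size 2TB -ProvisioningType Thin -ResiliencySettingName Simple
Get-VirtualDisk -FriendlyName "ThinDisk1" | Format-List Size, AllocatedSize, ProvisioningType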

You can enhance file storage in many ways by using the additional role services that are available for File And Storage Services. One of the first role services you might consider using is BranchCache For Network Files. You add the BranchCache For Network Files role service to enable enhanced support for Windows BranchCache on your file servers and to optimize data transfer over the WAN for caching.

Windows BranchCache is a file-caching feature that works in conjunction with BITS. By enabling branch caching in Group Policy, you enable computers to retrieve documents and other types of files from a local cache rather than retrieving files from servers over the network. This improves response times and reduces transfer times.

Branch caching can be used in either a distributed cache mode or a hosted cache mode. With the distributed cache mode, desktop computers running compatible versions of Windows host and send distributed file caches, and caching servers running at remote offices are not needed. With the hosted cache mode, compatible file servers at remote offices host local file caches and send them to clients. Generally, whether distributed or hosted, the caches at one office location are separate from caches at other office locations. That said, the Active Directory configuration and the way Group Policy is applied ultimately determine whether computers are considered part of one office location or another.

Branch caching is designed as a WAN solution. It optimizes bandwidth usage for files transferred with either SMB or Hypertext Transfer Protocol (HTTP). Your content servers can be located anywhere on your network and in public or private cloud data centers. You enable branch caching on web servers and BITS-based application servers by adding the BranchCache feature. If you are deploying hosted cache servers, you add the BranchCache feature to these servers as well. You don’t install this feature on your file servers, however. Instead, you add the BranchCache For Network Files role service.

The Data Deduplication service can be installed with or without the BranchCache For Network Files role service. Data Deduplication uses subfile, variable-size chunking and compression to achieve higher storage efficiency. The service does this by segmenting files into 32 KB to 128 KB chunks, identifying duplicate chunks, and replacing the duplicates with references to a single copy. Because optimized files are stored as reparse points, files on the volume are no longer stored as data streams. Instead, they are replaced with stubs that point to data blocks within a common chunk store.
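
As a quick sketch, the following commands install the role service, enable deduplication on a data volume (E: is a placeholder), start an optimization job, and then check the savings. The FS-Data-Deduplication feature name can be confirmed with Get-WindowsFeature:

Install-WindowsFeature FS-Data-Deduplication
Enable-DedupVolume -Volume E: -UsageType Default
Start-DedupJob -Volume E: -Type Optimization
Get-DedupStatus -Volume E: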

Previously, I mentioned the File Server VSS Agent Service, which you install on file servers when you want to ensure that you can make consistent backups of server application data using VSS-aware backup applications. When working with iSCSI, you also must install the iSCSI target VSS hardware provider on the initiator server you use to perform backups of iSCSI virtual disks. This ensures that the snapshots are application-consistent and can be restored at the logical unit number (LUN) level. If you don’t use the iSCSI target VSS hardware provider on the initiator, server backups might not be consistent, and you might not be able to recover your iSCSI virtual disks completely. On management computers running storage-management applications, you must install the iSCSI target Virtual Disk Service (VDS) hardware provider. The iSCSI target VSS hardware provider and the iSCSI target VDS hardware provider are part of the iSCSI Target Storage Provider role service.

Another role service you might want to use with iSCSI is the iSCSI Target Server service. This role service turns any computer running Windows Server into a network-accessible block storage device. You can use this continuously available block storage to support network/diskless boot, shared storage on non-Windows iSCSI initiators, and development environments where you need to test applications prior to deploying them to SAN storage. Because the service uses standard Ethernet for its transport, no additional hardware is needed.
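
As a sketch of how this works, the following commands install the role service, create an iSCSI virtual disk, define a target for a single initiator, and map the disk to that target. The feature name shown is the typical one, and the path, target name, and initiator IQN are placeholders:

Install-WindowsFeature FS-iSCSITarget-Server
New-IscsiVirtualDisk -Path C:\iSCSIVirtualDisks\AppData.vhdx -SizeBytes 100GB
New-IscsiServerTarget -TargetName "AppTarget" -InitiatorIds "IQN:iqn.1991-05.com.microsoft:appserver01"
Add-IscsiVirtualDiskTargetMapping -TargetName "AppTarget" -Path C:\iSCSIVirtualDisks\AppData.vhdx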

Although SMB is the default file-sharing protocol, other file-sharing solutions are available, including Network File System (NFS) and DFS. To enable NFS on your file servers, you add the Server For NFS service. This service provides a file-sharing solution for enterprises with mixed Windows and UNIX environments. When you install Server For NFS, users can transfer files between Windows Server and UNIX operating systems by using the NFS protocol. DFS, however, isn’t an interoperability solution. Instead, DFS is a robust, enterprise solution for file sharing that you can use to create a single directory tree that includes multiple file servers and their file shares.

The DFS tree can contain more than 5,000 shared folders in a domain environment (or 50,000 shared folders on a standalone server), located on different servers, enabling users to find files or folders easily that are distributed across the enterprise. DFS directory trees can also be published in Active Directory Domain Services so that they are easy to search.

DFS has two key components:

  • DFS Namespaces. You can use DFS Namespaces to group shared folders located on different servers into one or more logically structured namespaces. Each namespace appears as a single shared folder with a series of subfolders. However, the underlying structure of the namespace can come from shared folders on multiple servers at different sites.

  • DFS Replication. You can use DFS Replication to synchronize folders on multiple servers across local or wide area network connections by using a multimaster replication engine. The replication engine uses the Remote Differential Compression (RDC) protocol to synchronize only the portions of files that have changed since the last replication.

You can use DFS Replication with DFS Namespaces or by itself. When a domain is running in a Windows 2008 domain functional level or higher, domain controllers use DFS Replication to replicate the SYSVOL directory.
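
As a brief example, the following sketch uses the DFSN module mentioned earlier to create a domain-based namespace and add a folder to it. The domain, server, and share names are placeholders, and the underlying file shares must already exist:

New-DfsnRoot -Path "\\cpandl.com\Public" -TargetPath "\\FileServer12\Public" -Type DomainV2
New-DfsnFolder -Path "\\cpandl.com\Public\Projects" -TargetPath "\\FileServer23\Projects"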

File Server Resource Manager (FSRM) installs a suite of tools that administrators can use to manage data stored on servers better. Using FSRM, you can do the following:

  • Define file-screening policies. You use file-screening policies to block unauthorized, potentially malicious types of content. You can configure active screening, which does not allow users to save unauthorized files, or passive screening, which allows users to save unauthorized files but monitors or warns about usage (or you can configure both).

  • Configure Resource Manager disk quotas. By using Resource Manager disk quotas, you can manage disk space usage by folder and by volume. You can configure quotas with a specific limit as a hard limit (meaning a limit can’t be exceeded) or a soft limit (meaning a limit can be exceeded).

  • Generate storage reports. You can generate storage reports as part of disk-quota and file-screening management. Storage reports identify file usage by owner, type, and other parameters. They also help identify users and applications that violate screening policies.
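
For example, after FSRM is installed, the following sketch applies a 5-GB quota and an active screen for the built-in Audio And Video Files group to a shared folder. The path is a placeholder:

New-FsrmQuota -Path "D:\Shares\Sales" -Size 5GB -Description "Sales share quota"
New-FsrmFileScreen -Path "D:\Shares\Sales" -IncludeGroup "Audio and Video Files" -Active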

You learn more about FSRM in Chapter 20.

Booting from SANs and using SANs with clusters

Windows Server 2012 R2 supports booting from a SAN, having multiple clusters attached to the same SAN, and having a mix of clusters and standalone servers attached to the same SAN. To boot from a SAN, the external storage devices and the host bus adapters of each server must be configured appropriately to allow booting from the SAN.

When multiple servers must boot from the same external storage device, you must either configure the SAN in a switched environment or attach each host directly to one of the storage subsystem's Fibre Channel ports. A switched or direct-to-port environment keeps the servers separate from one another, which is essential for booting from a SAN.

Each server on the SAN must have exclusive access to the logical disk from which it is booting, and no other server on the SAN should be able to detect or access that logical disk. For multiple-cluster installations, the SAN must be configured so that a set of cluster disks is accessible by only one cluster and is completely hidden from the rest of the clusters. By default, Windows Server 2012 R2 will attach and mount every logical disk that it detects when the host bus adapter driver loads, and if multiple servers mount the same disk, the file system can be damaged.

To prevent file system damage, the SAN must be configured so that only one server can access a particular logical disk at a time. You can configure disks for exclusive access by using a type of LUN management such as LUN masking, LUN zoning, or, preferably, a combination of these techniques. You can use the File And Storage Services node in the Server Manager console to manage Fibre Channel and iSCSI SANs that support Storage Management APIs and have a configured storage provider.

Working with SMB 3.0

Server Message Block (SMB) is the standard technology used for file sharing. SMB 3.0 was released with Windows 8 and Windows Server 2012 and has been revised as SMB 3.02 with Windows 8.1 and Windows Server 2012 R2. Earlier releases of Windows support different versions of SMB. Windows 7 and Windows Server 2008 R2 support SMB 2.1. Windows Vista and Windows Server 2008 support SMB 2.0.

SMB 2.1 was an incremental improvement over SMB 2.0 and brought several important changes for file sharing, including support for BranchCache and large maximum transmission units (MTUs). SMB 3.0 has the following important improvements:

  • SMB Direct. Provides support for network adapters that have Remote Direct Memory Access (RDMA) capability, allowing fast, offloaded data transfers and helping achieve high speeds and low latency while using few CPU resources. Previously, this capability was one of the key advantages of Fibre Channel block storage.

  • SMB encryption. Provides secure data transfer by encrypting data automatically and without having to deploy Internet Protocol security (IPsec) or another encryption solution. SMB encryption can be enabled for an entire server (meaning for all its file shares) or for individual file shares as needed.

  • SMB Multichannel. Allows servers to use multiple connections and network interfaces simultaneously, increasing fault tolerance and throughput. Configure network interface card (NIC) teaming to take advantage of this feature.

  • SMB scale-out. Allows clustered file servers in an active-active configuration to aggregate bandwidth across the cluster. This provides simultaneous access to data files through all nodes in the cluster and enables administrators to load balance across cluster nodes simply by moving file server clients.

  • SMB signing. Introduces AES-CCM and AES-CMAC for signing. Typically, signing with Advanced Encryption Standard (AES) is dramatically faster than signing with HMAC-SHA256 (which SMB 2/SMB 2.1 used).

  • SMB Transparent Failover. Enables administrators to perform maintenance on nodes in a clustered file server without affecting applications storing data on the server’s file shares. If a failure occurs, SMB clients transparently reconnect to another cluster node. This provides the benefits of a multicontroller storage array without having to purchase one.

Note

You not only can use the SMB Direct, SMB Multichannel, and SMB scale-out features to implement manageable, scalable active-active file shares, you also can use these features to share an existing Fibre Channel SAN’s storage over SMB 3.0. This gives you a gateway to a SAN and extends your storage options.

Keep in mind that SMB is a client/server technology. For backward compatibility, newer clients continue to support older versions of the technology. While establishing a connection to a file share, an SMB client negotiates the SMB version to use for that connection based on the highest commonly supported SMB version. This process is referred to as dialect negotiation.

During dialect negotiation, the version downgrade is automatic, such that an SMB 3.0 client connecting to an SMB 2.1 server will use SMB 2.1 for that connection. Because earlier versions of SMB are less secure, forcing a client to downgrade the version used is one way someone might try to gain unauthorized access.

SMB 3.0 includes a security feature that attempts to detect forced downgrade attempts. If such an attempt is detected, the connection is disconnected, and Event ID 1005 is logged in the Microsoft-Windows-SmbServer/Operational log. This security feature works only when a client tries to force a downgrade from SMB 3.0 to SMB 2.0/SMB 2.1. It doesn’t work if a client attempts to downgrade to SMB 1.0. For this reason, Microsoft recommends disabling support for SMB 1.0, which is only used by early Windows operating systems.
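
You can check which dialect existing connections actually negotiated. From a client, Get-SmbConnection shows the dialect in use with each server; on the file server, Get-SmbSession shows the dialect for each client session. A quick sketch:

Get-SmbConnection | Format-Table ServerName, ShareName, Dialect
Get-SmbSession | Format-Table ClientComputerName, Dialect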

SMB 3.02 adds several improvements over SMB 3.0, including the following:

  • Performance enhancements for SMB Direct

  • Support for multiple SMB instances on scale-out file servers

  • Automatic rebalancing of client connections to scale-out file servers

If you want to ensure that SMB encryption is used whenever possible, you can enable SMB encryption on either a per-server or per–file share basis. To enable encryption for an entire server and all its SMB file shares, run the following command at an elevated Windows PowerShell prompt on the server:

Set-SmbServerConfiguration -EncryptData $true

To enable encryption for a specific file share rather than an entire server, run the following command at an elevated Windows PowerShell prompt on the server:

Set-SmbShare -Name ShareName -EncryptData $true

Here, ShareName is the name of the share for which encryption should be used when possible, such as the following:

Set-SmbShare -Name CorpData -EncryptData $true

You also can turn on encryption when you create a share as well. To do this, run the following command at an elevated Windows PowerShell prompt on the server:

New-SmbShare -Name ShareName -Path PathName -EncryptData $true

Here, ShareName is the name of the share for which encryption should be used when possible, and PathName is the path to an existing folder to share, such as the following:

New-SmbShare -Name CorpData -Path D:\Data -EncryptData $true

When you want to apply a configuration change, such as enabling encryption or disabling SMB 1.0, across multiple file servers, you can invoke remote commands. Consider the following example, which disables SMB 1.0 on each listed server:

$servers = Get-Content C:\files\server-list.txt
Invoke-Command -ComputerName $servers -ScriptBlock {Set-SmbServerConfiguration -EnableSMB1Protocol $false}

Here, C:\files\server-list.txt is the path to a text file containing a list of the file servers to configure. In this file, each file server should be listed on a separate line, as shown here:

FileServer12
FileServer23
FileServer45

The command will then be invoked on each of the file servers.

Installing and configuring file services

File servers are central repositories for an organization’s data. As you seek to manage and distribute the data stored on your organization’s file servers, you might find that you need to optimize file and storage services. Although basic file and storage services are installed by default on servers running Windows Server 2012 R2, you must specifically configure other services and features as they’re needed. Use the Add Roles And Features Wizard in Server Manager to add the appropriate role services and features and then use the related management tools to configure the role services and features as needed.

Configuring the File And Storage Services role

You can add role services and features to a file server by following these steps:

  1. In Server Manager, tap or click Manage and then tap or click Add Roles And Features or select Add Roles And Features in the Quick Start pane. This starts the Add Roles And Features Wizard. If the wizard displays the Before You Begin page, read the Welcome text and then tap or click Next.

    Note

    Beginning with Windows Server 2012, binary source files for roles, role services, and features can be removed to enhance security. If the binaries for the tools you want to use have been removed, you need to install the tools by specifying a source. For more information about role and feature binaries, see Chapter 6.

  2. On the Select Installation Type page, Role-Based Or Feature-Based Installation is selected by default. Tap or click Next.

  3. On the Select Destination Server page, you can choose to install roles and features on running servers or virtual hard disks. After you make your selection, do one of the following and then tap or click Next:

    1. Select the server that you want to configure. Keep in mind that only servers running Windows Server 2012 R2 and that have been added for management in Server Manager are listed.

    2. Select the server host to use and then type the UNC path to the offline virtual hard disk (VHD) file on that server, as shown in Figure 12-3. Keep in mind that Windows Server 2012 R2 must already be installed on the VHD. Alternatively, tap or click Browse and then use the Browse For Virtual Hard Disks dialog box to locate the offline VHD.


    Figure 12-3. If you are adding roles and features to a VHD, specify the UNC path to the VHD.

  4. On the Select Server Roles page, select File And Storage Services. Expand the related node and select the additional role services to install, as shown in Figure 12-4. If additional features are required to install a role, you see an additional dialog box. Tap or click Add Features to close the dialog box and add the required features to the server installation. When you are ready to continue, tap or click Next.


    Figure 12-4. Select the appropriate role services for the file server.

  5. On the Select Features page, shown in Figure 12-5, select any features you want to install. If additional features are required to install a feature you selected, you see an additional dialog box. Tap or click Add Features to close the dialog box and add the required features to the server installation. When you are ready to continue, tap or click Next.


    Figure 12-5. Select the additional features for the file server.

  6. On the Confirm Installation Selections page, tap or click the Export Configuration Settings link to generate an installation report that can be displayed in Internet Explorer.

  7. If the server on which you want to install roles or features doesn't have all the required binary source files, the server gets the files through Windows Update by default or from a location specified in Group Policy. You also can specify an alternate path for the required source files. To do this, click the Specify An Alternate Source Path link, type that alternate path in the box provided, and then tap or click OK. For network shares, enter the UNC path to the share, such as \\CorpServer14\WinServer2012. For mounted Windows images, enter the Windows Imaging (WIM) path prefixed with WIM and including the index of the image to use, such as WIM:\\CorpServer14\WinServer2012\install.wim:4.

  8. After you review the installation options and save them as necessary, tap or click Install to begin the installation process. The Installation Progress page tracks the progress of the installation. If you close the wizard, tap or click the Notifications icon in Server Manager and then tap or click the link provided to reopen the wizard.

  9. When Setup finishes installing the server with the roles and features you selected, the Installation Progress page is updated to reflect this. Review the installation details to ensure that all phases of the installation were completed successfully. Note any additional actions that might be required to complete the installation, such as restarting the server or performing additional installation tasks. If any portion of the installation failed, note the reason for the failure. Review the Server Manager entries for installation problems and take corrective actions as appropriate.
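
As an alternative to the wizard, you can add file and storage role services from an elevated Windows PowerShell prompt. The following is a sketch; the feature names shown are typical, and you can confirm them on your server with Get-WindowsFeature:

Get-WindowsFeature *FS*
Install-WindowsFeature FS-Resource-Manager, FS-DFS-Namespace, FS-Data-Deduplication -IncludeManagementTools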

Configuring multipath I/O

Hardware vendors typically supply a Device Specific Module (DSM) for SAN hardware and software for configuring multipath I/O. That said, the Multipath I/O feature includes the Microsoft DSM and some basic configuration options. The Microsoft DSM supports the Active/Active controller model and the asymmetric logical unit access controller model. It also implements the failover, failback, and load-balancing path-selection policies. Failover policies enable you to configure a secondary path that should be used if a preferred path fails. If you want the preferred path to be used automatically when it becomes operational again, you can configure a failback policy.

Several types of load-balancing policies are available, including round-robin, dynamic least queue depth, and weighted path. With round-robin, you can configure the DSM to use all available I/O paths in a balanced, round-robin fashion. With dynamic least queue depth, you can configure the DSM to route I/O to the path with the smallest number of outstanding requests. With weighted path, you assign each path a weight to indicate its relative priority with regard to a particular application, and the DSM selects the path with the least weight among the available paths.

Devices that support the Active/Active controller model are referred to as Active/Active devices and, by default, are configured to use round-robin. Generally, devices that support the asymmetric logical unit access (ALUA) controller model comply with the SCSI Primary Commands-3 (SPC-3) standard or later and, by default, are configured to use failover.

You manage the multipath I/O (MPIO) configuration using the MPIO Properties dialog box, the Mpclaim command-line tool, or the cmdlets of the MPIO module in Windows PowerShell. After you install the Multipath I/O feature by using the Add Roles And Features Wizard, these tools are available on the server. You open the MPIO Properties dialog box, shown in Figure 12-6, by selecting MPIO on the Tools menu in Server Manager.

Note

You can get a list of the available cmdlets for working with MPIO by typing get-command -module mpio at a Windows PowerShell prompt.


Figure 12-6. Manage the multipath I/O configuration.

After you enable MPIO, you might also want to do the following:

  • Enable automatic claiming of iSCSI devices for MPIO.

  • Set the default load-balancing policy.

  • Set the Windows disk timeout.

For MPIO to manage a device, you must first add the hardware ID for the device to MPIO. You can add devices either manually or automatically.

Automatic claiming of iSCSI devices allows MPIO to configure available iSCSI devices with multiple paths automatically. Enable this feature by entering the following at an elevated Windows PowerShell prompt:

Enable-MSDSMAutomaticClaim -BusType iSCSI

Load balancing and fault tolerance are core features of MPIO. You set the default load-balancing policy by using Set-MSDSMGlobalDefaultLoadBalancePolicy. The default policies available are:

  • Failover only, which allows one active path, with all other paths designated as standby paths for failover. Use the FOO value.

  • Round-robin, which sets all available paths to be load-balanced using a round-robin technique. Use the RR value.

  • Least queue path, which load-balances by sending I/O to the path with the fewest I/O requests. Use the LQD value.

  • Least blocks, which load-balances by sending I/O to the path with the smallest number of data blocks currently being processed. Use the LB value.

Set the default load-balancing policy by entering the following command at an elevated Windows PowerShell prompt:

Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy PolicyValue

Here, PolicyValue is one of the accepted policy values—FOO, RR, LQD, LB, or NONE.

You set the timeout value for new disks by using Set-MPIOSetting. The basic syntax is:

Set-MPIOSetting -NewDiskTimeout NumSeconds

Here, NumSeconds is the number of seconds to wait before reaching the timeout.

Set-MPIOSetting accepts other parameters as well:

  • –PathVerifyEnabled. When set to –PathVerifyEnabled $true, path verification by MPIO is enabled on all paths according to –PathVerificationPeriod. By default, this feature is disabled.

  • –PathVerificationPeriod. When –PathVerifyEnabled is set to $true, this parameter sets the interval for path verification. For example, a value of 60 verifies MPIO on all paths every 60 seconds. The default value is every 30 seconds.

  • –PDORemovePeriod. Controls the amount of time (in seconds) that a multipath pseudo-LUN will remain in system memory even after losing all paths to the device. When the removal period is exceeded, all pending I/O operations are stopped and set as failed, and the failure is passed on to applications. The default value is 20 seconds.

  • –RetryCount. Controls the number of times a failed I/O is retried. The default value is 3.

  • –RetryInterval. Sets the number of seconds to wait before retrying a failed I/O. The default is 1 second.

Before you change MPIO settings, you should determine what the current settings are. You can do this by entering Get-MPIOSetting at the Windows PowerShell prompt.
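
For example, you might review the current values, raise the disk timeout to 60 seconds, and then confirm the change (a sketch; 60 seconds is an arbitrary value):

Get-MPIOSetting
Set-MPIOSetting -NewDiskTimeout 60
Get-MPIOSetting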

Adding and removing multipath hardware devices

You manually add devices to MPIO by using the MPIO Properties dialog box, which is opened by selecting MPIO on the Tools menu in Server Manager. To configure a device manually to use multipath I/O, follow these steps:

  1. Open the MPIO Properties dialog box. On the MPIO Devices tab, you see a list of currently configured multipath devices. If the device you want to work with is not listed, tap or click Add.

  2. In the Add MPIO Support dialog box, type the vendor ID as an eight-character string followed by the product ID for the device as a 16-character string. Tap or click OK.

  3. You are prompted to restart the server to complete the operation. Tap or click Yes to restart the server.

At an elevated command prompt, you also can use Mpclaim to configure devices to use multipath I/O. The basic syntax for installing a device follows:

Mpclaim -r -i [-a | -c | -d DeviceId]

The –r parameter indicates that you want to restart the server to allow the device installation to be completed. Although you can suppress the restart by using the –n parameter instead of –r, the device will not be installed and available for use until you restart the server. Use the –a parameter to configure multipath I/O support for all compatible devices. Use the –c parameter to configure multipath I/O support for all SPC-3–compliant devices. Use the –d parameter followed by a device’s hardware ID to install a specific hardware device. The hardware ID of a device includes the vendor ID as an eight-character string followed by the product ID for the device as a 16-character string. In the following example, you install a device with EMSVendo0000234767834215 as the hardware ID:

Mpclaim -r -i -d EMSVendo0000234767834215

Alternatively, you can use Get-MSDSMSupportedHw to list available devices by their hardware ID and New-MSDSMSupportedHw to add a device to MPIO.
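
For example, the following sketch adds the same device by using the MPIO module cmdlets, splitting the hardware ID into its vendor and product strings:

Get-MSDSMSupportedHw
New-MSDSMSupportedHw -VendorId EMSVendo -ProductId 0000234767834215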

By using the MPIO Properties dialog box, you can remove a device from MPIO by following these steps:

  1. Open the MPIO Properties dialog box. The MPIO Devices tab shows a list of currently configured multipath devices.

  2. Select the device that should no longer use multipath I/O and then tap or click Remove.

  3. You are prompted to restart the server to complete the operation. Tap or click Yes to restart the server.

At an elevated command prompt, you can use Mpclaim to uninstall multipath I/O for a device as well. The basic syntax for uninstalling a device follows:

Mpclaim -r -u [-a | -c | -d DeviceId]

Except for the –u parameter for uninstalling a device, the other parameters are the same as when you are installing MPIO for a device. The following example uninstalls the previously installed device:

Mpclaim -r -u -d EMSVendo0000234767834215

Alternatively, you can use Get-MSDSMSupportedHw to list available devices by their hardware ID and Remove-MSDSMSupportedHw to remove a device from MPIO.

Managing and maintaining MPIO

The MPIO Properties dialog box has several other tabs that you can use for general management of MPIO:

  • Discover Multi-Paths. When you select the Discover Multi-Paths tab, Windows runs a discovery algorithm to examine added device instances and determine whether multiple instances represent the same LUN through different paths. Available multipath devices are then listed by their hardware ID. The hardware ID combines a vendor’s name and a product string that matches a device ID that is maintained by MPIO. Tap or click Add to add hardware IDs for Fibre Channel devices that use Microsoft DSM.

  • DSM Install. Use the options on this tab to install DSMs provided by independent hardware vendors (IHVs). Keep in mind that many SPC-3–compliant storage arrays can use the Microsoft DSM, and you might not need to install an IHV DSM.

  • Configure Snapshot. Use the options on this tab to save the current MPIO configuration to a log file. Because the log includes details about the DSM, paths, and path states, you can use this information for troubleshooting.

You configure the load-balancing policy for LUNs by using their disk properties. In Computer Management, select Disk Management and then press and hold or right-click the disk you want to work with. In the Properties dialog box, click the MPIO tab. Use the Select MPIO Policy list to choose the load-balancing policy for the selected disk. If you use Failover Only as the load-balancing policy, you can configure a preferred path to the storage. This path is used for automatic failback.

Meeting performance, capacity, and availability requirements

Whether you are working with internal or external disks, you should follow the same basic principles to help ensure that the chosen storage solutions meet your performance, capacity, and availability requirements. Storage performance is primarily a factor of the disk’s access time (how long it takes to register a request and scan the disk), seek time (how long it takes to find the requested data), and transfer rate (how long it takes to read and write data). Storage capacity relates to how much information you can store on a volume or logical disk.

Although early NTFS implementations imposed much lower practical limits on volume and file size, later implementations extended these limits considerably. You can have a maximum NTFS volume size of 256 TBs minus 64 KBs when you are using 64 KB clusters, and 16 TBs minus 4 KBs when you are using 4 KB clusters. The maximum file size on an NTFS volume is 16 TBs minus 64 KBs. Further, a maximum of 4,294,967,294 files can be created on each volume, and a single server can manage hundreds of volumes (theoretically, around 2,000).

Storage availability relates to fault tolerance. You ensure availability for essential applications and services by using availability technologies. If a server has a problem or if a particular application or service fails, you have a way to continue operations by failing over to another server. In addition to clusters, you can help ensure availability by saving redundant copies of data, keeping spare parts, and, if possible, making standby servers available. At the disk and data levels, availability is enhanced by using redundant array of independent disks (RAID) technologies. RAID enables you to combine disks and improve fault tolerance.

RAID can be implemented in hardware or software. When hardware RAID controllers are installed on servers, the internal controller can be used to implement RAID on the server’s internal disks. When a server is allocated storage from a storage array, one or more logical unit numbers, or LUNs, are assigned. Each LUN is a virtual disk. Typically, hardware RAID configured within the storage array is used to spread the LUN across multiple physical disks (also called spindles).

Windows Server 2012 R2 supports several software RAID options, including traditional software-based RAID and Storage Spaces. Traditional software RAID is the software-based RAID technology built into the operating system and available in earlier releases of Windows. Storage Spaces provide resilient storage using new technologies and are preferred over traditional software RAID. However, each of these software-implemented RAID levels requires processing power and memory resources to maintain. By using hardware RAID, you use separate hardware controllers (RAID controllers) to maintain the disk arrays. Although this requires the purchase of additional hardware, it takes the burden off the server and can improve performance. Why? In a hardware-implemented RAID system, a server’s processing power and memory aren’t used to maintain the disk arrays. Instead, the hardware RAID controller (which is installed internally or in a storage array) handles all the necessary processing tasks.

The RAID levels available with a hardware implementation depend on the hardware controller or storage array and the vendor's implementation of RAID technologies. Some hardware RAID configurations include RAID 0 (disk striping), RAID 1 (disk mirroring), RAID 0+1 (disk striping with mirroring), RAID 5 (disk striping with parity), and RAID 5+1 (disk striping with parity plus mirroring). Table 12-1 provides a summary of these RAID technologies. The table entries are organized from the highest RAID level to the lowest.

Table 12-1. Hardware RAID configurations for clusters

  • RAID 5+1, disk striping with parity plus mirroring. Uses at least six volumes, with each one on a separate drive. Each volume is configured identically as a mirrored striped set with parity error checking. Provides a high level of fault tolerance but has a lot of overhead.

  • RAID 5, disk striping with parity. Uses at least three volumes, with each one on a separate drive. Each volume is configured as a striped set with parity error checking. In the case of failure, data can be recovered. Provides fault tolerance with less overhead than mirroring and better read performance than disk mirroring.

  • RAID 1, disk mirroring. Uses two volumes on two drives. The drives are configured identically, and data is written to both drives. If one drive fails, no data is lost because the other drive contains the data. This approach does not include disk striping. Provides redundancy with better write performance than disk striping with parity.

  • RAID 0+1, disk striping with mirroring. Uses two or more volumes, with each one on a separate drive. The volumes are striped and mirrored. Data is written sequentially to drives that are identically configured. Provides redundancy with good read and write performance.

  • RAID 0, disk striping. Uses two or more volumes, with each one on a separate drive. Volumes are configured as a striped set. Data is broken into blocks, called stripes, and then written sequentially to all drives in the striped set. Provides speed and performance without data protection.

Configuring Hyper-V

The Microsoft virtualization technology is Hyper-V. Hyper-V is a virtual machine technology that enables multiple guest operating systems to run concurrently on one computer and provide separate applications and services to client computers. When you deploy Hyper-V, the Windows hypervisor acts as the virtual machine engine, providing the necessary layer of software for installing guest operating systems.

Understanding Hyper-V

Hyper-V can be installed only on computers with 64-bit processors that implement hardware-assisted virtualization and hardware-enforced data execution protection. Specifically, you must enable virtualization support in firmware and either Intel XD bit (execute disable bit) or AMD NX bit (no execute bit) as appropriate.

Virtualization can offer performance improvements, reduce the number of servers, and reduce the total cost of ownership (TCO). Although you can use both Windows 8.1 and Windows Server 2012 R2 to deploy virtualized computers, Hyper-V for Windows Server is very different from Client Hyper-V for Windows 8.1. The focus in this section is on Hyper-V for Windows Server 2012 R2.

Windows Server 2012 R2 supports AMD Virtualization (AMD-V) and Intel Virtualization Technology (Intel VT). AMD-V is included in second-generation and later AMD Opteron processors and other AMD processors. Third-generation AMD Opteron processors feature Rapid Virtualization Indexing (RVI) to accelerate the performance of virtualized applications. Intel VT is included in most current Intel Xeon processors in addition to Intel vPro and some other Intel processors. Keep in mind that older processors with virtualization might have different features from newer processors, and these differences can present special challenges when you are migrating from one hardware platform to another.

Important

Windows Server 2012 R2 also supports second-level address translation (SLAT) as implemented by Intel and AMD processors. SLAT adds a second level of paging below the architectural paging tables in the server’s processors. This improves performance by providing an indirection layer from virtual machine memory access to physical memory access. On Intel-based processors, this feature is called extended page tables (EPTs), and on AMD-based processors, this feature is called nested page tables (NPTs).

Windows Server 2012 R2 supports many virtualization features, including live migration and dynamic virtual machine storage. You can use live migration to move running virtual machines transparently either from one node of a cluster to another or from one nonclustered server to another. You also can perform multiple live migrations simultaneously. With dynamic virtual machine storage, you can add or remove virtual hard disks and physical disks while a virtual machine is running. You also can move the virtual disks of running virtual machines from one storage location to another without downtime.
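
For example, live migration between nonclustered hosts can be enabled and driven from PowerShell. The following is a hedged sketch; the virtual machine and destination host names are placeholders, and CredSSP is shown only because it avoids configuring constrained delegation:

# Allow this host to send and receive live migrations over any available network.
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType CredSSP -UseAnyNetworkForMigration $true

# Move a running virtual machine to another Hyper-V host without downtime.
Move-VM -Name "AppServer02" -DestinationHost "HYPERV02"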

Virtual machines also can be stored on SMB 3.0 file shares. You can use this feature either by creating the virtual machine and its virtual hard disk directly on the SMB 3.0 file share or by creating the virtual machine on local storage and then migrating the virtual machine storage from the local configuration to a file-share configuration. Hyper-V also supports connections to Fibre Channel storage by using virtual Fibre Channel.
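
As a sketch of the storage scenario, assuming an SMB 3.0 share that the Hyper-V host and its computer account are permitted to use (the names and path are placeholders):

# Move a running virtual machine's configuration files and virtual hard
# disks to an SMB 3.0 file share without downtime.
Move-VMStorage -VMName "AppServer02" -DestinationStoragePath "\\FileServer01\VMStore\AppServer02"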

Installing Hyper-V

Virtual machines require virtual switches to communicate with other computers. When you install Hyper-V, you can create one virtual switch for each available network adapter. After installing Hyper-V, you can create and manage virtual switches by using Virtual Switch Manager. Microsoft recommends reserving one network adapter for remote access to the server. You do this by not designating that adapter for use with a virtual switch.

You can install Hyper-V on a server with a virtualization-enabled processor by completing these steps (a PowerShell alternative is sketched after the procedure):

  1. In Server Manager, tap or click Manage and then tap or click Add Roles And Features. If the wizard displays the Before You Begin page, read the Welcome text and then tap or click Next.

  2. On the Select Installation Type page, Role-Based Or Feature-Based Installation is selected by default. Tap or click Next.

  3. On the Select Destination Server page, select the server on which you want to install Hyper-V and then tap or click Next. Keep in mind that only servers that are running Windows Server 2012 R2 and have been added for management in Server Manager are listed.

  4. On the Select Server Roles page, select Hyper-V as the role to install. If additional features are required to install a role, you see an additional dialog box. Tap or click Add Features to close the dialog box and add the required features to the server installation. When you are ready to continue, tap or click Next three times, skipping the Features page and the Hyper-V page.

  5. On the Create Virtual Switches page, shown in Figure 12-7, select a network adapter on which to create a virtual switch. A virtual switch is needed so that virtual machines can communicate with other computers. The virtual switch enables virtual machines to connect to the physical network. When you are ready to continue, tap or click Next.

    Figure 12-7. Select the network adapter to use as a virtual switch.

  6. On the Virtual Machine Migration page, you can enable live migrations of virtual machines on this server by selecting the check box provided. You don’t have to enable this feature now; you can enable it later by modifying the Hyper-V settings. If you do enable live migrations, you also must choose either the Credential Security Support Provider (CredSSP) protocol or Kerberos for authentication. Kerberos is more secure but requires you to configure constrained delegation; CredSSP is less secure but doesn’t require constrained delegation. When you are ready to continue, tap or click Next.

  7. On the Default Stores page, you can accept the current default locations for virtual hard disk files and virtual machine configuration files or enter new default locations. Regardless of your choices, you can modify the defaults later, using the Hyper-V settings. When you are ready to continue, tap or click Next.

  8. On the Confirm Installation Selections page, tap or click the Export Configuration Settings link to generate an installation report that can be displayed in Internet Explorer. If the server on which you want to install Hyper-V doesn’t have all the required binary source files, the server gets the files from Windows Update by default or from a location specified in Group Policy. You also can specify an alternate path for the required source files. To do this, click the Specify An Alternate Source Path link, type the alternate path in the box provided, and then tap or click OK.

  9. Because a restart is required to complete the installation of Hyper-V, you might want to select the Restart The Destination Server check box. Tap or click Install to begin the installation process. The Installation Progress page tracks the progress of the installation. If you close the wizard, tap or click the Notifications icon in Server Manager and then tap or click the link provided to reopen the wizard.

  10. When Setup finishes installing Hyper-V, the Installation Progress page is updated to reflect this. Review the installation details to ensure that all phases of the installation were completed successfully. If you didn’t restart the server, a restart will be pending and required to complete the installation.
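
As an alternative to the Add Roles And Features Wizard, the role and an external virtual switch can be added from an elevated PowerShell session. This is a minimal sketch; the switch and adapter names are placeholders:

# Install the Hyper-V role plus its management tools, then restart the server.
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

# After the restart, bind an external virtual switch to a physical adapter.
# List available adapter names with Get-NetAdapter before choosing one.
New-VMSwitch -Name "External Switch" -NetAdapterName "Ethernet" -AllowManagementOS $true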

Creating virtual machines

Installing Hyper-V on a server establishes the server as a virtualization server. Each virtual machine you install on the server must be assigned resources and then configured. The number of virtual machines you can run on any individual server depends on the server’s hardware configuration and workload. During setup, you specify the amount of memory available to a virtual machine. Although you can change that allocation later, memory that is actively allocated to a running virtual machine is not available for other uses on the host.

You create and manage virtual machines by using Hyper-V Manager, shown in Figure 12-8. Start Hyper-V Manager by selecting Hyper-V Manager on the Tools menu in Server Manager.

Figure 12-8. Use Hyper-V Manager to install and manage virtual machines.

To install and configure a virtual machine, complete the following steps (a PowerShell alternative is sketched after the procedure):

  1. In Hyper-V Manager, press and hold or right-click the server node in the left pane, point to New, and then select Virtual Machine. This starts the New Virtual Machine Wizard.

  2. Tap or click Next to display the Specify Name And Location page, shown in Figure 12-9. In the Name text box, enter a name for the virtual machine, such as AppServer02.

    Figure 12-9. Set the name for the virtual machine and, optionally, its storage location.

  3. By default, the virtual machine data is stored in the default location for the server. To select a different location, select the Store The Virtual Machine In A Different Location check box, tap or click Browse, and then use the Select Folder dialog box to select a save location.

  4. Tap or click Next. On the Specify Generation page, specify whether you want to create a Generation 1 or Generation 2 virtual machine. Use Generation 1 if you plan to deploy non-Windows operating systems and versions of Windows prior to Windows 8 or Windows Server 2012. Use Generation 2 if you plan to deploy Windows Server 2012, Windows Server 2012 R2, 64-bit versions of Windows 8, or 64-bit versions of Windows 8.1.

    Note

    Generation 1 provides the same support as previous versions of Hyper-V. Generation 2 supports secure boot, boot from a SCSI virtual hard disk, boot from a SCSI virtual DVD, PXE boot by using a standard network adapter, and Unified Extensible Firmware Interface (UEFI) firmware.

  5. Tap or click Next. On the Assign Memory page, specify the amount of memory to allocate to the virtual machine. In most cases, you should reserve at least the minimum amount of memory recommended for the operating system you plan to install. You might also want to enable dynamic memory allocation.

  6. Tap or click Next. On the Configure Networking page, use the Connection list to select a network adapter to use. Each new virtual machine includes a network adapter, and you can configure the adapter to use an available virtual switch for communicating with other computers.

  7. Tap or click Next. On the Connect Virtual Hard Disk page, use the options provided to name and set the location of a virtual hard disk for the virtual machine. Each virtual machine requires a virtual hard disk so that you can install an operating system and required applications.

  8. Tap or click Next. On the Installation Options page, select Install An Operating System From A Boot CD/DVD-ROM. If you have physical distribution media, insert the distribution media and then specify the CD/DVD drive to use. If you want to install from an .iso image, select Image File, tap or click Browse, and then use the Open dialog box to select the image file to use.

  9. Tap or click Next and then tap or click Finish.

  10. In Hyper-V Manager, press and hold or right-click the name of the virtual machine and then tap or click Connect.

  11. In the Virtual Machine Connection window, tap or click Start. After the virtual machine is initialized, the operating system installation should start automatically. Continue with the operating system installation as you normally would.
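
The same virtual machine can also be created with the Hyper-V cmdlets instead of the wizard. This is a hedged sketch; the names, paths, and sizes are placeholders and should match your environment:

# Create a Generation 2 virtual machine with a new virtual hard disk and
# connect it to an existing virtual switch.
New-VM -Name "AppServer02" -Generation 2 -MemoryStartupBytes 2GB `
    -NewVHDPath "D:\VMs\AppServer02.vhdx" -NewVHDSizeBytes 80GB `
    -SwitchName "External Switch"

# Attach the installation image, make it the first boot device, and start
# the virtual machine so that Setup begins from the .iso file.
Add-VMDvdDrive -VMName "AppServer02" -Path "D:\ISO\WindowsServer2012R2.iso"
Set-VMFirmware -VMName "AppServer02" -FirstBootDevice (Get-VMDvdDrive -VMName "AppServer02")
Start-VM -Name "AppServer02"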

When the installation is complete, log on to the virtual machine and configure it as you would any other server. From then on, you manage the virtual machine much as you would any other computer except that you can externally control its state, available resources, and hardware devices by using Hyper-V Manager. In addition, when it comes to backups, several approaches are available:

  • Back up the host server and all virtual machine data.

  • Back up the host server and only the configuration data for virtual machines.

  • Log on to virtual machines and perform normal backups as you would with any other server.

  • Use Hyper-V Manager to create point-in-time snapshots of virtual machines.

Ideally, you should use a combination of these approaches to ensure that your host server and virtual machines are protected. In some cases, you might want to back up the host server and configuration data and then log on to each virtual machine and use normal backups. Other times, you might want to back up the host machine and all virtual machine data. You will likely want to supplement your backup strategy by creating point-in-time snapshots of virtual machines.
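
For the snapshot-based approach, a minimal PowerShell sketch follows; the virtual machine name and export path are placeholders:

# Create a point-in-time snapshot (the cmdlets call this a checkpoint).
Checkpoint-VM -Name "AppServer02" -SnapshotName "Before maintenance"

# Export the virtual machine's configuration and disks for offline safekeeping.
Export-VM -Name "AppServer02" -Path "E:\VMBackups"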
