Chapter 19. Storage Management

This chapter introduces Microsoft Windows Server 2003 storage management. Data is stored throughout the enterprise on a variety of systems and storage devices, most commonly hard disk drives but also storage management devices and removable media devices. Managing and maintaining these systems and storage devices is the responsibility of administrators. If a storage device fails, runs out of space, or encounters other problems, serious negative consequences can result: servers could crash, applications could stop working, and users could lose data, all of which affect the productivity of users and the organization's bottom line. You can help prevent such problems and losses by implementing sound storage management procedures that allow you to evaluate your current and future storage needs and that help you meet current and future performance, capacity, and availability requirements. You then must configure storage appropriately for the requirements you've defined.

Essential Storage Technologies

One of the few constants in Microsoft Windows operating system administration is that data storage needs are ever increasing. It seems that only a few years ago a 120-gigabyte (GB) hard disk was huge and something primarily reserved for Windows servers rather than Windows workstations. Now Windows workstations ship with 120-GB hard disks as standard equipment, and some even ship with striped drives that allow a workstation to have a single large volume that spans several drives. All that data must ultimately be backed up and stored somewhere other than on the workstations themselves, which has meant that back-end storage solutions have had to scale dramatically as well. Server solutions that were once used for enterprise-wide implementations are now being used increasingly at the departmental level, and the underlying architecture for the related storage solutions has had to change dramatically to keep up.

Using Internal and External Storage Devices

To help meet the increasing demand for data storage and changing requirements, servers are being deployed with a mix of internal and external storage. In internal storage configurations, drives are connected inside the server chassis to a local disk controller and are said to be directly attached. You'll sometimes see an internal storage device referred to as direct-attached storage (DAS).

In external storage configurations, servers connect to external, separately managed collections of storage devices that are either network-attached or part of a storage area network. Although the terms network-attached storage (NAS) and storage area network (SAN) are sometimes used as if they are one and the same, the technologies differ in how servers communicate with the external drives.

NAS devices are connected through a regular Transmission Control Protocol/Internet Protocol (TCP/IP) network. All server-storage communications go across the organization's local area network (LAN), as shown in Figure 19-1. This means the available bandwidth on the network can be shared by clients, servers, and NAS devices. For best performance, the network should be running at 100 megabits per second (Mbps) or 1000 Mbps. Networks operating at slower speeds can experience a serious performance impact as clients, servers, and storage devices try to communicate using the limited bandwidth.

Figure 19-1. In a NAS, server-storage communications are on the LAN
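
Because a NAS device exposes ordinary file shares over TCP/IP, servers and clients reach it with standard networking commands. For example, the following sketch maps a drive letter to a NAS share (the server and share names are hypothetical):

    rem Map a drive letter to a file share exposed by a NAS device.
    net use N: \\nas01\data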

A SAN is physically separate from the LAN and is independently managed. As shown in Figure 19-2, this isolates server-to-storage communications so that traffic doesn't impact communications between clients and servers. Several SAN technologies are available, including Fibre Channel, a more traditional SAN technology that delivers high reliability and performance, and Internet SCSI (iSCSI), a newer SAN technology that delivers good reliability and performance at a lower cost than Fibre Channel. As the name implies, Internet SCSI uses TCP/IP networking technologies on the SAN, allowing servers to communicate with storage devices using the IP protocol. The SAN is still isolated from the organization's LAN.

Figure 19-2. In a SAN, server-storage communications don't affect communications between clients and servers
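
For an iSCSI SAN, Windows Server 2003 relies on the Microsoft iSCSI Software Initiator (available as a separate download) to discover and log on to storage targets. The following is a minimal sketch using its iscsicli command-line tool; the portal address is hypothetical, and 3260 is the standard iSCSI port:

    rem Register the storage system's target portal, then list the
    rem iSCSI targets it exposes.
    iscsicli AddTargetPortal 192.168.10.50 3260
    iscsicli ListTargets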

Improving Storage Management

Because of the increasing use of SANs, Windows Server 2003 includes many new and enhanced features for working with SANs and handling storage management in general. These improvements include the following:

  • Volume Shadow Copy service (VSS) VSS allows administrators to create point-in-time copies of volumes and individual files called snapshots. This makes it possible to back up these items while files are open and applications are running and to restore them to a specific point in time. VSS also makes it possible to create point-in-time copies of documents on shared folders called shadow copies.

    Note

    Users can recover their own files when VSS is enabled. Once you configure shadow copy, point-in-time backups of documents contained in the designated shared folders are created automatically, and users can quickly recover files that have been deleted or unintentionally altered as long as the Shadow Copy Client has been installed on their computer. For more information about VSS and the Shadow Copy Client, see Chapter 22.

  • Virtual Disk Service (VDS) VDS makes it possible for storage devices from multiple vendors to interoperate. To do this, VDS provides application programming interfaces (APIs) that management tools and storage hardware can use, allowing for a unified interface for managing storage devices from multiple vendors and making it easier for administrators to manage a mixed-storage environment.

  • Volume automounting Volume automounting makes it possible to better manage the way volumes are mounted. By using the MOUNTVOL command, administrators can turn off volume automounting. By using volume mount points, administrators can mount volumes to empty NTFS folders, giving the volumes a drive path rather than a drive letter. This makes it easier to mount and unmount volumes, particularly with SANs, as shown in the sketch after this list.

  • Multipath I/O Multipath I/O makes it possible to configure as many as 32 separate physical paths to external storage devices, which can be used simultaneously and load balanced if necessary. The purpose of having multiple paths is redundancy and possibly increased throughput. If you have multiple host bus adapters as well, you improve the chances of recovery from a path failure. However, if a path failure occurs, there might be a short period of time when the drives on the SAN aren't accessible.

  • Distributed File System (DFS) DFS makes it possible to create a single directory tree that includes multiple file servers and their file shares. The DFS tree can contain more than 5,000 shared folders in a domain environment (or 50,000 shared folders on a stand-alone server), located on different servers, allowing users to easily find files or folders distributed across the enterprise. DFS directory trees can also be published in the Active Directory directory service so that they are easy to search.

    Tip

    DFS now supports multiple roots and closest-site selection

    New for Windows Server 2003 is the capability for a single server to host multiple DFS roots and use closest-site selection. The capability to host multiple DFS roots allows you to consolidate and reduce the number of servers needed to maintain DFS. By using closest-site selection, DFS uses Active Directory site metrics to route a client to the closest available file server.

  • File Replication Service (FRS) FRS makes it possible to synchronize data across the enterprise and is in fact the synchronization technology used by Active Directory. FRS works in conjunction with DFS to replicate data on file shares and automatically maintain synchronization of copies on multiple servers. New for Windows Server 2003 is the DFS Microsoft Management Console (MMC) snap-in, which allows administrators to configure replication. FRS is now capable of compressing replication traffic as well.
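
Here is the volume automounting sketch promised above: a minimal command-prompt example in which the folder path and volume GUID are placeholders (run MOUNTVOL with no arguments to list the actual volume GUID paths on your system):

    rem Turn off automatic mounting of new basic volumes.
    mountvol /N

    rem Mount a volume at an empty NTFS folder instead of a drive letter.
    md C:\Mounts\Data
    mountvol C:\Mounts\Data \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\

Running MOUNTVOL /E re-enables automatic mounting. Because mounted folders don't consume drive letters, this approach scales well when a server is attached to many SAN volumes.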

Windows Server 2003 adds several command-line tools for managing local storage. These tools include the following:

  • DiskPart Used to manage disks, partitions, and volumes. It is the command-line counterpart to the Disk Management tool and also includes features not found in the graphical user interface (GUI) tool, such as the capability to extend partitions on basic disks (see the sketch after this list).

  • Dfsutil Used to configure DFS, back up and restore DFS directory trees (namespaces), copy directory trees, and troubleshoot DFS. This tool is in the Windows Server 2003 Support Tools.

  • Fsutil Used to get detailed drive information and perform advanced file system maintenance. You can manage sparse files, reparse points, disk quotas, and other advanced features of NTFS.

  • Health_Chk Used to monitor or troubleshoot FRS. It works in conjunction with a number of other utilities, using them to retrieve and log the data necessary for troubleshooting. This tool is part of the Windows Server 2003 Support Tools.

  • Vssadmin Used to view and manage the Volume Shadow Copy service and its configuration.
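
To illustrate two of these tools, consider the following minimal sketch; the volume number and size are examples, and extending a basic partition requires contiguous unallocated space immediately following it. First, a DiskPart script, saved as extend.txt and run with diskpart /s extend.txt:

    rem Run "list volume" interactively first to identify the right volume.
    rem Extend basic volume 2 by 1024 MB into adjacent unallocated space.
    select volume 2
    extend size=1024

Then a read-only look at shadow copy configuration with Vssadmin:

    rem Review existing shadow copies and the storage allocated to them.
    vssadmin list shadows
    vssadmin list shadowstorage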

Booting from SANs and Using SANs with Clusters

Windows Server 2003 supports booting from a SAN, having multiple clusters attached to the same SAN, and having a mix of clusters and stand-alone servers attached to the same SAN. To boot from a SAN, the external storage devices and the host bus adapters of each server must be configured appropriately to allow booting from the SAN.

When multiple servers must boot from the same external storage device, either the SAN must be configured in a switched environment or each host must be directly attached to one of the storage subsystem's Fibre Channel ports. A switched or direct-to-port environment allows the servers to be isolated from each other, which is essential for booting from a SAN.

Tip

Fibre Channel–Arbitrated Loop isn't allowed

The use of a Fibre Channel–Arbitrated Loop (FC-AL) configuration is not supported because hubs typically don't allow the servers on the SAN to be isolated properly from each other. The same is true when you have multiple clusters attached to the same SAN or a mix of clusters and stand-alone servers attached to the same SAN.

Each server on the SAN must have exclusive access to the logical disk from which it is booting, and no other server on the SAN should be able to detect or access that logical disk. For multiple-cluster installations, the SAN must be configured so that a set of cluster disks is accessible only by one cluster and is completely hidden from the rest of the clusters. By default, Windows Server 2003 will attach and mount every logical disk that it detects when the host bus adapter driver loads, and if multiple servers mount the same disk, the file system can be damaged.

To prevent file system damage, the SAN must be configured in such a way that only one server can access a particular logical disk at a time. You can configure disks for exclusive access by using a type of logical unit number (LUN) management such as LUN masking or LUN zoning, or preferably a combination of these techniques. Keep in mind that LUN management isn't normally configured within Windows. It is instead configured at the level of the switch, storage subsystem, or host bus adapter.
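
As a Windows-side precaution that complements LUN masking and zoning (but doesn't replace them), you can disable automatic volume mounting before attaching a server to shared storage. The following minimal DiskPart script, run with diskpart /s noautomount.txt, is one way to do this; if your version of DiskPart lacks the AUTOMOUNT command, MOUNTVOL /N has the same effect:

    rem Prevent Windows from automatically mounting new volumes it
    rem detects, such as SAN LUNs that belong to another server.
    automount disable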

Meeting Performance, Capacity, and Availability Requirements

Whether you are working with internal or external disks, you should follow the same basic principles to help ensure the chosen storage solutions meet your performance, capacity, and availability requirements. Storage performance is primarily a function of the disk's access time (how long it takes to register a request and scan the disk), seek time (how long it takes to find the requested data), and transfer rate (how long it takes to read and write data). Storage capacity relates to how much information you can store on a volume or logical disk.

Although NTFS as implemented on Microsoft Windows NT 4 (NTFS 4) had a maximum volume size and file size limit of 32 GB, NTFS 5 as introduced with Windows NT 4 Service Pack 4 and Microsoft Windows 2000 Server extended this limit to 2 terabytes (TB). In Windows Server 2003, you have greatly extended limits. You can have a maximum NTFS volume size of 256 TB minus 64 KB using 64-KB clusters and 16 TB minus 4 KB using 4-KB clusters. Windows Server 2003 has a maximum file size on an NTFS volume of up to 16 TB minus 64 KB. Further, Windows Server 2003 supports a maximum of 4,294,967,294 files on each volume, and a single server can manage hundreds of volumes (theoretically, around 2000).
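
These volume limits follow from the NTFS implementation's use of at most 2^32 clusters per volume: roughly 2^32 x 4 KB = 16 TB with 4-KB clusters and 2^32 x 64 KB = 256 TB with 64-KB clusters, each minus one cluster. To check the cluster size of an existing volume, you can use a read-only Fsutil query (the drive letter is an example):

    rem Display NTFS details for the volume, including Bytes Per Cluster.
    fsutil fsinfo ntfsinfo C: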

Storage availability relates to fault tolerance. As discussed in Chapter 18, you ensure availability for essential applications and services by using cluster technologies. If a server has a problem or a particular application or service fails, you have a way to continue operations by failing over to another server. In addition to clusters, you can help ensure availability by saving redundant copies of data, keeping spare parts, and if possible making standby servers available. At the disk and data level, availability is enhanced by using redundant array of independent disks (RAID) technologies. RAID allows you to combine disks and to improve fault tolerance.

RAID can be implemented in software or in hardware. With software RAID, the operating system maintains the disk sets, at some cost to server performance. Windows Server 2003 supports RAID 0 (disk striping), RAID 1 (disk mirroring), and RAID 5 (disk striping with parity). Each of these software-implemented RAID levels requires processing power and memory resources to maintain. With hardware RAID, separate hardware controllers (RAID controllers) maintain the disk arrays. Although this requires the purchase of additional hardware, it takes the burden off the server and can improve performance. Why? In a hardware-implemented RAID system, processing power and memory aren't used to maintain the disk arrays. Instead, the hardware RAID controller handles all the necessary processing tasks. Some hardware RAID controllers have integrated disk caching as well, which can give a further boost to overall RAID performance.
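
For the software RAID levels that Windows Server 2003 supports, DiskPart can create the volumes from the command line. The following script is a minimal sketch rather than a recommended layout: the disk numbers are examples, the disks must already have been converted to dynamic, and sizes are in megabytes:

    rem RAID 0: stripe a volume across two dynamic disks.
    create volume stripe size=10240 disk=1,2

    rem RAID 5: stripe with parity across three dynamic disks.
    create volume raid size=10240 disk=1,2,3

(RAID 1 mirrors are built differently: select an existing simple volume and use the ADD command to attach a mirror disk.)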

The RAID levels available with a hardware implementation depend on the hardware controller and the vendor's implementation of RAID technologies. Some hardware RAID configurations include RAID 0 (disk striping), RAID 1 (disk mirroring), RAID 0+1 (disk striping with mirroring), RAID 5 (disk striping with parity), and RAID 5+1 (disk striping with parity plus mirroring).

Note

For more information about the advantages and disadvantages of various RAID levels, see Table 18-1.
