NFS on Solaris

NFS was developed by Sun Microsystems, and it has been ported to most popular OSes. The implementation of NFS is large, and it varies from system to system. As the NFS service evolved, it went through a few different versions. Therefore, if you are using NFS to connect to another system, you need to be aware of the different versions of NFS.

NFS version 2 was the first version of the NFS protocol in wide use, and it continues to be available on a large variety of platforms. SunOS releases earlier than Solaris 2.5 support version 2 of the NFS protocol. NFS version 2 has several shortcomings, however. For example, UNIX-based servers moved to faster 64-bit implementations, and the 8KB transfer size used by NFS version 2 became a bottleneck for transferring data. Sun, Digital, IBM, Hewlett-Packard, and Data General worked together to address these and other problems and published NFS version 3 in 1995 as RFC 1813. The only time you should have to deal with version 2 is when you are working with an older operating system, such as Solaris 2.4 or HP-UX version 10.

NFS 3 was introduced with Solaris 2.5. Several changes have been made to improve interoperability and performance, including the following enhancements over version 2:

  • NFS 3 enables safe asynchronous writes onto the server, which improves performance by allowing the server to cache client write requests in memory. The client does not need to wait for the server to commit the changes to disk; therefore, the response time is faster.

  • The server can batch requests, which improves the response time on the server.

  • All NFS operations return the file attributes, which are stored in the local cache. Because the cache is updated more often, the need to do a separate operation to update this data arises less often. Therefore, the number of remote procedure calls to the server is reduced, improving performance.

  • The process for verifying file access permissions has been improved. In particular, version 2 would generate a message reporting a “write error” or a “read error” if users tried to copy a remote file to which they did not have permission. In version 3, the permissions are checked before the file is opened, so the error is reported as an “open error.”

  • Version 3 removes the 8KB transfer size limit and lets the client and server negotiate a maximum transfer size.

  • Access control list (ACL) support was added to the version 3 release. ACLs, described in Chapter 16, “System Security,” provide a finer-grained mechanism to set file access permissions than is available through standard UNIX file permissions.

  • The default transport protocol for the NFS protocol was changed from User Datagram Protocol (UDP) to Transmission Control Protocol (TCP), which helps performance on slow networks and wide area networks (WANs). UDP was preferred initially because it performed well on local area networks (LANs) and was faster than TCP. Although UDP benefited from the high bandwidth and low latency typical of LANs, it performed poorly when subjected to the low bandwidth and high latency of WANs, such as the Internet. In recent years, improvements in hardware and TCP implementations have narrowed this advantage enough that TCP implementations can now outperform UDP. A growing number of NFS implementations now support TCP. Unlike UDP, TCP provides congestion control and error recovery.

  • Version 3 improved the network lock manager, which provides UNIX record locking and PC file sharing for NFS files. The locking mechanism is now more reliable for NFS files; therefore, commands such as ksh and mail, which use locking, are less likely to hang.

It should be noted that to take advantage of these improvements, the version 3 protocol must be running on both the NFS server and the NFS clients.

With Solaris 2.6, the NFS 3 protocol went through still more enhancements:

  • Correct manipulation of files larger than 2GB, which was not formerly possible.

  • A 32KB default transfer size. The larger transfer size reduces the number of NFS requests required to move a given quantity of data, providing better use of network bandwidth and I/O resources on clients and servers. If the server supports it, a client can issue a read request that downloads a file in a single operation.

  • Dynamic failover of read-only file systems, which provides a high level of availability. With failover, you specify multiple replicas of a file system; if the NFS server currently in use goes down, the mount switches to a replica on an alternative server.

  • The capability of WebNFS to make a file system on the Internet accessible through firewalls using an extension to the NFS protocol. WebNFS provides greater throughput under a heavy load than Hypertext Transfer Protocol (HTTP) access to a web server. In addition, it provides the capability to share files over the Internet without the administrative overhead of an anonymous File Transfer Protocol (FTP) site. WebNFS is described later in this chapter.

  • NFS server logging, which allows an NFS server to provide a record of file operations performed on its file systems. The record includes information to track what is accessed, when it is accessed, and who accessed it. You can specify the location of the logs that contain this information through a set of configuration options. This feature is particularly useful for sites that make anonymous FTP archives available to NFS and WebNFS clients.

Note

When using NFS, make sure that the systems you’ll be connecting to are all running the same version of NFS. You might experience problems if your system is running NFS version 2 and the system to which you are trying to connect is running version 3.
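
If you suspect a version mismatch, you can check which version a client actually negotiated for each of its NFS mounts by running nfsstat -m on the client and looking at the vers= value in the output. If you need to interoperate with an older server, the client can also be forced down to a specific version with the vers= mount option. The following commands are only a sketch; the server name apollo and the mount point /mnt are placeholders:

nfsstat -m
mount -F nfs -o vers=2 apollo:/export /mnt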


NFS Daemons

NFS uses a number of daemons to handle its services. These services are initialized at startup from the /etc/init.d/nfs.server and /etc/init.d/nfs.client startup scripts. The most important NFS daemons are outlined in Table 22.1.

Table 22.1. NFS Daemons
Daemon Description
nfsd Handles file system exporting and file access requests from remote systems. An NFS server runs multiple instances of this daemon. This daemon is usually invoked at run level 3 and is started by the /etc/init.d/nfs.server startup script.
mountd Handles mount requests from NFS clients. This daemon also provides information about which file systems are mounted by which clients. Use the showmount command, described later in this chapter, to view this information. This daemon is usually invoked at run level 3 and is started by the /etc/init.d/nfs.server startup script.
lockd Runs on the NFS server and NFS client and provides file-locking services in NFS. This daemon is started by the /etc/init.d/nfs.client script at run level 2.
statd Runs on the NFS server and NFS client and interacts with lockd to provide the crash and recovery functions for the locking services on NFS. This daemon is started by the /etc/init.d/nfs.client script at run level 2.
rpcbind Facilitates the initial connection between the client and the server.
nfslogd Provides operational logging to the Solaris NFS server. nfslogd is described later in this chapter.
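
As a quick check (not a required step), you can verify that these daemons are running by listing them with ps. The exact set you see depends on whether the system is acting as an NFS server, an NFS client, or both:

ps -ef | egrep 'nfsd|mountd|lockd|statd|rpcbind'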

Setting Up NFS

Servers let other systems access their file systems by sharing them over the NFS environment. A shared file system is referred to as a shared resource. You specify which file systems are to be shared by entering the information in a file called /etc/dfs/dfstab. Entries in this file are shared automatically whenever you start the NFS server operation. You should set up automatic sharing if you need to share the same set of file systems on a regular basis. Most file system sharing should be done automatically; the only time manual sharing should occur is during testing or troubleshooting.

The /etc/dfs/dfstab file lists all the file systems that your NFS server shares with its NFS clients. It also controls which clients can mount a file system. If you want to modify /etc/dfs/dfstab to add or delete a file system or to modify the way sharing is done, edit the file with a text editor, such as vi or textedit. The next time the computer enters run level 3, the system reads the updated /etc/dfs/dfstab to determine which file systems should be shared automatically.

Each noncomment line in the dfstab file consists of a share command. You can display the file as shown in the following example:

more /etc/dfs/dfstab 

The system displays the contents of /etc/dfs/dfstab:

#       Place share(1M) commands here for automatic execution 
#       on entering init state 3. 
# 
#       Issue the command '/etc/init.d/nfs.server start' to run the NFS 
#       daemon processes and the share commands, after adding the very 
#       first entry to this file. 
# 
#       share [-F fstype] [ -o options] [-d "<text>"] <pathname> [resource] 
#       .e.g, 
#       share  -F nfs  -o rw=engineering  -d "home dirs"  /export/home2 
share -F nfs /export 
share -F nfs /cdrom/solaris_srvr_intranet_ext_1_0

The /usr/sbin/share command exports a resource or makes a resource available for mounting. If it is invoked with no arguments, share displays all shared file systems. The share command, described in Table 22.2, can be run at the command line to achieve the same results as the /etc/dfs/dfstab file, but use this method only when testing.

The syntax for the share command is shown here:

share -F <FSType> -o <options> -d <description> <pathname> 

<pathname> is the name of the file system to be shared.

Table 22.2. The share Command
Option Description
-F <FSType> Specify the file system type, such as nfs. If the -F option is omitted, the first file system type listed in /etc/dfs/fstypes is used as the default.
-o <options> Select from the following options:

rw pathname is shared read/write to all clients. This is also the default behavior.

rw=client[:client]... pathname is shared read/write, but only to the listed clients. No other systems can access pathname.

ro pathname is shared read-only to all clients.

ro=client[:client]... pathname is shared read-only, but only to the listed clients. No other systems can access pathname.

aclok Allows the NFS server to do access control for NFS version 2 clients (running Solaris 2.4 or earlier). When aclok is set on the server, maximum access is given to all clients. For example, with aclok set, if anyone has read permissions, everyone does. If aclok is not set, minimal access is given to all clients.

anon=<uid> Sets uid to be the effective user ID of unknown users. By default, unknown users are given the effective uid of nobody. If uid is set to -1, access is denied.

index=<file> Loads a file rather than a listing of the directory containing this specific file when the directory is referenced by an NFS uniform resource locator (URL). See the section “WebNFS” later in this chapter.

nosub Prevents clients from mounting subdirectories of shared directories.

nosuid The server file system silently ignores any attempt to enable the setuid or setgid mode bits. By default, clients can create files on the shared file system with the setuid or setgid mode enabled. See Chapter 16 for a description of setuid and setgid.

public Enables NFS browsing of the file system by a WebNFS-enabled browser. Only one file system per server can use this option. The ro=<list> and rw=<list> options can be included with this option.

root=host[:host]... Only root users from the specified hosts have root access. By default, no host has root access, so root users are mapped to an anonymous user ID (see the previous description of the anon=<uid> option).

sec=<mode> Uses one or more of the security modes specified by <mode> to authenticate clients. The <mode> option establishes the security mode of NFS servers. If the NFS connection uses the NFS version 3 protocol, the NFS clients must query the server for the appropriate <mode> to use. If the NFS connection uses the NFS version 2 protocol, the NFS client uses the default security mode, which is currently sys. NFS clients may force the use of a specific security mode by specifying the sec=<mode> option on the command line. However, if the file system on the server is not shared with that security mode, the client may be denied access. Valid modes are as follows:

sys Use AUTH_SYS authentication. The user’s UNIX user ID and group IDs are passed in the clear on the network, unauthenticated by the NFS server.

dh Use a Diffie-Hellman public key system.

krb5 Use the Kerberos V5 protocol to authenticate users before granting access to the shared file system.

krb5i Use Kerberos V5 authentication with integrity checking. krb5i uses checksums to verify that the data has not been modified or tampered with.

krb5p Use Kerberos V5 authentication, integrity checksums, and privacy protection (encryption) on the shared file system. This option provides the most secure file system sharing because it encrypts all traffic. Using this option could degrade performance on some systems, however, depending on the intensity of the encryption algorithm and the amount of data that is being transferred.

none Use null authentication.

log=<tag> Enables NFS server logging for the specified file system. The optional <tag> determines the location of the related log files. The tag is defined in /etc/nfs/nfslog.conf. If no tag is specified, the default values associated with the global tag in /etc/nfs/nfslog.conf are used. NFS logging is described later in this chapter.
-d <description> Provides a description of the resource being shared.
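
The options listed in Table 22.2 can be combined in a single share command by separating them with commas after -o. The following line is only a sketch; the host names and the /export/projects path are placeholders:

share -F nfs -o ro=apollo,rw=neptune:zeus,anon=-1 -d "project data" /export/projects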

To share a file system as read-only every time the system is started up, add this line to the /etc/dfs/dfstab file:

share -F nfs -o ro /data 

After editing the /etc/dfs/dfstab file, start the NFS server daemons either by rebooting the system or by typing this:

/etc/init.d/nfs.server start 

You need to start the nfs.server script only after you make the first entry in the /etc/dfs/dfstab file. This is because, at startup, when the system enters run level 3, mountd and nfsd are not started if the /etc/dfs/dfstab file is empty. After you have made an initial entry and have executed the nfs.server script, you can modify /etc/dfs/dfstab without restarting the daemons. You simply execute the shareall command, and any new entries in the /etc/dfs/dfstab file are shared.
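
For example, suppose you append the line share -F nfs -o ro /export/docs to /etc/dfs/dfstab (the /export/docs path is just an illustration). You can then activate the new entry and confirm the result without restarting the daemons:

shareall
share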

Note

Even if you share a file system from the command line by typing the share command, mountd and nfsd won’t run until you make an entry in /etc/dfs/dfstab and run the nfs.server script.


After you have at least one entry in the /etc/dfs/dfstab file and after both mountd and nfsd are running, you can share additional file systems by typing the share command directly from the command line. Be aware, however, that if you don’t add the entry to the /etc/dfs/dfstab file, the file system is not automatically shared the next time the system is restarted.

The dfshares command displays information about the shared resources available to the host from an NFS server. Here is the syntax for dfshares:

dfshares <servername> 

You can view the shared file systems on a remote NFS server by using the dfshares command, as follows:

dfshares apollo 

If no servername is specified, all resources currently being shared on the local host are displayed. Another place to find information on shared resources is in the server’s /etc/dfs/sharetab file. This file contains a list of the resources currently being shared.
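
On the server itself, you can cross-check the currently shared resources in both places; the output formats differ slightly, but they should list the same file systems:

dfshares
cat /etc/dfs/sharetab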

NFS Security

With NFS, you need to be concerned about security. When you issue the share command with no access restrictions, any system on your network can mount the file system. It’s a good idea to be more specific about who can mount the file system. The following examples illustrate how to set up a share with restrictions on which hosts can mount the shared resource:

share -F nfs -o ro=apollo:neptune:zeus /data 

The file system named /data is shared read-only to the listed clients only. No other systems can access /data. Another method is to share a file system read-only to some hosts and read/write to others. Use the following command to accomplish this:

share -F nfs -o ro=apollo,rw=neptune:zeus /data

In this example, apollo has read-only access, and neptune and zeus have read/write access.

The next example specifies that root access be granted to the client named zeus. A root user coming from any other system is recognized only as nobody and has limited access rights:

share -F nfs -o root=zeus /data 

Caution

Root permissions should not be enabled on an NFS file system. Administrators might find this inconvenient when they’re trying to modify a file through an NFS mount, but it prevents disastrous mistakes. For example, suppose a root user wants to purge a local directory called /data and runs rm -rf * inside it. If an NFS file system shared with root permission is mounted under /data, such as /data/thor, the files located on the NFS server are wiped out as well.


To remove a shared file system, issue the unshare command on the server, as follows:

unshare /data 

The /data file system is no longer shared. You can verify this by issuing the share command with no options:

share 

The system responds with this:

/home   ro,anon=0   "" 

Only the file system named /home is returned as shared.

Note

If share commands are invoked multiple times on the same file system, the last share invocation supersedes the previous ones. The options set by the last share command replace the old options.
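
For example, in the following sketch the second command replaces the first, so /data ends up shared read-only to all clients rather than read/write to neptune and zeus:

share -F nfs -o rw=neptune:zeus /data
share -F nfs -o ro /data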

