List: linux-nfs
Subject: [NFS] Another HOWTO
From: Tavis Barr <tavis () mahler ! econ ! columbia ! edu>
Date: 2000-12-22 13:31:46
OK, here's the HOWTO after another round of corrections and additions,
and, no doubt, a few mistakes introduced by the process. I'm not going
to call this a final draft, since a HOWTO is never finished, but I do
hope it's ready to be marked up and posted and sent off to LDP.
Comments very welcome.
Cheers,
Tavis
LINUX-NFS HOWTO
[DRAFT UPDATE]
[Updated to reflect 2.2.18+ and 2.4 kernels]
[December 22, 2000]
by Tavis Barr, Nicolai Langfeldt, and Seth Vidal
TABLE OF CONTENTS:
CHAPTER 1. PREAMBLE
CHAPTER 2. INTRODUCTION
CHAPTER 3. SETTING UP AN NFS SERVER
CHAPTER 4. SETTING UP AN NFS CLIENT
CHAPTER 5. OPTIMIZING NFS PERFORMANCE
CHAPTER 6. SECURITY AND NFS
CHAPTER 7. TROUBLESHOOTING
CHAPTER 8. USING LINUX NFS WITH OTHER OSes
1. PREAMBLE
1.1. Legal stuff
(C)opyright 2000 Tavis Barr, Nicolai Langfeldt, and Seth Vidal.
Do not modify without appending copyright, distribute freely but
retain this paragraph.
1.2. Disclaimer
This document is provided without any guarantees, including
merchantability or fitness for a particular use. The maintainers
cannot be responsible if following instructions in this document
leads to damaged equipment or data, angry neighbors, strange habits,
divorce, or any other calamity.
1.3. Feedback
This will never be a finished document; we welcome feedback about
how it can be improved. As of October 2000, the Linux NFS home
page is being hosted at http://nfs.sourceforge.net. Check there
for mailing lists, bug fixes, and updates, and also to verify
who currently maintains this document.
1.4. Translation
If you are able to translate this document into another language,
we would be grateful and we will also do our best to assist you.
Please notify the maintainers.
1.5. Dedication
NFS on Linux was made possible by a collaborative effort of many
people, but a few stand out for special recognition. The original
version was developed by Olaf Kirch and Alan Cox. The version 3
server code was solidified by Neil Brown, based on work from
Saadia Khan, James Yarbrough, Allen Morris, H.J. Lu, and others
(including himself). The client code was written by Olaf Kirch and
updated by Trond Myklebust. The version 4 lock manager was developed
by Saadia Khan. Dave Higgen and H.J. Lu both have undertaken the
thankless job of extensive maintenance and bug fixes to get the
code to actually work the way it was supposed to. H.J. has also
done extensive development of the nfs-utils package. Of course this
dedication is leaving many people out.
The original version of this document was developed by Nicolai
Langfeldt. It was heavily rewritten in 2000 by Tavis Barr
and Seth Vidal to reflect substantial changes in the workings
of NFS for Linux developed between the 2.0 and 2.4 kernels.
Thomas Emmel, Neil Brown, Trond Myklebust, Erez Zadok, and Ion Badulescu
also provided valuable comments and contributions.
2. INTRODUCTION
2.1 What is NFS?
The Network File System (NFS) was developed to allow machines
to mount a disk partition on a remote machine as if it were on
a local hard drive. This allows for fast, seamless sharing of
files across a network.
It also gives the potential for unwanted people to access your
hard drive over the network (and thereby possibly read your email
and delete all your files as well as break into your system) if
you set it up incorrectly. So please read the Security section of
this document carefully if you intend to implement an NFS setup.
There are other systems that provide similar functionality to NFS.
Samba provides file services to Windows clients. The Andrew File
System from IBM (http://www.transarc.com/Product/EFS/AFS/index.html),
recently open-sourced, provides a file sharing mechanism with some
additional security and performance features. The Coda File System
(http://www.coda.cs.cmu.edu/) is still in development as of this writing
but is designed to work well with disconnected clients. Many of the
features of the Andrew and Coda file systems are slated for inclusion
in the next version of NFS (Version 4) (http://www.nfsv4.org). The
advantage of NFS today is that it is mature, standard, well understood,
and supported robustly across a variety of platforms.
2.2 What is this HOWTO and what is it not?
This HOWTO is intended as a complete, step-by-step guide to setting
up NFS correctly and effectively. Setting up NFS involves two steps,
namely configuring the server and then configuring the client. Each
of these steps is dealt with in order. The document then offers
some tips for people with particular needs and hardware setups, as
well as security and troubleshooting advice.
This HOWTO is not a description of the guts and
underlying structure of NFS. For that you may wish to read
_Managing NFS and NIS_ by Hal Stern, published by O'Reilly &
Associates, Inc. While that book is severely out of date, much
of the structure of NFS has not changed, and the book describes it
very articulately. A much more advanced and up-to-date technical
description of NFS is available in _NFS Illustrated_ by Brent Callaghan.
This document is also not intended as a complete reference manual,
and does not contain an exhaustive list of the features of Linux
NFS. For that, you can look at the man pages for nfs(5), exports(5),
mount(8), fstab(5), nfsd(8), lockd(8), statd(8), rquotad(8), and
mountd(8).
It will also not cover PC-NFS, which is considered obsolete (users
are encouraged to use Samba to share files with PCs) or NFS
Version 4, which is still in development.
2.3 Knowledge Pre-Requisites
You should know some basic things about TCP/IP networking before
reading this HOWTO; if you are in doubt, read the Networking-
Overview-HOWTO.
2.4 Software Pre-Requisites: Kernel Version and nfs-utils
The difference between Version 2 NFS and Version 3 NFS will be
explained later on; for now, you might simply take the suggestion
that you will need NFS Version 3 if you are installing a dedicated
or high-volume file server. NFS Version 2 should be fine for
casual use.
NFS Version 2 has been around for quite some time now (at least
since the 1.2 kernel series); however, you will need a kernel version
of at least 2.2.18 if you wish to do any of the following:
* Mix Linux NFS with other operating systems' NFS
* Use file locking reliably over NFS
* Use NFS Version 3.
There are also patches available for kernel versions above 2.2.14
that provide the above functionality. Some of them can be downloaded
from the Linux NFS homepage. If your kernel version is 2.2.14-
2.2.17 and you have the source code on hand, you can tell if these
patches have been added because NFS Version 3 server support will be
a configuration option. However, unless you have some particular
reason to use an older kernel, you should upgrade because many bugs
have been fixed along the way.
Version 3 functionality will also require the nfs-utils package of
at least version 0.2.1, and mount version 2.10m or newer. However
because nfs-utils and mount are fully backwards compatible, and because
newer versions have lots of security and bug fixes, there is no good
reason not to install the newest nfs-utils and mount packages if you
are beginning an NFS setup.
All 2.4 and higher kernels have full NFS Version 3 functionality.
All kernels after 2.2.18 support NFS over TCP on the client side.
As of this writing, server-side NFS over TCP only exists in the
later 2.2 series (but not yet in the 2.4 kernels), is considered
experimental, and is somewhat buggy.
Because so many of the above functionalities were introduced in
kernel version 2.2.18, this document was written to be consistent
with kernels above this version (including 2.4.x). If you have an
older kernel, this document may not describe your NFS system
correctly.
As we write this document, NFS version 4 is still in development
as a protocol, and it will not be dealt with here.
2.5. Where to get help and further information
As of December 2000, the Linux NFS homepage is at
http://nfs.sourceforge.net. Please check there for NFS related
mailing lists as well as the latest version of nfs-utils, NFS
kernel patches, and other NFS related packages.
You may also wish to look at the man pages for nfs(5), exports(5),
mount(8), fstab(5), nfsd(8), lockd(8), statd(8), rquotad(8), and
mountd(8).
3. SETTING UP AN NFS SERVER
It is assumed that you will be setting up both a server and a
client. If you are just setting up a client to work off of
somebody else's server (say in your department), you can skip
to Section 4. However, every client that is set up requires
modifications on the server to authorize that client (unless
the server setup is done in a very insecure way), so even if you
are not setting up a server you may wish to read this section to
get an idea what kinds of authorization problems to look out for.
Setting up the server will be done in two steps: Setting up the
configuration files for NFS, and then starting the NFS services.
3.1 Setting up the Configuration Files
There are three main configuration files you will need to edit to
set up an NFS server: /etc/exports, /etc/hosts.allow, and
/etc/hosts.deny. Strictly speaking, you only need to edit
/etc/exports to get NFS to work, but you would be left with an
extremely insecure setup. You may also need to edit your startup
scripts; see Section 3.2.3 for more on that.
3.1.1. /etc/exports
This file contains a list of entries; each entry indicates a volume
that is shared and how it is shared. Check the man pages ("man
exports") for a complete description of all the setup options for
the file, although the description here will probably satisfy
most people's needs.
An entry in /etc/exports will typically look like this:
directory machine1(option11,option12) machine2(option21,option22)
where
- directory is the directory that you want to share. It may be an
entire volume though it need not be. If you share a directory,
then all directories under it within the same file system will
be shared as well.
- machine1 and machine2 are client machines that will have access
to the directory. The machines may be listed by their IP address
or their DNS address (e.g., machine.company.com or 192.168.0.8).
Using IP addresses is more reliable and more secure.
- the option listing for each machine will describe what kind of
access that machine will have. Important options are:
- ro: The directory is shared read only; the client machine
will not be able to write to it. This is the default.
- rw: The client machine will have read and write access to the
directory.
- no_root_squash: By default, any file request made by user root
on the client machine is treated as if it is made by user
nobody on the server. (Exactly which UID the request is
mapped to depends on the UID of user "nobody" on the server,
not the client.) If no_root_squash is selected, then
root on the client machine will have the same level of access
to the files on the system as root on the server. This
can have serious security implications, although it may be
necessary if you want to perform any administrative work on
the client machine that involves the exported directories.
You should not specify this option without a good reason.
- no_subtree_check: If only part of a volume is exported, a
routine called subtree checking verifies that a file that is
requested from the client is in the appropriate part of the
volume. If the entire volume is exported, disabling this check
will speed up transfers.
- sync: By default, a Version 2 NFS server will tell a client
machine that a file write is complete when NFS has finished
handing the write over to the filesystem; however, the file
system may not sync it to the disk, even if the client makes
a sync() call on the file system. The default behavior may
therefore cause file corruption if the server reboots. This
option forces the filesystem to sync to disk every time NFS
completes a write operation. It slows down write times
substantially but may be necessary if you are running NFS
Version 2 in a production environment. Version 3 NFS has
a commit operation that the client can call that
actually will result in a disk sync on the server end.
Suppose we have two client machines, slave1 and slave2, that have IP
addresses 192.168.0.1 and 192.168.0.2, respectively. We wish to share
our software binaries and home directories with these machines.
A typical setup for /etc/exports might look like this:
/usr/local 192.168.0.1(ro) 192.168.0.2(ro)
/home 192.168.0.1(rw) 192.168.0.2(rw)
Here we are sharing /usr/local read-only to slave1 and slave2,
because it probably contains our software and there may not be
benefits to allowing slave1 and slave2 to write to it that outweigh
security concerns. On the other hand, home directories need to be
exported read-write if users are to save work on them.
If you have a large installation, you may find that you have a bunch
of computers all on the same local network that require access to
your server. There are a few ways of simplifying references
to large numbers of machines. First, you can give access to a range
of machines at once by specifying a network and a netmask. For
example, if you wanted to allow access to all the machines with IP
addresses between 192.168.0.0 and 192.168.0.255 then you could have
the entries:
/usr/local 192.168.0.0/255.255.255.0(ro)
/home 192.168.0.0/255.255.255.0(rw)
See the Networking-Overview HOWTO for further information about how
netmasks work, and you may also wish to look at the man pages for
init and hosts.allow.
Second, you can use NIS netgroups in your entry. To specify a
netgroup in your exports file, simply prepend the name of the
netgroup with an "@". See the NIS HOWTO for details on how
netgroups work.
Third, you can use wildcards such as *.foo.com or 192.168. instead
of hostnames.
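For illustration, an /etc/exports that mixes all three styles might
look like the following (the netgroup name @trusted and the wildcard
domain foo.com are made-up examples, not anything you will have on
your system):
/usr/local 192.168.0.0/255.255.255.0(ro)
/home @trusted(rw)
/pub *.foo.com(ro)
Any one of these styles is usually enough; they are mixed here only
to show the syntax.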
However, you should keep in mind that any of these simplifications
could cause a security risk if there are machines in your netgroup
or local network that you do not trust completely.
A few cautions are in order about what cannot (or should not) be
exported. First, if a directory is exported, its parent and child
directories cannot be exported if they are in the same filesystem.
However, exporting both should not be necessary because listing the
parent directory in the /etc/exports file will cause all underlying
directories within that file system to be exported.
Second, it is a poor idea to export a FAT or VFAT (i.e., MS-DOS or
Windows 95/98) filesystem with NFS. FAT is not designed for use on a
multi-user machine, and as a result, operations that depend on
permissions will not work well. Moreover, some of the underlying
filesystem design is reported to work poorly with NFS's expectations.
Third, device or other special files may not export correctly to non-
Linux clients. See the Interoperability chapter for details on
particular operating systems.
3.1.2 /etc/hosts.allow and /etc/hosts.deny
These two files specify which computers on the network can use
services on your machine. Each line of the file is an entry listing
a service and a set of machines. When the server gets a request
from a machine, it does the following:
1. It first checks hosts.allow to see if the machine
matches a description listed in there. If it does, then the machine
is allowed access.
2. If the machine does not match an entry in hosts.allow, the
server then checks hosts.deny to see if the client matches a
listing in there. If it does then the machine is denied access.
3. If the client matches no listings in either file, then it
is allowed access.
In addition to controlling access to services handled by inetd (such
as telnet and FTP), this file can also control access to NFS
by restricting connections to the daemons that provide NFS services.
Restrictions are done on a per-service basis.
The first daemon to restrict access to is the portmapper. This daemon
essentially just tells requesting clients how to find all the NFS
services on the system. Restricting access to the portmapper is the
best defense against someone breaking into your system through NFS
because completely unauthorized clients won't know where to find the
NFS daemons. However, there are two things to watch out for. First,
restricting portmapper isn't enough if the intruder already knows
for some reason how to find those daemons. And second, if you are
running NIS, restricting portmapper will also restrict requests to NIS.
That should be harmless, since you will usually want
to restrict NFS and NIS in a similar way, but be aware that it happens.
(Running NIS is generally a good idea if you are running NFS, because
the client machines need a way of knowing who owns what files on the
exported volumes. Of course there are other ways of doing this such
as syncing password files. See the NIS-HOWTO for information on
setting up NIS.)
In general it is a good idea with NFS (as with most internet services)
to explicitly deny access to any host that does not specifically
need it.
The first step in doing this is to add the following entry to
/etc/hosts.deny:
portmap:ALL
Starting with nfs-utils 0.2.0, you can be a bit more careful by
controlling access to individual daemons. It's a good precaution
since an intruder will often be able to weasel around the portmapper.
If you have a newer version of NFS-utils, add entries for each of the
NFS daemons (see the next section to find out what these daemons are;
for now just put entries for them in hosts.deny):
lockd:ALL
mountd:ALL
rquotad:ALL
statd:ALL
Even if you have an older version of nfs-utils, adding these entries
is at worst harmless (since they will just be ignored) and at best
will save you some trouble when you upgrade. Some sys admins choose
to put the entry ALL:ALL in the file /etc/hosts.deny, which causes
any service that looks at these files to deny access to all
hosts unless it is explicitly allowed. While this is more secure
behavior, it may also get you in trouble later, when you install a
new service, forget you put the entry there, and cannot figure out
for the life of you why it won't work.
Next, we need to add an entry to hosts.allow to give any hosts
access that we want to have access. (If we just leave the above
lines in hosts.deny then nobody will have access to NFS.) Entries
in hosts.allow follow the format
service: host [or network/netmask] , host [or network/netmask]
Here, host is the IP address of a potential client; it may be possible
in some versions to use the DNS name of the host, but it is strongly
deprecated.
Suppose we have the setup above and we just want to allow access
to slave1.foo.com and slave2.foo.com, and suppose that the IP
addresses of these machines are 192.168.0.1 and 192.168.0.2,
respectively. We could add the following entry to /etc/hosts.allow:
portmap: 192.168.0.1 , 192.168.0.2
For recent nfs-utils versions, we would also add the following
(again, these entries are harmless even if they are not supported):
lockd: 192.168.0.1 , 192.168.0.2
rquotad: 192.168.0.1 , 192.168.0.2
mountd: 192.168.0.1 , 192.168.0.2
statd: 192.168.0.1 , 192.168.0.2
If you intend to run NFS on a large number of machines in a local
network, /etc/hosts.allow also allows for network/netmask style
entries in the same manner as /etc/exports above.
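For example, to open the portmapper to our whole example network
rather than listing each client separately, the entry might read:
portmap: 192.168.0.0/255.255.255.0
The same style works for the lockd, mountd, rquotad, and statd
entries.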
3.2. Getting the services started
3.2.1. Pre-requisites
The NFS server should now be configured and we can start it running.
First, you will need to have the appropriate packages installed.
This consists mainly of a new enough kernel and a new enough version
of the nfs-utils package. See Section 2.4 if you are in doubt.
Next, before you can start NFS, you will need to have TCP/IP
networking functioning correctly on your machine. If you can use
telnet, FTP, and so on, then chances are your TCP networking is fine.
That said, with most recent Linux distributions you may be able to
get NFS up and running simply by rebooting your machine, and the
startup scripts should detect that you have set up your /etc/exports
file and will start up NFS correctly. If you try this, see Section
3.3 (Verifying that NFS is running). If this does not work, or if
you are not in a position to reboot your machine, then the following
section will tell you which daemons need to be started in order to
run NFS services. If for some reason nfsd was already running when
you edited your configuration files above, you will have to flush
your configuration; see Section 3.4 for details.
3.2.2 Starting the Portmapper
NFS depends on the portmapper daemon, either called portmap or
rpc.portmap. It will need to be started first. It should be
located in /sbin but is sometimes in /usr/sbin. Most recent Linux
distributions start this daemon in the boot scripts, but it is
worth making sure that it is running before you begin working with
NFS (just type "ps aux | grep portmap").
3.2.3 The Daemons
NFS serving is taken care of by five daemons: rpc.nfsd, which does
most of the work; rpc.lockd and rpc.statd, which handle file locking;
rpc.mountd, which handles the initial mount requests; and
rpc.rquotad, which handles user file quotas on exported volumes.
Starting with 2.2.18, lockd is called by nfsd upon demand, so you do
not need to worry about starting it yourself. statd will need to be
started separately. Most recent Linux distributions will
have startup scripts for these daemons.
The daemons are all part of the nfs-utils package, and may be either
in the /sbin directory or the /usr/sbin directory.
If your distribution does not include them in the startup scripts,
then you should add them, configured to start in the following
order:
rpc.portmap
rpc.mountd, rpc.nfsd
rpc.statd, rpc.lockd (if necessary), rpc.rquotad
The nfs-utils package has sample startup scripts for RedHat and
Debian. If you are using a different distribution, in general you
can just copy the RedHat script, but you will probably have to take
out the line that says
. ../init.d/functions
to avoid getting error messages.
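As a rough sketch, that ordering could be expressed in a bare-bones
startup script like the one below. The paths and the messages it
prints are illustrative assumptions, not anything shipped with
nfs-utils; the sample scripts in that package are more complete.

```shell
#!/bin/sh
# Sketch: walk the NFS daemons in the startup order described above.
# Binaries vary between /sbin and /usr/sbin, so we check both.
ORDER="rpc.portmap rpc.mountd rpc.nfsd rpc.statd rpc.lockd rpc.rquotad"
for daemon in $ORDER; do
    started=no
    for dir in /sbin /usr/sbin; do
        if [ -x "$dir/$daemon" ]; then
            # a real init script would run "$dir/$daemon" here
            echo "would start: $dir/$daemon"
            started=yes
            break
        fi
    done
    if [ "$started" = no ]; then
        echo "$daemon not installed, skipping"
    fi
done
```

A real script would also handle a "stop" argument by killing the
daemons in roughly the reverse order.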
3.3 Verifying that NFS is running
To do this, query the portmapper with the command "rpcinfo -p" to
find out what services it is providing. You should get something
like this:
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100011 1 udp 749 rquotad
100011 2 udp 749 rquotad
100005 1 udp 759 mountd
100005 1 tcp 761 mountd
100005 2 udp 764 mountd
100005 2 tcp 766 mountd
100005 3 udp 769 mountd
100005 3 tcp 771 mountd
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
300019 1 tcp 830 amd
300019 1 udp 831 amd
100024 1 udp 944 status
100024 1 tcp 946 status
100021 1 udp 1042 nlockmgr
100021 3 udp 1042 nlockmgr
100021 4 udp 1042 nlockmgr
100021 1 tcp 1629 nlockmgr
100021 3 tcp 1629 nlockmgr
100021 4 tcp 1629 nlockmgr
This says that we have NFS versions 2 and 3, rpc.statd version 1,
and the network lock manager (the service name for rpc.lockd)
versions 1, 3, and 4. There are also different service listings
depending on
whether NFS is travelling over TCP or UDP. Linux systems use UDP
by default unless TCP is explicitly requested; however other OSes
such as Solaris default to TCP.
If you do not at least see a line that says "portmapper", a line
that says "nfs", and a line that says "mountd" then you will need
to backtrack and try again to start up the daemons (see Chapter 7,
Troubleshooting, if this still doesn't work).
If you do see these services listed, then you should be ready to
set up NFS clients to access files from your server.
3.4 Making changes to /etc/exports later on
If you come back and change your /etc/exports file, the changes you
make may not take effect immediately. You should run the command
exportfs -ra to force nfsd to re-read the /etc/exports file. If you
can't find the exportfs command, then you can kill nfsd with the
-HUP flag (see the man pages for kill for details).
If that still doesn't work, don't forget to check hosts.allow to
make sure you haven't forgotten to list any new client machines
there. Also check the host listings on any firewalls you may have
set up (see the Troubleshooting chapter for more details on firewalls
and NFS).
4. SETTING UP AN NFS CLIENT
4.1 Mounting remote directories
Before beginning, you should double-check to make sure your mount
program is new enough (version 2.10m if you want to use Version 3
NFS), and that the client machine supports NFS mounting, though most
standard distributions do. If you are using a 2.2 or later kernel
with the /proc filesystem you can check for NFS in two steps:
First, type "modprobe nfs" in case NFS was built as a kernel module,
to ensure that it is loaded. If modprobe gives you an error you can
simply ignore it. Now, check the file /proc/filesystems and make sure
there is a line containing nfs. If there is not, you will need to
build (or download) a kernel that has NFS support built in.
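The two-step check might look like this at a root prompt (the
"support is present" messages are our own wording, not anything the
kernel prints):

```shell
#!/bin/sh
# Load the NFS module if it exists; any error here is harmless,
# since NFS may be built into the kernel instead.
modprobe nfs 2>/dev/null
# Now ask the kernel whether it knows about the nfs filesystem type.
if grep -q nfs /proc/filesystems; then
    echo "NFS support is present"
else
    echo "no NFS support; rebuild or reinstall your kernel"
fi
```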
To begin using a machine as an NFS client, you will need the portmapper
running on that machine, and to use NFS file locking, you will
also need rpc.statd and rpc.lockd running on both the client
and the server. Most recent distributions start rpc.statd by
default at boot time; if yours doesn't, see Section 3.2 for
information on how to start it up. The kernel should start rpc.lockd
by itself as needed.
There is another daemon, rpciod, that runs on the client and works
to optimize the order in which network packets are sent and received.
It is started by the kernel and does not have any separate binary
files of its own, so you don't need to worry about it too much.
But at least now you know what it is in case you see it running.
With portmapper, lockd, and statd running, you should now be able to
mount the remote directory from your server just the way you mount
a local hard drive, with the mount command. Continuing our example
from the previous chapter, suppose our server above is called
master.foo.com, and we want to mount the /home directory on
slave1.foo.com. Then, all we have to do, from the root prompt on
slave1, is type:
# mount master.foo.com:/home /mnt/home
and the directory /home on master will appear as the directory
/mnt/home on slave1.
If this does not work, see the Troubleshooting chapter.
You can get rid of the file system by typing
# umount /mnt/home
just like you would for a local file system.
4.2 Getting NFS File Systems to Be Mounted at Boot Time
NFS file systems can be added to your /etc/fstab file the same way
local file systems can, so that they mount when your system starts
up. The only difference is that the file system type will be
set to "nfs" and the dump and fsck order (the last two entries) will
have to be set to zero. So for our example above, the entry in
/etc/fstab would look like:
# device mountpoint fs-type options dump fsckorder
...
master.foo.com:/home /mnt/home nfs rw 0 0
...
See the man pages for fstab if you are unfamiliar with the syntax
of this file. If you are using an automounter such as amd or autofs,
the options in the corresponding fields of your mount listings
should look very similar if not identical.
At this point you should have NFS working, though a few tweaks
may still be necessary to get it to work well. You should also
read Chapter 6 to be sure your setup is reasonably secure.
4.3. Mount options
4.3.1 Soft vs. Hard Mounting
There are some options you should consider adding right away. They
govern the way the NFS client handles a server crash or network
outage. One of the cool things about NFS is that it can handle this
gracefully, if you set up the clients right. There are two distinct
failure modes:
- soft: If a file request fails, the NFS client will report an
error to the process on the client machine requesting the file
access. Some programs can handle this with composure, but most
won't. We do not recommend using this setting; it is a recipe
for corrupted files and lost data. You should especially not
use this for mail disks --- if you value your mail, that is.
- hard: The program accessing a file on an NFS-mounted file system
will hang when the server crashes. The process cannot be
interrupted or killed (except by a "sure kill") unless you also
specify intr. When the NFS server is back online the program will
continue undisturbed from where it was. We recommend using
hard,intr on all NFS-mounted file systems.
4.3.2 Setting Block Size to Optimize Transfer Speeds
The rsize and wsize mount options specify the size of the chunks of
data that the client and server pass back and forth to each other.
The defaults may be too big or too small; there is no single size
that works well on all or most setups. On the one hand, some
combinations of Linux kernels and network cards (largely on older
machines) cannot handle large blocks. On the other hand, if they
can handle larger blocks, a bigger size might be faster.
Getting the block size right is an important factor in performance and
is a must if you are planning to use the NFS server in a production
environment. See the Performance chapter for details.
Picking up from the previous example, the fstab entry might now
look like:
# device mountpoint fs-type options dump fsckord
...
master.foo.com:/home /mnt/home nfs rw,hard,intr,rsize=8192,wsize=8192 0 0
...
4.4 Synchronizing File Permissions
Now that you have mounted a file system from a remote machine, you may
need to think about how access to the files on that system is determined.
NFS simply exports the UID and GID of the file owner, along with the
permissions for the owning user, the owning group, and everybody.
It trusts the client to know how to use that information correctly.
However, the client may not. For example, one of the home directories in
/home on master.foo.com may be that of user billybob who has
UID 105. However, on slave1.foo.com, UID 105 is taken by user loribob.
In this case, loribob will be able to access all of billybob's files
on the NFS-mounted partition as if they were her own, and indeed if one
typed "ls -l" on billybob's home directory from the client machine,
loribob would be listed as the owner. The same problem exists for group
permissions.
For this reason, it is extremely important when using NFS to have a
system for synchronizing UIDs and GIDs across machines. One such
system is NIS, which provides a means for client machines to look up
UIDs and GIDs on the server. Describing how to set up NIS is well
beyond the scope of this document; however, there is an NIS HOWTO
available.
A second option is to use a program like rsync (see
http://rsync.samba.org) to physically copy password files from the server
to the clients. This is more secure than NIS if set up correctly;
however, if you do not know what you are doing, you can run the risk
of wiping out the password file on the client machines and rendering
them temporarily unusable.
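As a hedged sketch (the hostname is just this document's running
example), such a push might look like the block below. To make the
command safe to try, it copies into a local scratch directory
instead of a real client; the remote form is shown in the comment.

```shell
#!/bin/sh
# Sketch: copy the password file the way one might sync it to a
# client. A real invocation would target the client, e.g.:
#   rsync -a /etc/passwd /etc/group slave1.foo.com:/etc/
# Here we sync into /tmp/pwsync instead, falling back to cp if
# rsync is not installed.
mkdir -p /tmp/pwsync
if command -v rsync >/dev/null 2>&1; then
    rsync -a /etc/passwd /tmp/pwsync/
else
    cp -p /etc/passwd /tmp/pwsync/
fi
ls -l /tmp/pwsync/passwd
```

Note that blindly overwriting /etc/passwd on a client is exactly the
kind of operation that can lock you out if it goes wrong, which is
why testing against a scratch directory first is a good habit.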
5. OPTIMIZING NFS PERFORMANCE
Getting network settings right can improve NFS performance many times
over -- a tenfold increase in transfer speeds is not unheard of.
The most important things to get right are the rsize and wsize mount
options. Other factors listed below may affect people with particular
hardware setups.
5.1. Setting Block Size to Optimize Transfer Speeds
The rsize and wsize mount options specify the size of the chunks of
data that the client and server pass back and forth to each other.
If no rsize and wsize options are specified, the default varies by
which version of NFS we are using. 4096 bytes is the most common
default, although for TCP-based mounts in 2.2 kernels, and for all
mounts beginning with 2.4 kernels, the server specifies the default
block size.
The defaults may be too big or too small. On the one hand, some
combinations of Linux kernels and network cards (largely on older
machines) cannot handle blocks that large. On the other hand, if they
can handle larger blocks, a bigger size might be faster.
So we'll want to experiment and find an rsize and wsize that works
and is as fast as possible. You can test the speed of your options
with some simple commands.
The first of these commands transfers 16384 blocks of 16k each from
the special file /dev/zero (which if you read it just spits out zeros
_really_ fast) to the mounted partition. We will time it to see
how long it takes. So, from the client machine, type:
# time dd if=/dev/zero of=/mnt/home/testfile bs=16k count=16384
This creates a 256Mb file of zeroed bytes. In general, you should
create a file that's at least twice as large as the system RAM
on the server, but make sure you have enough disk space! Then read
back the file into the great black hole on the client machine
(/dev/null) by typing the following:
# time dd if=/mnt/home/testfile of=/dev/null bs=16k
Repeat this a few times and average how long it takes. Be sure to
unmount and remount the filesystem each time (both on the client and,
if you are zealous, locally on the server as well), which should clear
out any caches.
Then umount, and mount again with a larger and smaller block size.
They should probably be multiples of 1024, and not larger than
8192 bytes since that's the maximum size in NFS version 2. (Though
if you are using Version 3 you might want to try up to 32768.)
Wisdom has it that the block size should be a power of two since most
of the parameters that would constrain it (such as file system block
sizes and network packet size) are also powers of two. However, some
users have reported better successes with block sizes that are not
powers of two but are still multiples of the file system block size
and the network packet size.
Directly after mounting with a larger size, cd into the mounted
file system and do things like ls, explore the fs a bit to make
sure everything is as it should be. If the rsize/wsize is too large,
the symptoms are very odd and not 100% obvious. Typical symptoms
are incomplete file lists when doing 'ls' with no error messages,
or files mysteriously failing to read with no error messages. After
establishing that the given rsize/wsize works you can do the speed
tests again. Different server platforms are likely to have different
optimal sizes. SunOS and Solaris are reputedly a lot faster with
4096-byte blocks than with anything else.
You can also use the ping command with the -f and -s options to see
if you are experiencing heavy packet loss. See the man page for
ping for details.
Remember to edit /etc/fstab to reflect the rsize/wsize you found.
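As an illustration (the server name, mount point, and sizes below are
examples, not recommendations), the corresponding /etc/fstab entry on
the client might look like:

```
# /etc/fstab on the client -- sizes found by the timing tests above
server:/home  /mnt/home  nfs  rw,rsize=8192,wsize=8192  0  0
```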
5.2. Packet Size and Network Drivers
There are many shoddy network drivers available for Linux,
including for some fairly standard cards.
Try pinging back and forth between the two machines with large
packets using the -f and -s options with ping (see man ping for more
details), and see if a lot of packets get dropped or if they take a
long time for a reply. If so, you may have a problem with the
performance of your network card.
To correct such a problem, you may wish to reconfigure the packet
size that your network card uses. Very often there is a constraint
somewhere else in the network (such as a router) that causes a
smaller maximum packet size between two machines than what the
network cards on the machines are actually capable of. TCP should
autodiscover the appropriate packet size for a network, but UDP
will simply stay at a default value. So determining the appropriate
packet size is especially important if you are using NFS over UDP.
You can test for the network packet size using the tracepath command:
From the client machine, just type "tracepath [server] 2049" and the
path MTU should be reported at the bottom. You can then set the
MTU on your network card equal to the path MTU, by using the MTU option
to ifconfig, and see if fewer packets get dropped. See the ifconfig man
pages for details on how to reset the MTU.
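For example (the interface name and MTU value are illustrative;
substitute the path MTU that tracepath reported):

```
# Clamp eth0 to the discovered path MTU
ifconfig eth0 mtu 1400
```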
5.3. Number of Instances of NFSD
Most startup scripts, Linux and otherwise, start 8 instances of nfsd.
In the early days of NFS, Sun decided on this number as a rule of thumb,
and everyone else copied. There are no good measures of how many
instances are optimal, but a more heavily-trafficked server may require
more. If you are using a 2.4 or higher kernel and you want to see how
heavily each nfsd thread is being used, you can look at the file
/proc/net/rpc/nfsd. The last ten numbers on the "th" line in that
file indicate the number of seconds that the thread usage was at that
percentage of the maximum allowable. If you have a large number in the
top three deciles, you may wish to increase the number of nfsd
instances. This is done upon starting nfsd using the number of
instances as the command line option. See the nfsd man page for more
information.
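As a sketch (the helper name is ours, and the field layout is taken
from 2.4-era kernels as described above), the last ten numbers of the
"th" line can be summarized with a little awk:

```shell
# Sum the top three deciles of the "th" line in /proc/net/rpc/nfsd.
# The file path is a parameter only so the parsing can be tried
# against a sample file.
nfsd_top_deciles() {
    awk '/^th/ { print $(NF-2) + $(NF-1) + $NF; exit }' "${1:-/proc/net/rpc/nfsd}"
}
```

If the printed number of seconds is large relative to your uptime,
more nfsd instances may help; typically that means changing the count
passed on the nfsd command line (e.g., "rpc.nfsd 16").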
5.4. Memory Limits on the Input Queue
On 2.2 and 2.4 kernels, the socket input queue, where requests
sit while they are being processed, has a small default
size limit of 64k. This means that if you are running 8 instances of
nfsd, each will only have 8k to store requests while it processes
them.
You should consider increasing this number to at least 256k for nfsd.
This limit is set in the proc file system using the files
/proc/sys/net/core/rmem_default and /proc/sys/net/core/rmem_max.
It can be increased in three steps; the following method is a bit of
a hack but should work and should not cause any problems:
a. Increase the size listed in the file:
echo 262144 > /proc/sys/net/core/rmem_default
echo 262144 > /proc/sys/net/core/rmem_max
b. Restart nfsd, e.g., type "/etc/rc.d/init.d/nfsd restart"
on RedHat
c. Return the size limits to their normal size in case other kernel
systems depend on it:
echo 65536 > /proc/sys/net/core/rmem_default
echo 65536 > /proc/sys/net/core/rmem_max
Be sure to perform this last step because machines have been reported
to crash if these values are left changed for long periods of time.
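The three steps can be wrapped in a small script. This is a sketch,
not a drop-in tool: the proc paths and the restart command are
parameters here purely so the sequencing can be exercised; on a real
server you would use the defaults and your distribution's nfsd init
script.

```shell
# Temporarily enlarge the socket input queue while nfsd restarts,
# then restore the old limits (see the warning about leaving them raised).
bump_rmem_for_nfsd() {
    def=${1:-/proc/sys/net/core/rmem_default}
    max=${2:-/proc/sys/net/core/rmem_max}
    restart=${3:-/etc/rc.d/init.d/nfs}     # Red Hat-style init script

    echo 262144 > "$def"                   # step a: raise both limits
    echo 262144 > "$max"
    "$restart" restart                     # step b: nfsd inherits the big queue
    echo 65536 > "$def"                    # step c: put the limits back
    echo 65536 > "$max"
}
```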
5.5. Overflow of Fragmented Packets
The NFS protocol uses fragmented UDP packets. The kernel has
a limit on how many fragments of incomplete packets it can
buffer before it starts throwing away packets. With 2.2 kernels
that support the /proc filesystem, you can specify how many by
editing the files /proc/sys/net/ipv4/ipfrag_high_thresh and
/proc/sys/net/ipv4/ipfrag_low_thresh.
Once the number of unprocessed, fragmented packets reaches the
number specified by ipfrag_high_thresh (in bytes), the kernel
will simply start throwing away fragmented packets until the number
of incomplete packets reaches the number specified
by ipfrag_low_thresh. (With 2.2 kernels, the default is usually 256K).
This will look like packet loss, and if the high threshold is
reached your server performance drops a lot.
One way to monitor this is to look at the field IP: ReasmFails in the
file /proc/net/snmp; if it goes up too quickly during heavy file
activity, you may have a problem. Good alternative values for
ipfrag_high_thresh and ipfrag_low_thresh have not been reported; if
you have a good experience with a particular value, please let the
maintainers and development team know.
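One way to watch this counter is a small awk helper. The file path is
a parameter only so that the parsing can be tried against a sample
file; normally you would read /proc/net/snmp directly:

```shell
# Print the ReasmFails counter from the Ip: lines of /proc/net/snmp.
# The first Ip: line is the header, the second holds the values.
reasm_fails() {
    awk '/^Ip:/ {
        if (!col) { for (i = 2; i <= NF; i++) if ($i == "ReasmFails") col = i }
        else      { print $col; exit }
    }' "${1:-/proc/net/snmp}"
}
```

Run it twice, a minute apart, during heavy file activity; a fast-rising
difference suggests the fragment buffers are overflowing.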
5.6. Turning Off Autonegotiation of NICs and Hubs
Sometimes network cards will auto-negotiate badly with
hubs and switches and this can have strange effects.
Moreover, hubs may lose packets if they have different
ports running at different speeds. Try playing around
with the network speed and duplex settings.
5.7. Non-NFS-Related Means of Enhancing Server Performance
Offering general guidelines for setting up a well-functioning
file server is outside the scope of this document, but a few
hints may be worth mentioning: First, RAID 5 gives you good
read speeds but lousy write speeds; consider RAID 1/0 if both
write speed and redundancy are important. Second, using a
journalling filesystem will drastically reduce your reboot
time in the event of a system crash; as of this writing, ext3
(ftp://ftp.uk.linux.org/pub/linux/sct/fs/jfs/) was the only
journalling filesystem that worked correctly with
NFS version 3, but no doubt that will change soon.
In particular, it looks like Reiserfs should work with NFS version 3
on 2.4 kernels, though not yet on 2.2 kernels. Finally,
using an automounter (such as autofs or amd) may prevent
hangs if you cross-mount files on your machines (whether on
purpose or by oversight) and one of those machines goes down.
See the Automount Mini-HOWTO for details.
6. SECURITY AND NFS
This list of security tips and explanations will not make your site
completely secure. NOTHING will make your site completely secure. This
chapter may help you get an idea of the security problems with NFS.
It is not a comprehensive guide and it will always be undergoing
changes. If you have any tips or hints to give us please send them to
the HOWTO maintainer.
If you're on a network with no access to the outside world (not even a
modem) and you trust all the internal machines and all your users then
this section will be of no use to you. However, it's our belief that
there are relatively few networks in this situation so we would suggest
reading this section thoroughly for anyone setting up NFS.
There are two steps to obtaining access to data via NFS. The first step
is mount access. Mount access is achieved by the client machine
attempting to attach to the server. The security for this is provided
by the /etc/exports file. This file lists the names or IP addresses of
machines that are allowed to access a share point. If the client's IP
address matches one of the entries in the access list then it will be
allowed to mount. This is not terribly secure. If someone is capable of
spoofing or taking over a trusted address then they can access your
mount points. To give a real-world example of this type of
"authentication": This is equivalent to someone introducing themselves
to you and you believing they are who they claim to be because they are
wearing a sticker that says "Hello, My Name is ...."
The second step is file access. This is a function of normal file system
access controls and not a specialized function of NFS. Once the drive is
mounted the user and group permissions on the files take over access
control.
An example: bob on the server maps to the UserID 9999. Bob
makes a file on the server that is only accessible to the user (0600 in
octal). A client is allowed to mount the drive where the file is stored.
On the client mary maps to UserID 9999. This means that the client
user mary can access bob's file that is marked as only accessible by him.
It gets worse: if someone has root on the client machine, they can 'su -
[username]' and become ANY user. NFS will be none the wiser.
It's not all terrible. There are a few measures you can take on the
server to offset the danger of the clients. We will cover those shortly.
If you don't think the security measures apply to you, you're probably
wrong. Section 6.1 covers securing the portmapper, sections 6.2 and 6.3
cover server and client security respectively, and section 6.4 briefly
discusses proper firewalling for your NFS server.
It is also critical that all of your nfs daemons and client programs
are current. If you think that a flaw is too recently announced for it
to be a problem for you, then you've probably already been compromised.
A good way to keep up to date on security alerts is to subscribe to the
bugtraq mailing lists. You can read up on how to subscribe and various
other information about bugtraq here:
http://www.securityfocus.com/forums/bugtraq/faq.html
Additionally searching for NFS at securityfocus.com's search engine will
show you all security reports pertaining to NFS.
You should also regularly check CERT advisories. See the CERT web page
at www.cert.org.
6.1. The portmapper
The portmapper keeps a list of what services are running on what ports.
This list is used by a connecting machine to see what ports it wants to
talk to in order to access certain services.
The portmapper is not in as bad a shape as a few years ago but it is
still a point of worry for many sys admins. The portmapper, like NFS and
NIS, should not really have connections made to it outside of a trusted
local area network. If you have to expose these daemons to the outside
world - be careful and keep up diligent monitoring of your systems.
Not all Linux distributions were created equal. Some seemingly up-to-
date distributions do not include a securable portmapper.
The easy way to check if your portmapper is good or not is to run
strings(1) and see if it reads the relevant files, /etc/hosts.deny and
/etc/hosts.allow. Assuming your portmapper is /sbin/portmap you can
check it with this command: "strings /sbin/portmap | grep hosts".
On a securable machine it comes up something like this:
/etc/hosts.allow
/etc/hosts.deny
@(#) hosts_ctl.c 1.4 94/12/28 17:42:27
@(#) hosts_access.c 1.21 97/02/12 02:13:22
First we edit /etc/hosts.deny. It should contain the line
portmap: ALL
which will deny access to everyone. While it is closed run
'rpcinfo -p' just to check that your portmapper really reads and obeys
this file. rpcinfo should give no output, or possibly an error message.
The files /etc/hosts.allow and /etc/hosts.deny take effect immediately
after you save them. No daemon needs to be restarted.
Closing the portmapper for everyone is a bit drastic, so we open it
again by editing /etc/hosts.allow. But first we need to figure out
what to put in it. It should basically list all machines that should
have access to your portmapper. On a run-of-the-mill Linux system
there are very few machines that need any access for any reason. The
portmapper administers nfsd, mountd, ypbind/ypserv, pcnfsd, and 'r'
services like ruptime and rusers. Of these only nfsd, mountd,
ypbind/ypserv and perhaps pcnfsd are of any consequence. All machines
that need to access services on your machine should be allowed to do
that. Let's say that your machine's address is 192.168.0.254 and
that it lives on the subnet 192.168.0.0, and that all machines on the
subnet should have access to it (those are terms introduced by the
Networking-Overview HOWTO, go back and refresh your memory if you need
to). Then we write
portmap: 192.168.0.0/255.255.255.0
in hosts.allow. This is the same as the network address you give to
route and the subnet mask you give to ifconfig. For the device eth0
on this machine ifconfig should show
...
eth0 Link encap:Ethernet HWaddr 00:60:8C:96:D5:56
inet addr:192.168.0.254 Bcast:192.168.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:360315 errors:0 dropped:0 overruns:0
TX packets:179274 errors:0 dropped:0 overruns:0
Interrupt:10 Base address:0x320
...
and netstat -rn should show
Kernel routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
...
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 174412 eth0
...
(Network address in first column).
The hosts.deny and hosts.allow files are described in the manual pages
of the same names.
IMPORTANT: Do not put anything but IP NUMBERS in the portmap lines of
these files. Host name lookups can indirectly cause portmap activity
which will trigger host name lookups which can indirectly cause
portmap activity which will trigger...
Versions 0.2.0 and higher of the nfs-utils package also use the
hosts.allow and hosts.deny files, so you should put in entries for
lockd, statd, mountd, and rquotad in these files too.
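Continuing the example above (the 192.168.0.0/255.255.255.0 network is
illustrative), those entries in hosts.allow might look like:

```
portmap: 192.168.0.0/255.255.255.0
lockd:   192.168.0.0/255.255.255.0
mountd:  192.168.0.0/255.255.255.0
rquotad: 192.168.0.0/255.255.255.0
statd:   192.168.0.0/255.255.255.0
```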
The above things should make your server tighter. The only remaining
problem (Yeah, right!) is someone breaking root (or booting MS-DOS) on a
trusted machine and using that privilege to send requests from a
secure port as any user that person wants to be.
6.2. Server security: nfsd and mountd
On the server we can decide that we don't want to trust the client's
root account. We can do that by using the root_squash option in
exports:
/home slave1(rw,root_squash)
This is, in fact, the default. It should always be turned on unless you
have a VERY good reason to turn it off. To turn it off use the
no_root_squash option.
Now, if a user with UID 0 (i.e., root's user ID number) on the client
attempts to access (read, write, delete) the file system, the server
substitutes the UID of the server's `nobody' account, which means that
the root user on the client can't access or change files that only root
on the server can access or change. That's good, and you should
probably use root_squash on all the file systems you export. "But the
root user on the client can still use 'su' to become any other user and
access and change that user's files!" say you. To which the answer is:
Yes, and that's the way it is, and has to be with Unix and NFS. This
has one important implication: All important binaries and files should be
owned by root, and not bin or other non-root account, since the only
account the clients root user cannot access is the server's root
account. In the exports(5) man page there are several other squash
options listed so that you can decide to mistrust whomever you (don't)
like on the clients. This man page also describes options to squash any
UID and GID range you want to. However, these man pages jump a bit
ahead of the actual implementation; as of nfs-utils 0.2.1, only root-
and all-squashing actually work.
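For illustration (the path, network, and anonymous uid/gid are made-up
values), an export squashing every remote user to a single local
account might look like:

```
# /etc/exports -- map all remote users to uid/gid 65534
/pub  192.168.0.0/255.255.255.0(ro,all_squash,anonuid=65534,anongid=65534)
```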
The TCP ports below 1024 are reserved for root's use (and therefore
sometimes referred to as "secure ports"). A non-root user cannot bind
to these ports.
Adding the secure option to an /etc/exports entry forces it to run on a
port below 1024, so that a malicious non-root user cannot come along and
open up a spoofed NFS dialogue on a non-reserved port. This option is set
by default.
6.3. Client Security
6.3.1. The nosuid mount option
On the client we can decide that we don't want to trust the server too
much, in a couple of ways, with options to mount. For example, we can
forbid suid programs to work off the NFS file system with the nosuid
option. Some unix programs, such as passwd, are called "suid" programs:
They set the id of the person running them to whomever is the owner of
the file. If a file is owned by root and is suid, then the program will
execute as root, so that they can perform operations (such as writing to
the password file) that only root is allowed to do. Using the nosuid
option is a good idea and you should consider using this with all NFS
mounted disks. It means that the server's root user cannot make a suid-
root program on the file system, log in to the client as a normal user
and then use the suid-root program to become root on the client too.
One could also forbid execution of files on the mounted file system
altogether with the noexec option. But this is more likely to be
impractical than nosuid since a file system is likely to at least
contain some scripts or programs that need to be executed.
6.3.2. The broken_suid mount option
Some older programs (xterm being one of them) used to rely on the idea
that root can write everywhere. This will break under new kernels on
NFS mounts. The security implications are that programs that do this
type of suid action can potentially be used to change your apparent uid
on NFS servers doing uid mapping. So the default has been to disable this
"broken_suid" in the Linux kernel.
The long and short of it is this: If you're using an old linux
distribution, some sort of old setuid program or an older unix of some
type you _might_ have to mount from your clients with the broken_suid
option to mount. However, most recent unixes and linux distros have
xterm and such programs just as a normal executable with no setuid
status; they call programs to do their setuid work.
You enter the above options in the options column, with the rsize and
wsize, separated by commas.
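For example (the server name and mount point are placeholders), an
fstab line combining these options might read:

```
# /etc/fstab -- nosuid alongside the usual rsize/wsize options
server:/home  /mnt/home  nfs  rw,nosuid,rsize=8192,wsize=8192  0  0
```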
6.3.3. Securing portmapper, rpc.statd, and rpc.lockd on the client
In the current (2.2.18+) implementation of NFS, full file locking is
supported. This means that rpc.statd and rpc.lockd must be running on
the client in order for locks to function correctly. These services
require the portmapper to be running. So, most of the problems you will
find with NFS on the server you may also be plagued with on the client.
Read through the portmapper section above for information on securing
the portmapper.
6.4. NFS and firewalls (ipchains and Netfilter)
IPchains (under the 2.2.X kernels) and netfilter (under the 2.4.x
kernels) allow a good level of security - instead of relying on the
daemon (or in this case the tcp wrapper) to determine who can connect,
the connection attempt is allowed or disallowed at a lower level. In
this case you can stop the connection much earlier and more globally,
which can protect you from all sorts of attacks.
Describing how to set up a Linux firewall is well beyond the scope of
this document. Interested readers may wish to read the Firewall-HOWTO
or the IPCHAINS-HOWTO. For users of kernel 2.4 and above you might want
to visit the netfilter webpage at: http://netfilter.filewatcher.org.
If you are already familiar with the workings of ipchains or netfilter
this section will give you a few tips on how to better set up your
firewall to work with NFS.
A good rule to follow for your firewall configuration is to deny all, and
allow only some - this helps to keep you from accidentally allowing more
than you intended.
Ports to be concerned with:
a. The portmapper is on 111. (tcp and udp)
b. nfsd is on 2049 and it can be TCP and UDP. Although NFS over TCP
is currently experimental on the server end and you will usually
just see UDP on the server, using TCP is quite stable on the
client end.
c. mountd, lockd, and statd float around (which is why we need the
portmapper to begin with) - this causes problems.
You basically have two options to deal with it:
i. You can more or less do a deny all on connecting ports
but explicitly allow most ports for certain IPs.
ii. More recent versions of these utilities have a "-p" option
that allows you to assign them to a certain port. See the
man pages to be sure if your version supports this. You can
then allow access to the ports you have specified for your
NFS client machines, and seal off all other ports, even for
your local network.
Using IPCHAINS, a simple firewall using the first option would look
something like this:
ipchains -A input -f -j ACCEPT
ipchains -A input -s trusted.net.here/trusted.netmask -d host.ip/255.255.255.255 -j ACCEPT
ipchains -A input -s 0/0 -d 0/0 -p 6 -j DENY -y -l
ipchains -A input -s 0/0 -d 0/0 -p 17 -j DENY -l
The equivalent set of commands in netfilter (the firewalling tool
in 2.4) is below. Note that iptables has no DENY target; packets are
dropped with DROP, and logging is done by a separate LOG rule:
iptables -A INPUT -f -j ACCEPT
iptables -A INPUT -s trusted.net.here/trusted.netmask -d \
host.ip/255.255.255.255 -j ACCEPT
iptables -A INPUT -p tcp --syn -j LOG --log-level 5
iptables -A INPUT -p tcp --syn -j DROP
iptables -A INPUT -p udp -j LOG --log-level 5
iptables -A INPUT -p udp -j DROP
The first line says to accept all packet fragments (except the first
packet fragment which will be treated as a normal packet). In theory
no packet will pass through until it is reassembled, and it won't be
reassembled unless the first packet fragment is passed. Of course
there are attacks that can be generated by overloading a machine
with packet fragments. But NFS won't work correctly unless you
let fragments through. See Section 7.7 for details.
The remaining lines say to trust your local networks and to deny and
log everything else. It's not great and more specific rules pay off,
but more specific rules are outside of the scope of this discussion.
Some pointers if you'd like to be more paranoid or strict about your
rules. If you choose to reset your firewall rules each time statd,
rquotad, mountd or lockd move (which is possible) you'll want to make
sure you allow fragments to your NFS server FROM your NFS client(s).
If you don't you will get some very interesting reports from the kernel
regarding packets being denied. The messages will say that a packet
from port 65535 on the client to 65535 on the server is being denied.
Allowing fragments will solve this.
6.5. Summary
If you use the hosts.allow/deny, root_squash, nosuid and privileged
port features in the portmapper/nfs software you avoid many of the
presently known bugs in nfs and can almost feel secure about that at
least. But still, after all that: When an intruder has access to your
network, s/he can make strange commands appear in your .forward or
read your mail when /home or /var/mail is NFS exported. For the
same reason, you should never access your PGP private key over NFS.
Or at least you should know the risk involved. And now you know a bit
of it.
NFS and the portmapper makes up a complex subsystem and therefore it's
not totally unlikely that new bugs will be discovered, either in the
basic design or the implementation we use. There might even be holes
known now, which someone is abusing. But that's life.
7. TROUBLESHOOTING
This is intended as a step-by-step guide to what to do when
things go wrong using NFS. Usually trouble first rears its
head on the client end, so this diagnostic will begin there.
Symptom 1: Unable to See Files on a Mounted File System
First, check to see if the file system is actually mounted.
There are several ways of doing this. The most reliable
way is to look at the file /proc/mounts, which will list all
mounted filesystems and give details about them. If
this doesn't work (for example if you don't have the /proc
filesystem compiled into your kernel), you can type
mount with no arguments, although you get less information.
If the file system appears to be mounted, then you may
have mounted another file system on top of it (in which
case you should unmount and remount both volumes), or you
may have exported the file system on the server before you
mounted it there, in which case NFS is exporting the underlying
mount point (if so then you need to restart NFS on the
server).
If the file system is not mounted, then attempt to mount it.
If this does not work, see Symptom 3.
Symptom 2: File requests hang or timeout waiting for access to the file
This usually means that the client is unable to communicate with
the server. See Symptom 3b.
Symptom 3: Unable to mount a file system
There are two common errors that mount produces when
it is unable to mount a volume. These are:
a. failed, reason given by server: Permission denied
This means that the server does not recognize that you
have access to the volume.
i. Check your /etc/exports file and make sure that the
volume is exported and that your client has the right
kind of access to it. For example, if a client only
has read access then you have to mount the volume
with the ro option rather than the rw option.
ii. Make sure that you have told NFS to register any
changes you made to /etc/exports since starting nfsd
by running the exportfs command. Be sure to type
exportfs -ra to be extra certain that the exports are
being re-read.
iii. Check the file /proc/fs/nfs/exports and make sure the
volume and client are listed correctly. (You can also
look at the file /var/lib/nfs/xtab for an unabridged
list of how all the active export options are set.) If they
are not, then you have not re-exported properly. If they
are listed, make sure the server recognizes your
client as being the machine you think it is. For
example, you may have an old listing for the client
in /etc/hosts that is throwing off the server, or
you may not have listed the client's complete address
and it may be resolving to a machine in a different
domain. Try to ping the client from the server, and try
to ping the server from the client. If this doesn't work,
or if there is packet loss, you may have lower-level network
problems.
b. RPC: Program Not Registered (or another "RPC" error):
This means that the client does not detect NFS running
on the server. This could be for several reasons.
i. First, check that NFS actually is running on the
server by typing rpcinfo -p on the server.
You should see something like this:
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100011 1 udp 749 rquotad
100011 2 udp 749 rquotad
100005 1 udp 759 mountd
100005 1 tcp 761 mountd
100005 2 udp 764 mountd
100005 2 tcp 766 mountd
100005 3 udp 769 mountd
100005 3 tcp 771 mountd
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
300019 1 tcp 830 amd
300019 1 udp 831 amd
100024 1 udp 944 status
100024 1 tcp 946 status
100021 1 udp 1042 nlockmgr
100021 3 udp 1042 nlockmgr
100021 4 udp 1042 nlockmgr
100021 1 tcp 1629 nlockmgr
100021 3 tcp 1629 nlockmgr
100021 4 tcp 1629 nlockmgr
This says that we have NFS versions 2 and 3, rpc.statd
version 1, network lock manager (the service name for
rpc.lockd) versions 1, 3, and 4. There are also different
service listings depending on whether NFS is travelling over
TCP or UDP. UDP is usually (but not always) the default
unless TCP is explicitly requested.
If you do not see at least portmapper, nfs, and mountd, then
you need to restart NFS. If you are not able to restart
successfully, proceed to Symptom 9.
ii. Now check to make sure you can see it from the client.
On the client, type rpcinfo -p [server] where [server]
is the DNS name or IP address of your server.
If you get a listing, then make sure that the type
of mount you are trying to perform is supported. For
example, if you are trying to mount using Version 3
NFS, make sure Version 3 is listed; if you are trying
to mount using NFS over TCP, make sure that is
registered. (Some non-Linux clients default to TCP).
See man rpcinfo for more details on how
to read the output. If the type of mount you are
trying to perform is not listed, try a different
type of mount.
If you get the error No Remote Programs Registered,
then you need to check your /etc/hosts.allow and
/etc/hosts.deny files on the server and make sure
your client actually is allowed access. Again, if the
entries appear correct, check /etc/hosts (or your
DNS server) and make sure that the machine is listed
correctly, and make sure you can ping the server from
the client. Also check the error logs on the system
for helpful messages: Authentication errors from bad
/etc/hosts.allow entries will usually appear in
/var/log/messages, but may appear somewhere else depending
on how your system logs are set up. The man pages
for syslog can help you figure out how your logs are
set up. Finally, some older operating systems may
behave badly when routes between the two machines
are asymmetric. Try typing "tracepath [server]" from
the client and see if the word "asymmetric" shows up
anywhere in the output. If it does then this may
be causing packet loss. However asymmetric routes are
not usually a problem on recent linux distributions.
If you get the error Remote system error - No route
to host, but you can ping the server correctly,
then you are the victim of an overzealous
firewall. Check any firewalls that may be set up,
either on the server or on any routers in between
the client and the server. Look at the man pages
for ipchains, netfilter, and ipfwadm, as well as
the IPChains-HOWTO and the Firewall-HOWTO for help.
Symptom 4: I do not have permission to access files on the
mounted volume.
This could be one of two problems.
If it is a write permission problem, check the export
options on the server by looking