How to install NFSD under SlugOS/BE (aka OpenSlug)
This HowTo may be a little chaotic and incomplete. It took some tweaking to get things running, and some steps may be unnecessary or out of order. If you use this HowTo and find flaws, please correct them!
(Note: "nfsd", described here, is the module for providing an NFS Server on the slug. To mount files on the slug from other servers, the nodule you need is "nfs".)
The following steps are needed:
- Make sure that your slug is up to date (ipkg update; ipkg upgrade).
- Install nfsd (ipkg install kernel-module-nfsd).
- Install nfs-utils (ipkg install nfs-utils).
- Optionally, install module-init-tools (ipkg install module-init-tools). This will give you a better working modprobe.
- Create a file /etc/exports. The format of this file is
<directory to share> <computer to share to>(options)
For example:
/usr/public 192.168.1.4(rw)
where 192.168.1.4 is the PC to which you want to export the directory.
- Either restart the box, or start the daemons by hand with
sh /etc/init.d/nfsserver start
On the PC with IP address 192.168.1.4 you can then just say
mount 192.168.1.77:/usr/public /mnt
assuming your slug is on 192.168.1.77. You can of course also use the name of the slug instead of the IP address.
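The steps above can be condensed into the following sketch. The install, start and mount commands are left as comments (they need the slug and root access); the exports entry is written to a scratch file here, standing in for /etc/exports, so the snippet can be tried harmlessly anywhere:

```shell
# One-time setup on the slug (commented out: needs ipkg and root):
#   ipkg update && ipkg upgrade
#   ipkg install kernel-module-nfsd nfs-utils module-init-tools

# The exports entry itself; on the slug this line belongs in /etc/exports.
EXPORTS=$(mktemp)
echo '/usr/public 192.168.1.4(rw)' > "$EXPORTS"
cat "$EXPORTS"

# Then on the slug:   sh /etc/init.d/nfsserver start
# And on the client:  mount 192.168.1.77:/usr/public /mnt
rm -f "$EXPORTS"
```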
Note: I wasn't able to install nfsd at first, because update-rc.d was missing. If this happens, try "export PATH=$PATH:/usr/sbin". If that works, add this path in your /etc/profile.
Some additional options for the /etc/exports file:
The example above is a fairly secure way to export the /usr/public directory - it permits only the computer at IP address 192.168.1.4 to mount it. In cases where multiple computers wish to mount the directory, or if you are using DHCP and can't guarantee exactly which IP address a computer will be using from day to day, a slightly different syntax can be used to specify a range of accepted IP addresses. For example:
/usr/public 192.168.1.0/24(rw)
specifies that any IP address in the range 192.168.1.0 to 192.168.1.255 will be allowed. If you prefer the traditional netmask way of expressing this, the format
/usr/public 192.168.1.0/255.255.255.0(rw)
may be used instead.
The (rw) option should be self-explanatory; it can be replaced by (ro) in order to ensure that other systems can only mount this exported directory for read access.
NFS and the super-user (root) have an interesting relationship, one which can trip up unsuspecting users. Specifically, there's no guarantee that the owner of the root account on an NFS client should have the same root privileges on the NFS server. In order to make sure that the client does not gain more privileges than it should, NFS maps the client's root user to the "nobody" user on the server. This behavior is generally regarded as a feature, but it may not be how you wish your network to operate. If you can say that the root user in your network is "trusted", then you can explicitly inform NFS that it is not to perform this mapping; i.e. the root account on the client is to have the full privileges accorded to the superuser on the mounted filesystem:
/usr/public 192.168.1.4(rw,no_root_squash)
Fine Print: a note on permissions with NFS in general. NFS uses the user numbers and group numbers associated with a process, not the textual user ids or textual group names. Basically this means that if you do not keep your passwd files in close synchronization, you can introduce a great deal of confusion into your network, if not an outright security problem. For example, assume user id "fred" is added on one system and assigned UID number 501, while on another system somewhere, user id "ethel" is added and given the UID number 501 on that system. Since NFS uses only the UID, on an NFS filesystem shared between these two systems fred and ethel have the same privileges -- the two users are one and the same from the NFS filesystem's point of view. This is easily avoided by making sure that the UID numbers and GID numbers in the passwd files are consistent across the various systems in the network.
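A quick way to see that only the number matters: stat reports the stored numeric UID directly, and the owner name you normally see is merely a local lookup on whichever machine runs the command (the numbers shown will of course differ per system):

```shell
# Create a scratch file and show its owner both as the raw numeric UID
# (which is all NFS transmits) and as the locally-resolved name.
f=$(mktemp)
stat -c 'uid=%u name=%U' "$f"
rm -f "$f"
```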
Currently with nfs-utils version 1.0.6-r2, these additional steps must be taken:
- mkdir /var/lib/nfs
- touch /var/lib/nfs/rmtab
- touch /etc/default/nfsd (actually this step might not be needed)
(Not required as of ver 1.0.6-r3)
To get NFSD to automatically start at boot time:
(Not required as of ver 1.0.6-r3)
Run the command "update-rc.d nfsserver defaults", which will create the scripts to start it in runlevels 2-5 and stop it in 0, 1 and 6. Some documentation on update-rc.d is here: http://wiki.linuxquestions.org/wiki/Update-rc.d
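What update-rc.d does under the hood is create S (start) and K (kill) symlinks in the rcN.d directories. The layout it produces for "defaults" looks roughly like this, recreated here in a scratch directory since running the real command needs the slug (the sequence number 20 is update-rc.d's default, an assumption on my part for this sketch):

```shell
root=$(mktemp -d)
mkdir -p "$root/etc/init.d"
touch "$root/etc/init.d/nfsserver"
for n in 2 3 4 5; do                        # start in runlevels 2-5
  mkdir -p "$root/etc/rc$n.d"
  ln -s ../init.d/nfsserver "$root/etc/rc$n.d/S20nfsserver"
done
for n in 0 1 6; do                          # stop in runlevels 0, 1 and 6
  mkdir -p "$root/etc/rc$n.d"
  ln -s ../init.d/nfsserver "$root/etc/rc$n.d/K20nfsserver"
done
ls "$root"/etc/rc?.d
rm -rf "$root"
```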
Problem with NFSD
If, when you type /etc/init.d/nfsserver start, the following line appears in the log:
Sep 30 15:49:07 (none) daemon.err nfsd: nfssvc: No such device
You must load the nfsd kernel module first:
modprobe nfsd
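The "No such device" error usually means the nfsd kernel module is not loaded. A quick check, runnable on any Linux box:

```shell
# See whether the running kernel currently has the nfsd module loaded;
# if not, loading it (modprobe nfsd) is the usual fix for this error.
if grep -q '^nfsd ' /proc/modules; then
  echo "nfsd module loaded"
else
  echo "nfsd module not loaded; try: modprobe nfsd"
fi
```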
Hanging on Startup
If /etc/init.d/nfsserver start hangs at "starting 8 nfsd kernel threads:" every time, it helped me to reinstall portmap (yes, it is already installed, but something apparently went wrong):
ipkg -force-reinstall install portmap
After that, the NFS Server started flawlessly for me.
Portmap tip #2
If you get a message like this in dmesg when (re)starting portmap and nfs:
RPC: failed to contact portmap (errno -5).
then install portmap-utils and restart portmap/nfs again.
Some exportfs problems
This applies whether you are using an /etc/exports file, or whether you are using separate exportfs commands. Essentially, for each file to be exported, you have to specify 3 things:
- The host(s) to export it to ('*' means <world>);
- The directory or file to export;
- The options to export it with.
exportfs -o ro,sync,no_root_squash clientname:/media/sda1
The exportfs command always succeeds (bar obvious syntactic glitches), and records any remotely sensible request in /var/lib/nfs/etab (not in xtab, as some manual pages seem to imply). If something does appear in xtab, it indicates that it got as far as trying to tell the kernel about it. So if, for example, the thing to be exported does not exist, it goes into etab only.
There is a further file rmtab, which is supposed to keep count of which external clients have actually mounted things so that, if SlugOS crashes, those clients will continue to see those things as soon as it is rebooted. However, Linux sometimes fails to keep rmtab up to date, as we shall see.
There are two modes in which exportfs can be used:
- legacy mode (all systems prior to Linux 2.6, and even some after that, including SlugOS in its default state);
- new-cache mode.
See the man page for exportfs(8) for details.
In legacy mode, if the path to the object to be exported exists, exportfs immediately sends the export request to the kernel, and records the fact in /var/lib/nfs/xtab. If it does not exist, it just goes in etab on the off-chance that you will create it later. This can be handy if you subsequently hotplug a USB stick, causing /media/sda1 to be created. Nothing happens immediately, but as soon as clientname tries to mount it, the mountd will sort it all out, and it will then appear in xtab.
In new-cache mode, no request is ever sent to the kernel until some clientname attempts to mount it. So, although the xtab file still exists, it is always empty.
To use your slug in new-cache mode, include the following line in /etc/fstab:
nfsd /proc/fs/nfsd nfsd defaults 0 0
(Note: if you do that mount manually later on, you will need to restart the mountd by giving "/etc/init.d/nfsserver restart".) In new-cache mode, you will find various interesting things in /proc/fs/nfsd, including a file exports which indicates what the kernel has actually exported.
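Whether the nfsd pseudo-filesystem is currently mounted (i.e. whether you are in new-cache mode) can be checked harmlessly on any Linux system:

```shell
# /proc/fs/nfsd only has content while the nfsd pseudo-filesystem is
# mounted; otherwise the server is operating in legacy mode.
if [ -e /proc/fs/nfsd/exports ]; then
  echo "new-cache mode; kernel exports follow:"
  cat /proc/fs/nfsd/exports
else
  echo "legacy mode (nfsd pseudo-filesystem not mounted)"
fi
```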
Note however that, although new-cache mode works fine for exporting, mounting and reading/writing files from remote clients, it sometimes records more mounts in rmtab than have actually happened, possibly leading to exporting too much when rebooting after a crash. This is a bug in Linux which they have not yet fixed (and seem unsure whether and how to fix), so if that matters to you, stick with legacy mode.
So if you exported something (and especially if, in legacy mode, it got into xtab), does this mean it got, or will get, exported? No! Not if the kernel takes a dislike to it. You will get an accurate view of what the kernel thinks if you look in /proc/net/rpc/nfsd.export/content (or in /proc/fs/nfsd/exports in new-cache mode), but to find out why something is not there, you will have to go look in HowTo.ProbingTheKernel. So surely all this uncertainty is a bug in exportfs? Not so; it was intended that way - it is a "feature".
So here are some of the things that the kernel dislikes:
- Anything other than a file or a directory (so no devices, or fifos, or streams, or other funnies, though soft links work if the thing at the far end works). Note the thing that you export does not have to be a complete filesystem; once a filesystem is mounted somewhere, you can export any file or directory within it.
- Things such as /sys, etc., which are not genuine files at all.
- Certain types of filesystem which NFSD just does not understand, notably NFS, SMBFS, NCPFS, CODA and AFS. These are all either mounted from elsewhere, or are part of some distributed file system; either way, they are not stored on this server, and the client would be better off mounting them directly from elsewhere.
- Filesystems where nfsd support has not been coded in. The only example likely to be encountered on a slug is JFFS2, which is used for the onboard Flash (so don't expect to be able to mount that from outside, though it is rumoured that this is being worked on). The other examples are mostly weird or obsolete systems unlikely to be encountered.
So here, by contrast, are the ones that do work:
REISERFS OCFS2 NTFS JFS ISOFS GFS2 FAT/VFAT/MSDOS EXT2/3/4 EFS CIFS TMPFS XFS
But there is another hurdle to overcome. Anything you want to export is either itself a mount point, or one of its parents will be a mount point (ultimately, even root (/) must be mounted somewhere). If that mount point is a genuine device (i.e. you can find it in /dev), well and good - the kernel can construct a filehandle out of its major and minor device numbers. But if the mount point is not a device (anything on TMPFS is the commonest example), then you have to specify a unique fsid number yourself, e.g.
exportfs -o sync,fsid=2 clientname:/tmp
(which actually exports /var/volatile/tmp, of course). To check that you do not use the same number twice, you should look in /proc/net/rpc/nfsd.fh/content, where you will see something like:
#domain fsidtype fsid [path]
clientname 0 0x0008000100000001 /media/sda1
clientname 1 0x00000002 /var/volatile/tmp
clientname 1 0x00000000 /
which shows the normal fsidtype 0 for the /media/sda1 from my first example, on which the device /dev/sda1 (a USB stick) had been mounted earlier, together with the fsidtype 1 (for numeric fsids) from my second example. Note that the fsid value zero is reserved for the root of the whole filesystem, as also shown.
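The content file is plain whitespace-separated text, so checking which numeric fsids are already taken can be scripted. Here the sample above is fed from a here-document, since the real /proc/net/rpc/nfsd.fh/content only exists on a running NFS server:

```shell
# Print the fsid values of all entries with fsidtype 1 (numeric fsids);
# the header line starting with "#" has fsidtype != 1 and is skipped.
awk '$2 == 1 { print $3 }' <<'EOF'
#domain fsidtype fsid [path]
clientname 0 0x0008000100000001 /media/sda1
clientname 1 0x00000002 /var/volatile/tmp
clientname 1 0x00000000 /
EOF
```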
To see all the filesystem types your system currently knows about, look in /proc/filesystems, which marks with "nodev" all those which will need an explicit fsid.
NFS Version 4
Currently, SlugOS (and indeed most/all Linux systems) uses NFSv3 when acting as an NFS client to mount files from outside. However, we are concerned here with SlugOS acting as an NFS server, in which case it may well encounter outside clients that will try and connect to it using NFSv4. This is not yet a supported feature (as of SlugOS 4.8), though Linux claims to contain all that is necessary for the purpose. In fact, it does work sometimes (and would likely work even better if SlugOS were to upgrade to version 1.0.8 or later of nfs-utils). However, the first thing to ensure is that root (/) is exported with fsid=0 (which I mentioned above was the conventional value).
If you do not actually need to export /, exporting it to a fictitious client with fsid=0 might be sufficient.
Note also that the problems with rmtab are likely to be even worse with NFSv4, so it is not (yet) for the faint-hearted.