Gravelrash

HOWTO : NFS Server and Client Configuration


We will be configuring in a mode that allows both NFSv3 and NFSv4 clients to connect and access the same share.

 

Q. What are the benefits of an NFS share over a SAMBA share?

A. This would take an article in itself! But as a brief reasoning: in a closed network (where you know every device), NFS is a fine choice. With a good network, throughput is disgustingly fast while being less CPU intensive on the server. There are a lot of applications that support NFS "out of the box", KODI being a common example. It's very simple to set up, and you can toggle read-only on shares you don't need to be writeable. It only gets complicated if you want to start layering on security through LDAP/gssd. NFS is capable of very complex and complete security mechanisms... but you don't need them in this scenario.

 

Q. Why are we doing this and not limiting it to NFSv4?

A. Flexibility. This will allow almost any device you have that supports mounting NFS shares to connect, and will also allow the share to be used to boot your clients, which allows fast testing of new kernels etc.

 

Q. Why would I want to do this?

A. You can boot your dev boards from the NFS share, allowing you to test new kernels quickly and simply.

A. You can map your shared locations in multimedia clients quickly and easily.

A. Your friends like to struggle with SAMBA, so you can stay native and up your "geek cred".

 

This HOWTO will be split into distinct areas:

     "Section One" Install and Configure the server
     "Section Two" Configure client access.
     "Section Three" Boot from NFS share. (a separate document in its own right that will be constructed shortly).

 

 

"Section One" Install and Configure the server

Install NFS Server and Client

apt-get update ; apt-get upgrade ; apt-get install autofs nfs-kernel-server nfs-common --install-recommends -f -y ;

Now reboot

sync ; reboot ;

I will be making the following assumptions (just amend to suit your environment):
1. You already have a working system!
2. Your media to be shared is mounted via fstab by its label, in this case Disk1.
3. The mounted disk resides in the following folder: /media/Disk1
4. This mount has a folder inside called Data (i.e. /media/Disk1/Data).
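
Assumption 2 can be sketched as an fstab entry. The label, mount point and filesystem type below are the examples used in this HOWTO; the filesystem type (ext4 here) is an assumption, so amend to match your disk:

```
# /etc/fstab - mount the data disk by its label (assumes an ext4 disk labelled "Disk1")
LABEL=Disk1  /media/Disk1  ext4  defaults  0  2
```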

Configure NFS Server (as root)
cp -a /etc/exports /etc/exports.backup

Then open and edit the /etc/exports file using your favourite editing tool.
Edit and comment out ANY and ALL existing lines by adding a "#" in front of the line.

Set up an NFS share for each mounted drive/folder you want to make available to the client devices.

The following will allow both v3 and v4 clients to access the same files:

# Export media to allow access for v3 and v4 clients
/media/Disk1/Data *(rw,sync,insecure,no_root_squash,no_subtree_check,nohide) 

Explanation of above chosen parameters

rw – allows read/write access; needed if you want to be able to create, delete or rename files.
sync – replies to requests only after changes have been committed to stable storage. The alternative, async, increases read/write performance but should only be used for non-critical files.
insecure – this setting allows clients (e.g. Mac OS X) to use non-reserved ports to connect to the NFS server.
no_subtree_check – improves speed and reliability by eliminating permission checks on parent directories.
nohide – makes filesystems mounted beneath the exported directory visible, so clients don't have to mount them separately.
no_root_squash – *enables* write access for the connecting device's root user on an NFS share (root is not mapped to the anonymous user).

 

Further explanations, if you so desire, can be found in the exports man page or at the following link:

http://linux.die.net/man/5/exports

Starting / Stopping / Restarting NFS
Once you have set up the NFS server, you can start / stop / restart it using the following commands as root:

# Restart NFS server
# (also needed to disconnect already-connected clients after changes have been made)
service nfs-kernel-server restart

# Stop NFS server
service nfs-kernel-server stop

# Start NFS server
service nfs-kernel-server start

Any time you make changes to /etc/exports in this scenario, it is advisable to re-export your shares using the following command as root:

exportfs -ra
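
The edit-then-re-export cycle can be sketched as below. A temporary file stands in for /etc/exports so the steps can be tried without root; on the real server you would edit /etc/exports itself and then run exportfs -ra:

```shell
# Sketch: append an export entry safely, using a temp file as a
# stand-in for /etc/exports (on the real file, run this as root).
EXPORTS=$(mktemp)
cp -a "$EXPORTS" "${EXPORTS}.backup"   # always keep a backup before editing
echo '/media/Disk1/Data *(rw,sync,insecure,no_root_squash,no_subtree_check,nohide)' >> "$EXPORTS"
grep -c '^/media/Disk1/Data ' "$EXPORTS"   # prints 1: the entry is in place
# on the real /etc/exports you would now run: exportfs -ra
```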

OK, now that we have the shares set up and accessible, we can start to use them from our full-fat Linux/Mac clients and/or configure our other "dev boards" to boot from the NFS share(s).


"Section Two" Configure client access.

 

We now need to interrogate our NFS server to see what is being exported. Use the showmount command with the server's IP address:

showmount -e "192.168.0.100"
Export list for 192.168.0.100:
/media/Disk1/Data *

In this example it shows that the path we use to mount our share is "/media/Disk1/Data" (and any folder below that is accessible to us).
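
If you want to script against this, the export paths can be pulled out of the showmount output. The sample output is inlined here so the snippet runs without a live NFS server; normally you would pipe `showmount -e 192.168.0.100` instead:

```shell
# Sketch: extract just the exported paths from showmount -e output.
# SAMPLE stands in for the live command's output.
SAMPLE='Export list for 192.168.0.100:
/media/Disk1/Data *'
printf '%s\n' "$SAMPLE" | tail -n +2 | awk '{print $1}'   # prints /media/Disk1/Data
```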

NFS Client Configuration v4 - NFSv4 clients must connect with a relative address
Use the mount command to mount a shared NFS directory from another machine by typing a command line similar to the following at a terminal prompt as the root user (where 192.168.xxx.xxx is the server's IP address):

mount -t nfs -o vers=4 192.168.xxx.xxx:/ /home/"your user name"/nfs4

The mount point directory /home/"your user name"/nfs4 must already exist, and there should be no files or subdirectories inside it.
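
Preparing the mount point can be checked like this ($HOME/nfs4 is used here as a stand-in for /home/"your user name"/nfs4):

```shell
# Create the mount point and confirm it is empty before mounting.
MNT="$HOME/nfs4"
mkdir -p "$MNT"
[ -z "$(ls -A "$MNT")" ] && echo "empty - safe to mount" || echo "not empty"
```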

 

Another way to mount an NFS share from your server is to add a line to the /etc/fstab file. The basic entry needed is as below:

192.168.xxx.xxx:/    /home/"your user name"/nfs4    nfs    auto    0    0
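
A slightly more defensive variant of that line is sketched below. nofail stops boot from hanging if the server is unreachable, and _netdev delays the mount until the network is up; both are standard mount options, but whether you want them is your call:

```
# /etc/fstab - NFSv4 mount that tolerates the server being down at boot
192.168.xxx.xxx:/    /home/"your user name"/nfs4    nfs    vers=4,auto,nofail,_netdev    0    0
```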

NFS Client Configuration v3 - NFSv3 clients must use the full address
Use the mount command to mount a shared NFS directory from another machine by typing a command line similar to the following at a terminal prompt as the root user (where 192.168.xxx.xxx is the server's IP address):

mount -t nfs -o vers=3 192.168.xxx.xxx:/media/Disk1/Data /home/"your user name"/nfs3

The mount point directory /home/"your user name"/nfs3 must already exist, and there should be no files or subdirectories inside it.

 

"Section Three" Boot from NFS share.

Please refer to this document on creating the image: https://github.com/igorpecovnik/lib/blob/master/documentation/main-05-fel-boot.md - there will be an additional HOWTO and an amendment to this HOWTO to cover this topic in more detail.


Then open the /etc/exports file using the following command:

nano /etc/exports

Minor detail - let the user pick their favourite editor; "edit the file /etc/exports" should be simple enough.

 

insecure – Does not mean the files are insecure; rather, this setting allows clients (e.g. Mac OS X) to use non-reserved ports to connect to an NFS server.

They are insecure anyway, since you don't configure authentication or access limiting  :)

 

fsid=0 - declares this to be a v4 only share

       fsid=num|root|uuid
              NFS  needs  to  be  able  to  identify  each  filesystem that it
              exports.  Normally it will use a UUID for the filesystem (if the
              filesystem  has such a thing) or the device number of the device
              holding the filesystem (if  the  filesystem  is  stored  on  the
              device).

              As  not  all  filesystems  are  stored  on  devices, and not all
              filesystems have UUIDs, it is sometimes necessary to  explicitly
              tell  NFS  how  to identify a filesystem.  This is done with the
              fsid= option.

              For NFSv4, there is a distinguished filesystem which is the root
              of all exported filesystem.  This is specified with fsid=root or
              fsid=0 both of which mean exactly the same thing.

               Other filesystems can be identified with a small integer, or a
               UUID which should contain 32 hex digits and arbitrary
               punctuation.

              Linux kernels version 2.6.20 and earlier do not  understand  the
              UUID  setting  so a small integer must be used if an fsid option
              needs to be set for such kernels.  Setting both a  small  number
              and a UUID is supported so the same configuration can be made to
              work on old and new kernels alike.

TL;DR from my experience - an explicit fsid parameter is needed when the underlying filesystem doesn't have a UUID (e.g. tmpfs).

Edit: I mean for non-fsid=0/fsid=root shares.

 

nohide - makes it visible

       nohide This option is based on the option of the same name provided  in
              IRIX  NFS.  Normally, if a server exports two filesystems one of
              which is mounted on the other, then  the  client  will  have  to
              mount  both filesystems explicitly to get access to them.  If it
              just mounts the parent, it will see an empty  directory  at  the
              place where the other filesystem is mounted.  That filesystem is
              "hidden".

              The  nohide  option  is  currently only effective on single host
              exports.  It does not work reliably with  netgroup,  subnet,  or
              wildcard exports.

no_root_squash - *enables* write access for the connecting device root use on a NFS share

   User ID Mapping
       nfsd bases its access control to files on the server machine on the uid
       and  gid  provided  in each NFS RPC request. The normal behavior a user
       would expect is that she can access her files on the server just as she
       would  on  a  normal  file system. This requires that the same uids and
       gids are used on the client and the server machine. This is not  always
       true, nor is it always desirable.

        Very often, it is not desirable that the root user on a client machine
        is also treated as root when accessing files on the NFS server. To
        this end, uid 0 is normally mapped to a different id: the so-called
        anonymous or nobody uid. This mode of operation (called `root
        squashing') is the default, and can be turned off with no_root_squash.

       By  default,  exportfs  chooses  a  uid  and  gid of 65534 for squashed
       access. These values can also be overridden by the anonuid and  anongid
       options.   Finally,  you can map all user requests to the anonymous uid
       by specifying the all_squash option.

       Here's the complete list of mapping options:

       root_squash
              Map requests from uid/gid 0 to the anonymous uid/gid. Note  that
              this  does  not  apply  to  any other uids or gids that might be
              equally sensitive, such as user bin or group staff.

        no_root_squash
               Turn off root squashing. This option is mainly useful for
               diskless clients.

Starting / Stopping / Restarting NFS

Once you have set up the NFS server, you can export the shares using the following command as root:

exportfs -ra 
       -a     Export or unexport all directories.


       -r     Reexport all directories, synchronizing  /var/lib/nfs/etab  with
              /etc/exports   and  files  under  /etc/exports.d.   This  option
              removes entries in /var/lib/nfs/etab  which  have  been  deleted
              from /etc/exports or files under /etc/exports.d, and removes any
              entries from the kernel export table which are no longer valid.
 

"exportfs -ra" is enough to apply any changes made in /etc/exports

Restarting nfs-kernel-server may be needed to apply changes made in /etc/default/nfs-kernel-server or to disconnect already connected clients

 

"Section Three" Boot from NFS share.

please refer to this document : https://github.com/i...-05-fel-boot.md

Not exactly. This is not documented well, but I meant building the image with the "ROOTFS_TYPE=nfs" option, which creates a small SD card image and a rootfs archive. Edit the boot script on the SD card, deploy and export the rootfs on a server, and you are done.


Hi Gravelrash,

 

I prefer a defined place for server shares, so I have already implemented this within my automount&share script.

So IMHO a good place would be /srv/nfs for NFS shares (/srv/smb for Samba, etc.).

 

Disadvantages: you need an additional mount within this directory, but this is quickly done with mount --bind or mount --rbind.
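
The bind-mount layout described here could be persisted in /etc/fstab along these lines (the /srv/nfs paths follow the suggestion above and are examples, not a fixed convention):

```
# /etc/fstab - bind each data disk under a single /srv/nfs export root
/media/Disk1    /srv/nfs/Disk1    none    bind    0    0
```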

 

Advantages: you are more flexible and can see what you have shared, and permissions can be set too. Additional mounts can be shared on the fly by binding them into the parent share (valid at least for NFSv4 and Samba).

 

As I already mentioned, this is how my automount&share script works, but I think you were the only one who was interested in it ;)

 

To your exports file:

 

the following will allow both v3 and v4 clients to access the same files

# Export media to allow access for v4 clients ONLY
/media/Disk1 *(rw,async,insecure,no_root_squash,no_subtree_check,fsid=0,nohide)

# Export media to allow access for v3 clients
/media/Disk1/Data *(rw,sync,insecure,no_root_squash,no_subtree_check,nohide)

 

It is correct: for NFSv4 you need a root directory, defined by fsid=0. And there can be only one. So if you have Disk2, it can't additionally be shared this way - or you mount Disk2 within Disk1, which is not the best idea ;) That's why my suggested solution is the better one.

And NFSv3 clients can access the NFSv4 server share too, so there's no need to share it twice.

 

Remark: some old NFSv3 clients have problems with NFS shares and file system boundaries (e.g. the mentioned Disk2 mounted within the mounted Disk1). These problems can sometimes be resolved by giving the correct share options (nohide etc.), a unique fsid and a separate sharing entry in the exports file. This can also be done with my script (apart from the fsid, but I have something in mind for that) ;)

 

But nevertheless, for some clients I would prefer a different sharing solution if the shares change often. So I use Samba with my mini server, Kodi and GBit LAN.

 

By the way, you could write down some advantages of NFS (lower overhead and thus more speed on slower networks, better ACL handling, better usability on Linux clients, easier sharing if you have permanent IP addresses).

 

Regards sysitos


Thanks for your feedback and comments - I will expand and modify the HOWTO with your feedback. The request for feedback is to make it more specific to the way people on here use NFS, whilst trying to keep it as simple as possible so anyone coming across this for the first time can also use it.


@zador - amended to include your comments

 

@sysitos - I will have a look at your comments and see how I can amend the above to reflect them, and also to include your automount script. This may become another HOWTO linked to this one. I like your idea and approach.


Where is the final destination for the HOWTOs?

 

Since the re-organization of the GitHub repo vs the forum, I can't find the discussion threads about the claimed tasks.

apt-get update ; apt-get upgrade ; apt-get install autofs nfs-kernel-server nfs-common --install-recomends -f -y ;

 

It is 

--install-suggests

And there is another typo - it should be:

exportfs -ra


Sorry, but --install-recomends does not work for me on Armbian.

And as I discovered, exportfs should be used instead of export.

 

Thanks for great tutorial otherwise!
