Gravelrash

Members
  • Content Count: 89
  • Joined
  • Last visited


Reputation Activity

  1. Like
    Gravelrash got a reaction from BreadLee in TFTPboot using U-Boot and WiFi Orange Pi   
  2. Like
    Gravelrash reacted to Igor in Armbian Bionic desktop on Cubieboard 1   
    Yes, Cubieboard 1. I am doing various testing at the OS level to find bugs related to the last changes, and I wanted to see how the current latest Armbian assembly works on an older, officially unsupported board which we usually don't use any more. Especially not as a desktop replacement. It's more like a joke, but let's look at how a single-core single board computer from 2012 actually performs:

    Armbianmonitor: http://ix.io/18QM
     
    Booting full featured XFCE desktop: (real-time)
     
     
    Writing and printing a document  (slow typing due to crippled keyboard)
     

    Chromium works ... but it's very laggy, barely usable.
     
    Wireless network works superbly, roughly 50% faster than wired:
     

    Boot media performance (8G C1 eMMC with SD card adaptor):
     

    Installed packages:
     

    Download:
    https://dl.armbian.com/cubieboard/nightly/
  3. Like
    Gravelrash reacted to Larry Bank in HX1230 LCD (96x68 monochrome)   
    I just received a couple of these inexpensive LCD displays ($1.82 each shipped). They are very low power and look good. I've got them working on Arduino and will now be converting the code to Linux. If anyone is interested in these, let me know. I'll edit this post to add the github link when it's ready.
     

  4. Like
    Gravelrash reacted to Soichi in [solved] Can not output HDMI to the monitor   
    The problem is solved. As it turned out, it was in a cheap HDMI cable bought in Leroy Merlin. It was enough to display the image on the TV, but the monitor did not show anything. I found a different cable, connected it and it all worked.
    I am sorry to bother you.
  5. Like
    Gravelrash got a reaction from wildcat_paris in HOWTO : NFS Server and Client Configuration   
    We will be configuring in a mode that allows both NFSv3 and NFSv4 clients to connect and access the same share.
     
    Q. What are the benefits of an NFS share over a SAMBA share?
    A. This would take an article in itself! But as brief reasoning: in a closed network (where you know every device), NFS is a fine choice. With a good network, throughput is disgustingly fast and at the same time less CPU intensive on the server. There are a lot of applications that support NFS "out of the box", KODI being a common example. It's very simple to set up, and you can toggle read-only on shares you don't need to be writeable. It only gets complicated if you want to start layering on security through LDAP/gssd. It's capable of very complex and complete security mechanisms... but you don't need them in this scenario.
     
    Q.   Why are we doing this and not limiting to NFSv4?
    A.   Flexibility: this will allow almost any device you have that supports reading and mounting NFS shares to connect, and will also allow the share to be used to boot your clients from, which allows fast testing of new kernels etc.
     
    Q. Why would I want to do this?
    A. You can boot your dev boards from the NFS share to allow you to test new kernels quickly and simply
    A. You can map your shared locations in Multimedia clients quickly and easily.
    A. Your friends like to struggle with SAMBA so you can stay native and up your "geek cred"
     
    This HOWTO will be split into distinct areas:
         "Section One" Install and Configure the server
         "Section Two" Configure client access.
         "Section Three" Boot from NFS share. (a separate document in its own right that will be constructed shortly).
     
     
    "Section One" Install and Configure the server

    Install NFS Server and Client

        apt-get update
        apt-get upgrade
        apt-get install autofs nfs-kernel-server nfs-common --install-recommends -f -y

    Now reboot:

        sync ; reboot

    I will be making the following assumptions (just amend to suit your environment):
    1. You already have a working system!
    2. Your media to be shared is mounted via fstab by its label, in this case Disk1
    3. The mounted disk resides in the following folder /media/Disk1
    4. This mount has a folder inside called Data, i.e. /media/Disk1/Data
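    For assumption 2, a label-based /etc/fstab entry might look like the line below. This is a sketch only – the filesystem type (ext4 here) and mount options are assumptions you should adapt to your disk:

```
LABEL=Disk1  /media/Disk1  ext4  defaults  0  2
```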

    Configure NFS Server (as root)
    cp -a /etc/exports /etc/exports.backup
    Then open and edit the /etc/exports file using your favourite editing tool:
    Edit and comment out ANY and ALL existing lines by adding a "#" in front of the line.
    Set up an NFS share for each mounted drive/folder you want to make available to the client devices.
    The following will allow both v3 and v4 clients to access the same files:
        # Export media to allow access for v3 and v4 clients
        /media/Disk1/Data *(rw,sync,insecure,no_root_squash,no_subtree_check,nohide)

    Explanation of the chosen parameters:
    rw – allows read/write, needed if you want to be able to delete or rename files
    sync – replies to requests only after changes have been committed to stable storage; safer, at some cost in write performance (async is faster but should only be used for non-critical files)
    insecure – allows clients (e.g. Mac OS X) to use non-reserved ports to connect to the NFS server
    no_subtree_check – improves speed and reliability by eliminating permission checks on parent directories
    nohide – makes filesystems mounted below the exported directory visible to clients
    no_root_squash – *enables* root access on the NFS share for the connecting device's root user
     
    Further explanations if you so desire can be found in the man page for NFS or from the following link
    http://linux.die.net/man/5/exports

    Starting / Stopping / Restarting NFS
    Once you have set up the NFS server, you can start / stop / restart it using the following commands as root:

        # Stop NFS server
        service nfs-kernel-server stop

        # Start NFS server
        service nfs-kernel-server start

        # Restart NFS server
        # needed to disconnect already connected clients after changes have been made
        service nfs-kernel-server restart

    Any time you make changes to /etc/exports it is advisable to re-export your shares using the following command as root:

        exportfs -ra

    OK, now we have the shares set up and accessible, we can start to use this from our full-fat Linux/Mac clients and/or configure our other "dev boards" to boot from the NFS share(s).


    "Section Two" Configure client access.
     
    We now need to interrogate our NFS server to see what is being exported; use the showmount command with the server's IP address:

        showmount -e 192.168.0.100
        Export list for 192.168.0.100:
        /media/Disk1/Data *

    In this example it shows that the path we use to mount the share is "/media/Disk1/Data" (and any folder below that is accessible to us).
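    If you want to check the export list from a script, the header line of showmount output can simply be stripped off. A minimal sketch, using a hypothetical captured output (on a live system you would use exports="$(showmount -e 192.168.0.100)"):

```shell
# Hypothetical showmount -e output; replace with the real command's output
exports='Export list for 192.168.0.100:
/media/Disk1/Data *'

# Drop the "Export list for ..." header and print just the exported paths
echo "$exports" | tail -n +2 | awk '{print $1}'
```

    The first field of each remaining line is the exported path; the second is the list of clients allowed to mount it.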
    NFS Client Configuration v4 – NFSv4 clients must connect with an address relative to the NFSv4 root
    Use the mount command to mount a shared NFS directory from another machine, by typing a command line similar to the following at a terminal prompt as the root user (where 192.168.xxx.xxx is the server IP address):

        mount -t nfs -o vers=4 192.168.xxx.xxx:/ /home/<your user name>/nfs4

    The mount point directory /home/<your user name>/nfs4 must already exist, and there should be no files or subdirectories in it.
     
    Another way to mount an NFS share from your server is to add a line to the /etc/fstab file. The basic entry needed is as below:

        192.168.xxx.xxx:/    /home/<your user name>/nfs4    nfs    auto    0    0

    NFS Client Configuration v3 – NFSv3 clients must use the full exported path
    Use the mount command to mount a shared NFS directory from another machine, by typing a command line similar to the following at a terminal prompt as the root user (where 192.168.xxx.xxx is the server IP address):

        mount -t nfs -o vers=3 192.168.xxx.xxx:/media/Disk1/Data /home/<your user name>/nfs3

    The mount point directory /home/<your user name>/nfs3 must already exist, and there should be no files or subdirectories in it.
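    For completeness, the equivalent /etc/fstab entry for this NFSv3 mount (a sketch using the same assumed server address and paths) could be:

```
192.168.xxx.xxx:/media/Disk1/Data    /home/<your user name>/nfs3    nfs    vers=3,auto    0    0
```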
     
    "Section Three" Boot from NFS share.
    Please refer to this document on creating the image: https://github.com/igorpecovnik/lib/blob/master/documentation/main-05-fel-boot.md – there will be an additional HOWTO and an amendment to this HOWTO to cover this topic in more detail.
  6. Like
    Gravelrash got a reaction from wildcat_paris in HOWTO : NFS Server and Client Configuration   
  7. Like
    Gravelrash got a reaction from wildcat_paris in who is TJoe? A soft voice & much more...   
    I have listened to the 'cast. Promises to be a good series, and a decent interviewer to boot.
  8. Like
    Gravelrash reacted to @lex in NanoPi M2 - Any interest?   
    Hi @Gravelrash,
     
    FYI, I received the board today; everything is OK and I hope to come up with something useful for the platform.
     
    Thank You.
  9. Like
    Gravelrash got a reaction from @lex in [399] - Prepare HOWTOs and package armbian-gc2035-fswebcam package   
    Task: armbian-gc2035-fswebcam package script posted on GitHub and merge request issued.
     
    https://github.com/gravelrash/lib/blob/master/extras/fswebcam.sh
     
    Pending acceptance / rejection / comments.
  10. Like
    Gravelrash got a reaction from @lex in [399] - Prepare HOWTOs and package armbian-gc2035-fswebcam package   
  11. Like
    Gravelrash got a reaction from @lex in [399] - Prepare HOWTOs and package armbian-gc2035-fswebcam package   
  12. Like
    Gravelrash reacted to @lex in Guvcview for Pine64+   
    I just got the Guvcview working on the Pine64+ with a 5MP camera.
     
    Have a look here: https://drive.google.com/open?id=0B7A7OPBC-aN7cFlBeEpnM3FIeE0
     
    Some considerations:
     
    640x480 -> 30 FPS
    1280x720 -> 18 FPS
     
     
     
  13. Like
    Gravelrash got a reaction from wildcat_paris in NanoPi M2 - Any interest?   
    Best I can find is this – should keep me out of mischief when the weather turns nasty:
     
    http://wiki.friendlyarm.com/wiki/index.php/NanoPi_M2
     
    I will load it up, stick a mahooooosive heatsink on it with a fan, and use it as my apt-cacher-ng unit.
  14. Like
    Gravelrash got a reaction from tkaiser in NanoPi M2 - Any interest?   
    donating to @lex
  15. Like
    Gravelrash got a reaction from wildcat_paris in NanoPi M2 - Any interest?   
    @wildcat_paris : 
    That's probably why I was given it... It can go and live on the shelf along with the Remix Mini...
  16. Like
    Gravelrash got a reaction from tkaiser in NanoPi M2 - Any interest?   
  17. Like
    Gravelrash reacted to jernej in H3 kernel repo   
    Thanks for at least some encouragement. While you (tkaiser) are absolutely focused on headless systems, others might be focused on desktop/graphics, so at least for them this will be interesting for a while. I will join you in maintaining the desktop version shortly. If I want to make Kodi work with libvdpau-sunxi, Armbian is a much nicer platform to develop on, and I can help you with squashing some bugs in the meantime. Currently I'm a bit short on time, but I will contribute for sure.
  18. Like
    Gravelrash reacted to tkaiser in Increased memory consumption   
    Quite the opposite – the link explains pretty much what's wrong when you're talking about a 'memory problem'. On a Linux server you have a 'memory problem' if the amount of free memory exceeds a few MB, since that means the kernel sucks at using all available RAM for things like caches/buffers. Having 'free memory' should only happen directly after a reboot.
     
    We're not talking about Windows 3.11 or MacOS 7 here -- we're talking about Unix/Linux, which uses a virtual memory implementation that always tries to use as much physically available DRAM as possible. Having 'free memory' after days of usage is almost an indication of a failure (of the VM implementation in question -- since Armbian's 'legacy kernel' for H3 devices is in reality a smelly Android kernel from Allwinner, such failures might happen).
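    The distinction can be seen directly in /proc/meminfo; a minimal sketch (note that MemAvailable requires kernel 3.14+, so it may be absent on the legacy Allwinner kernel mentioned above):

```shell
# MemFree counts only completely unused RAM; Cached pages are reclaimable.
# A healthy long-running Linux box shows low MemFree but high MemAvailable.
grep -E '^(MemTotal|MemFree|MemAvailable|Cached):' /proc/meminfo
```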
  19. Like
    Gravelrash got a reaction from wildcat_paris in Raspberry Vs Orange Pi   
    I made my own cable from an old USB cable and a power connector of the correct size.
     
    I can confirm that:
    a. with a good quality 2A PSU I get no power or stability issues
    b. with a crappy 2A PSU it's unpredictable
    c. with a good quality 1A PSU it works fine
  20. Like
    Gravelrash reacted to tkaiser in [Documentation] software proposal for Armbian wiki   
    So you're talking about the order of horribly outdated information instead of the information itself. (I thought we were talking about improving documentation? That means that every one of these sentences has to be either deleted or reworked, since this is outdated stuff from back in the day when Igor supported 5 boards – now we're dealing with 40+.)
     
    Then what do you suggest? Anyone should change whatever comes to their mind right now? Don't we have a wiki now? Shouldn't we talk about the processes for altering the wiki's contents (who is allowed to do what? how can we get people not allowed to edit directly to contribute?) instead of polluting this thread with random nonsense like "someone should add '&& sync' somewhere"?
     
    I stop here.
  21. Like
    Gravelrash got a reaction from wildcat_paris in Raspberry Vs Orange Pi   
  22. Like
    Gravelrash got a reaction from wildcat_paris in Claim a task, set DUE date and start doing it!   
    Just a note to confirm I received my board yesterday morning.
  23. Like
    Gravelrash reacted to sysitos in HOWTO : NFS Server and Client Configuration   
    Hi Gravelrash,
     
    I prefer a defined place for server shares, which is what I have already done within my automount & share script.
    So IMHO a good place would be /srv/nfs for NFS shares (/srv/smb for Samba, etc.)
     
    Disadvantages: you need an additional mount within this directory, but this is quickly done with mount --bind or mount --rbind.
     
    Advantages: you are more flexible and can see what you have shared, and permissions can be set too. Additional mounts can be shared on the fly by binding them into the parent share (valid at least for NFSv4 and Samba).
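    The /srv/nfs layout described above could be sketched as follows (all paths illustrative, reusing the HOWTO's assumed /media/Disk1/Data):

```
# /etc/fstab – bind the media folder into a dedicated export tree
/media/Disk1/Data   /srv/nfs/Data   none   bind   0   0

# /etc/exports – export the tree, with /srv/nfs as the NFSv4 root (fsid=0)
/srv/nfs        *(rw,sync,insecure,no_root_squash,no_subtree_check,fsid=0)
/srv/nfs/Data   *(rw,sync,insecure,no_root_squash,no_subtree_check,nohide)
```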
     
    As I already mentioned, this is how my automount & share script works, but I think you were the only one who was interested in it.
     
    On your exports file:
     
    It is correct that for NFSv4 you need a root directory, defined by fsid=0 – and there can be only one. So if you have a Disk2, it can't additionally be shared in this way, unless you mount Disk2 within Disk1 – not the best idea. That's why my suggested solution is the better one.
    And NFSv3 clients can access the NFSv4 server share too, so there is no need to share it twice.
     
    Remark: some old NFSv3 clients have problems with NFS shares and filesystem boundaries (e.g. the mentioned Disk2 mounted within the mounted Disk1). These problems can sometimes be resolved by giving the correct share options (nohide etc.), a unique fsid, and a separate sharing entry in the exports file. This can also be done with my script (apart from the fsid, though there is something in the back of my mind about that).
     
    But nevertheless, for some clients I would prefer a different sharing solution if the shares change often. So I use Samba with my mini server, Kodi and GBit LAN.
     
    By the way, you could write down some advantages of NFS (lower overhead and so more speed on slower networks, better ACL handling, more usable on Linux clients, easier sharing if you have permanent IP addresses).
     
    Regards sysitos
  24. Like
    Gravelrash reacted to zador.blood.stained in HOWTO : NFS Server and Client Configuration   
    Minor detail – let users pick their favorite editor; "edit the file /etc/exports" should be simple enough.
     
    They are insecure anyway since you don't configure authentication or access limiting 
     
    fsid=num|root|uuid
    NFS needs to be able to identify each filesystem that it exports. Normally it will use a UUID for the filesystem (if the filesystem has such a thing) or the device number of the device holding the filesystem (if the filesystem is stored on the device). As not all filesystems are stored on devices, and not all filesystems have UUIDs, it is sometimes necessary to explicitly tell NFS how to identify a filesystem. This is done with the fsid= option. For NFSv4, there is a distinguished filesystem which is the root of all exported filesystems. This is specified with fsid=root or fsid=0, both of which mean exactly the same thing. Other filesystems can be identified with a small integer, or a UUID which should contain 32 hex digits and arbitrary punctuation. Linux kernels version 2.6.20 and earlier do not understand the UUID setting, so a small integer must be used if an fsid option needs to be set for such kernels. Setting both a small number and a UUID is supported so the same configuration can be made to work on old and new kernels alike.
     
    TL;DR from my experience – an explicit fsid parameter is needed when the underlying filesystem doesn't support UUIDs (i.e. tmpfs).
    Edit: I mean for non fsid=0/fsid=root shares.
     
    nohide
    This option is based on the option of the same name provided in IRIX NFS. Normally, if a server exports two filesystems one of which is mounted on the other, then the client will have to mount both filesystems explicitly to get access to them. If it just mounts the parent, it will see an empty directory at the place where the other filesystem is mounted. That filesystem is "hidden". The nohide option is currently only effective on single host exports. It does not work reliably with netgroup, subnet, or wildcard exports.
     
    User ID Mapping
    nfsd bases its access control to files on the server machine on the uid and gid provided in each NFS RPC request. The normal behavior a user would expect is that she can access her files on the server just as she would on a normal file system. This requires that the same uids and gids are used on the client and the server machine. This is not always true, nor is it always desirable. Very often, it is not desirable that the root user on a client machine is also treated as root when accessing files on the NFS server. To this end, uid 0 is normally mapped to a different id: the so-called anonymous or nobody uid. This mode of operation (called 'root squashing') is the default, and can be turned off with no_root_squash. By default, exportfs chooses a uid and gid of 65534 for squashed access. These values can also be overridden by the anonuid and anongid options. Finally, you can map all user requests to the anonymous uid by specifying the all_squash option. Here's the complete list of mapping options:
    root_squash – Map requests from uid/gid 0 to the anonymous uid/gid. Note that this does not apply to any other uids or gids that might be equally sensitive, such as user bin or group staff.
    no_root_squash – Turn off root squashing. This option is mainly useful for diskless clients.
     
    From the exportfs man page:
    -a  Export or unexport all directories.
    -r  Reexport all directories, synchronizing /var/lib/nfs/etab with /etc/exports and files under /etc/exports.d. This option removes entries in /var/lib/nfs/etab which have been deleted from /etc/exports or files under /etc/exports.d, and removes any entries from the kernel export table which are no longer valid.
     
    "exportfs -ra" is enough to apply any changes made in /etc/exports.
    Restarting nfs-kernel-server may be needed to apply changes made in /etc/default/nfs-kernel-server or to disconnect already connected clients
     
    Not exactly. This is not documented well, but I meant building an image with the "ROOTFS_TYPE=nfs" option, which creates a small SD card image and a rootfs archive. Edit the boot script on the SD card, deploy and export the rootfs on a server, and you are done.
  25. Like
    Gravelrash got a reaction from wildcat_paris in HOWTO : NFS Server and Client Configuration   