lomady

Members
  • Content Count

    25
  • Joined

  • Last visited


Reputation Activity

  1. Like
    lomady reacted to Igor in Summer update. Buster for all boards
    v5.90 / 7.7.2019
    All images were updated. This is mainly a bugfix update: cleanup plus the addition of the Debian Buster release. Beware that software maturity with Buster is not quite there yet (even though it was declared stable), and with some applications (like OMV) you will encounter problems. Kernel/BSP-wise, Debian Buster uses the Armbian kernel, the same as our other builds. Most of the builds were briefly tested, but bugs might be hiding somewhere. This is the best of what is out there, thanks to the greater Debian community, the communities around the boards, and of course ours, which pushes generic Debian/Linux to the sky. Enjoy the summer time.
     
    What's new? -> https://docs.armbian.com/Release_Changelog/
  2. Like
    lomady reacted to Werner in Obtaining Chip   
    Which boards?
  3. Like
    lomady reacted to ImmortanJoe in h6 computer fast enough to be a notebook?   
    As always, the question is what you want to do with it.
     
    I'm typing this on a Pinebook which is fast enough for what I use it for. My Orange Pi Zero is fast enough for file and network services that I use it for. My RPi Zero is fast enough for running RISC OS. It's all down to your use case.
  4. Like
    lomady reacted to Tido in Orangepi 3 h6 allwiner chip   
    Not Armbian's fault, but Linux's.
    The implementation of the PCI Express controller of the H6 is broken; you find more information here and here.
    (the above text is from the Pine H64 page: https://www.armbian.com/pine-h64/)
     
    Well, at least it is powered via barrel jack == one problem less 
  5. Like
    lomady reacted to turkerali in SOLVED: How to add an external RTC module to ROCK64 board   
    ROCK64 is an RK3328 quad-core ARM Cortex-A53 board with up to 4 GB of RAM. Unfortunately it has a non-functional RTC (/dev/rtc0), which is not battery-backed. If your board is not connected to the internet 24/7, you cannot rely on NTP or systemd-timesyncd. Therefore you need to connect a battery-backed external RTC module to the ROCK64 header and read the time via the I2C protocol. The good news is that the GPIO headers on the ROCK64 are compatible with Raspberry Pi headers, so almost any Raspberry Pi RTC module will work on the ROCK64. But you need to do some tweaking to communicate with the RTC module. I will explain the steps which worked for me.
     
    1. Buy an external RTC module for Raspberry Pi, preferably with DS1307 chip. Here is an example:
    https://www.abelectronics.co.uk/p/70/rtc-pi
     
    2. Install i2c-tools package:
    sudo apt-get install i2c-tools  
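    Once installed, you can list the I2C buses the kernel currently exposes (standard i2c-tools usage):
    sudo i2cdetect -l
    At this point i2c-0 will typically not be listed yet; step 4 explains why.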
    3. Connect the RTC module to the board as in the attached picture.
     
    4. Right now you cannot probe the RTC module, because most of the GPIO functions on the ROCK64 board are disabled by default. (See https://github.com/ayufan-rock64/linux-build/blob/master/recipes/additional-devices.md for details.)
    Therefore, if you try to probe i2c-0, you will get the error below:
    root@rock64:~# sudo i2cdetect -y 0
    Error: Could not open file `/dev/i2c-0' or `/dev/i2c/0': No such file or directory
    Ayufan wrote a nice script named "enable_dtoverlay" to enable the GPIO functions on the fly. Here is the link: https://raw.githubusercontent.com/ayufan-rock64/linux-package/master/root/usr/local/sbin/enable_dtoverlay
    Download this script, copy it under /bin and make it executable (a sketch of these steps follows the probe output below). We will enable "i2c-0" with this script, which we need for the RTC module. The command to enable i2c-0 is as follows:
    /bin/enable_dtoverlay i2c0 i2c@ff150000 okay
    After you run this command, /dev/i2c-0 becomes available, and we can probe it via the command below:
     
    root@rock64:~# sudo i2cdetect -y 0
         0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
    00:          -- -- -- -- -- -- -- -- -- -- -- -- --
    10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
    20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
    30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
    40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
    50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
    60: -- -- -- -- -- -- -- -- 68 -- -- -- -- -- -- --
    70: -- -- -- -- -- -- -- --
    If you see the number 68, you are on the right track.
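    As mentioned in step 4, installing the enable_dtoverlay script can be done like this (a minimal sketch; assumes wget is available):
    wget -O /bin/enable_dtoverlay https://raw.githubusercontent.com/ayufan-rock64/linux-package/master/root/usr/local/sbin/enable_dtoverlay
    chmod +x /bin/enable_dtoverlay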
     
    5. Now we need the kernel driver for the DS1307. Unfortunately the DS1307 driver is not enabled in the Armbian kernel (see below):
    root@rock64:~# cat /boot/config-4.4.162-rockchip64 | grep DS1307
    # CONFIG_RTC_DRV_DS1307 is not set
    So we need to recompile the Armbian kernel with this option enabled. I will not cover the steps to compile the kernel; you can find them here: https://docs.armbian.com/Developer-Guide_Build-Preparation/
    Let's hope @Igor or another Armbian developer will enable this module by default and save us from this burden.
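    If you are working from a plain kernel source tree, the option can be flipped before compiling (a sketch; scripts/config ships with the kernel sources):
    # enable the DS1307 RTC driver as a module
    ./scripts/config --file .config --module RTC_DRV_DS1307
    make olddefconfig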
     
    After we compile the kernel with the DS1307 driver, we are almost done. We need to tell the kernel that a DS1307 chip is sitting on i2c-0. Here is the way to do that:
    echo ds1307 0x68 > /sys/class/i2c-adapter/i2c-0/new_device
    After we execute this command, /dev/rtc1 becomes available and points to our external RTC module. Please note that /dev/rtc0 is the onboard RTC, which does not work and should be avoided.
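    The kernel logs the new device, so a quick way to confirm (standard commands):
    dmesg | grep -i rtc   # look for the ds1307 registration message
    ls -l /dev/rtc*       # /dev/rtc1 should now exist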
     
    We need to write the system date/time to the module for the first time. Here is the command to do that:
    hwclock --rtc /dev/rtc1 --systohc
    Now our external RTC clock is set and ready to take over from NTP.
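    You can read the module back to verify it keeps time (standard hwclock usage):
    hwclock --rtc /dev/rtc1 --show   # read the external RTC
    date                             # compare with the system time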
     
    6. Edit /lib/udev/hwclock-set. Find the lines below and update them as shown:
    if [ -e /run/systemd/system ] ; then
        # exit 0
        /bin/enable_dtoverlay i2c0 i2c@ff150000 okay
        echo ds1307 0x68 > /sys/class/i2c-adapter/i2c-0/new_device
    fi
    7. Edit /etc/rc.local and add the following line to set the system clock on every boot:

    /lib/udev/hwclock-set /dev/rtc1
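    For clarity, /etc/rc.local would then look something like this (a sketch; on Debian-based images exit 0 must remain the last line):
    #!/bin/sh -e
    # set the system clock from the external RTC on every boot
    /lib/udev/hwclock-set /dev/rtc1
    exit 0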
    8. Disable the services below, as we don't need them anymore:
    systemctl stop systemd-timesyncd.service
    systemctl disable systemd-timesyncd.service
    systemctl stop fake-hwclock.service
    systemctl disable fake-hwclock.service
    9. Reboot and enjoy your RTC module.

  6. Like
    lomady reacted to maxbl4 in H5 Orange Pi Prime WiFi Ap regression   
    Here is the output from the main console.
    I got this with the latest Ubuntu-based build.
    The errors appear once I try to connect to the AP.

  7. Like
    lomady reacted to maxbl4 in H5 Orange Pi Prime WiFi Ap regression   
    Downloaded the 5.59 stretch image from the archive. Clean install, no updates. AP works fine, no problems, no errors on the console.
  8. Like
    lomady reacted to Lope in Change default Kernel options: TC (traffic control (shaping)) options available as modules/enabled   
    These boards are well suited to routing/network traffic control functionality.

    Currently the kernel has been compiled with the Traffic Control options disabled, which makes it impossible to perform traffic shaping on the stock Armbian kernel.

    I'm busy recompiling the kernel as a temporary solution, but I would really appreciate it if the kernel build maintainer could change the traffic control options from disabled to module (at least).
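    A quick way to check what is currently enabled on a running board (Armbian ships the kernel config under /boot):
    grep NET_SCH /boot/config-$(uname -r)
    # or, if the running kernel exposes its config:
    zcat /proc/config.gz | grep NET_SCH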
     
    I have a bunch of Armbian boards but I'm testing with NanoPi Neo 2 right now.
     
    ---
     
    I'll let you know the size of the modules once I've finished compiling.
  9. Like
    lomady reacted to Werner in Change default Kernel options: TC (traffic control (shaping)) options available as modules/enabled   
    You can send a PR with your changes to the kernel config here: https://github.com/armbian/build
  10. Like
    lomady reacted to tkaiser in Orange Pi Zero NAS Expansion Board with SATA & mSATA   
    Little update on the NAS Expansion Board: the first time I had this thing in my hands and gave it a try with mainline kernel, I was surprised that the JMS578 chips on the board did not allow UAS to be used. It always looked like this (lsusb -t):
    /: Bus 04.Port 1: Dev 1, Class=root_hub, Driver=ehci-platform/1p, 480M
        |__ Port 1: Dev 2, If 0, Class=Mass Storage, Driver=usb-storage, 480M
    With no other JMS578 device around, that was all I had to go on. Since I'm currently testing with an Orange Pi Zero Plus (still no UAS), and JMicron has in the meantime provided us with a firmware update tool, I thought I'd take the slightly newer JMS578 firmware revision from the ROCK64 SATA cable (0.4.0.5) and update the NAS Expansion Board (0.4.0.4) with it:
    root@orangepizeroplus:~/JMS578FwUpdater# ./JMS578FwUpdate -d /dev/sda -v
    Bridge Firmware Version: v0.4.0.4
    root@orangepizeroplus:~/JMS578FwUpdater# ./JMS578FwUpdate -d /dev/sdb -v
    Bridge Firmware Version: v0.4.0.5
    root@orangepizeroplus:~/JMS578FwUpdater# ./JMS578FwUpdate -d /dev/sdb -b ./JMSv0.4.0.5.bin
    Backup Firmware file name: ./JMSv0.4.0.5.bin
    Backup the ROM code sucessfully.
    Open File Error!!
    root@orangepizeroplus:~/JMS578FwUpdater# ./JMS578FwUpdate -d /dev/sda -f ./JMSv0.4.0.5.bin -b ./JMSv0.4.0.4.bin
    Update Firmware file name: ./JMSv0.4.0.5.bin
    Backup Firmware file name: ./JMSv0.4.0.4.bin
    Backup the ROM code sucessfully.
    Programming & Compare Success!!
    Success!
    /: Bus 04.Port 1: Dev 1, Class=root_hub, Driver=ehci-platform/1p, 480M
        |__ Port 1: Dev 2, If 0, Class=Mass Storage, Driver=uas, 480M
    Please keep in mind that you need to update both JMS578 chips, of course. I won't upload the newer firmware yet since, thanks to Pine's TL Lim, I'm in direct contact with JMicron now, and it's about fixing another JMS578 firmware issue.
  11. Like
    lomady reacted to Igor in Next LTS kernel 4.19.y Allwinner A10, A20, A64, H2+, H3, H5, H6 debugging party   
    ATF for building H6 boards is broken and building stops. Somebody needs to fix it.
  12. Like
    lomady reacted to Igor in Next LTS kernel 4.19.y Allwinner A10, A20, A64, H2+, H3, H5, H6 debugging party   
    Mine hung with current stock settings already at first boot ... then I tried the orangepipc2 DT and ... it looks fine. I noticed the DT is missing regulator bits, and after adding them ... at least that problem is gone. I did several reboots and cold starts. At one cold start it hung ... Not sure if everything is fine yet.
     
    A possibly related issue is wrong temperature readings. Could it be that thermal protection kicks in? When idle it already shows 50° (my IR meter shows 33°), and last time when I did stress testing it reached the critical temperature of 92°, which triggered a controlled power-off. This should not happen this soon.
  13. Like
    lomady got a reaction from Igor in Next LTS kernel 4.19.y Allwinner A10, A20, A64, H2+, H3, H5, H6 debugging party   
    I just wanted to make a clean shutdown before pulling the power plug.
    The problem is that the Prime does not boot after I plug the power back in. As I said, it freezes at various stages, as seen in the boot log on the monitor connected via HDMI. I even saw the message "Unable to handle kernel paging request at address xxxxx" once.
  14. Like
    lomady reacted to Alastair in Bionic missing for Orange Pi Prime   
    Hi folks,
     
    A few months back, I set up an Orange Pi Prime on Armbian Bionic (Armbian_5.52.180715_Orangepiprime_Ubuntu_bionic_dev_4.18.0-rc4.7z). I'm now putting together a home automation workshop around this configuration, but it looks like it's now missing from the website (https://www.armbian.com/orange-pi-prime/) and download site (https://dl.armbian.com/orangepiprime/).
     
    Is Bionic no longer supported for this board, or was this an oversight?
     
    Is there something I can do to help?
  15. Like
    lomady got a reaction from Tido in Kickstarter: Allwinner VPU support in the official Linux kernel   
    Here is a great talk about this
     
  16. Like
    lomady reacted to ayufan in One (bsp) Kernel for RK3399/RK3328 and RK3288
    I posted this comment, which describes my kernel a little:
     
  17. Like
    lomady reacted to Igor in opi prime hangs   
    Please switch to nightly (armbian-config), update, and repeat the tests. You should be running 4.17.4.
  18. Like
    lomady got a reaction from Igor in opi prime hangs   
    Okay, my stress test has now run stable for several hours with the 4.17.4 kernel.
    Thumbs up!
  19. Like
    lomady got a reaction from hrip6 in No network after upgrade to 5.30   
    FYI, headless legacy-kernel Orange Pi Plus 2E xenial and jessie updated fine.
    Thank you, developers!
  20. Like
    lomady reacted to tkaiser in Some storage benchmarks on SBCs   
    Since I've seen some really weird disk/IO benchmarks made on SBCs in the last days, and both a new SBC and a new SSD arrived in the meantime, I thought I'd give it a try with a slightly better test setup.
    I tested with 4 different SoCs/SBCs: NanoPi M3 with an octa-core S5P6818 Samsung/Nexell SoC, ODROID-C2 featuring a quad-core Amlogic S905, Orange Pi PC with a quad-core Allwinner H3, and an old Banana Pi Pro with a dual-core A20. The device considered the slowest (dual-core A20 with just 960 MHz clockspeed) is in reality the fastest when it's about disk IO.

    Since most if not all storage 'benchmarks' for SBCs moronically focus on sequential transfer speeds only and completely forget that random IO is way more important on any SBC (it's not a digital camera or video recorder!), I tested that as well. Since it's also somewhat moronic to choose a disk that is itself the limit when you want to test the storage implementation of a computer, the main test device is a brand new Samsung SSD 750 EVO 120GB, which I tested first on a PC to verify the SSD is OK and to get a baseline of what to expect.

    Since NanoPi M3, ODROID-C2 and Orange Pi PC only feature USB 2.0, I tested with 2 different USB enclosures that are known to be USB Attached SCSI (UAS) capable. The nice thing with UAS is that while it's an optional USB feature that arrived together with USB 3.0, we can use it with more recent sunxi SoCs even when running mainline kernel (A20, H3, A64 -- all only USB 2.0 capable).

    When clicking on the link you can also see how different USB enclosures (to be more precise: the included USB-to-SATA bridges) perform. Keep that in mind when you see 'disk performance' numbers somewhere and people write SBC A would be 2MB/s faster than SBC B -- the variation in numbers is caused not only by the SBC but for sure also by both the disk used and the enclosure / USB-SATA bridge inside! The same applies to the kernel the SBC is running. So never trust any numbers you find on the internet that are the results of tests made at different times, with different disks or different enclosures. The numbers presented are just BS.

    The two enclosures I tested with are equipped with a JMicron JMS567 and an ASMedia ASM1153. With sunxi SBCs running mainline kernel UAS will be used; with other SoCs/SBCs, or when running legacy kernels, it will be USB Mass Storage instead. Banana Pi Pro is an exception since its SoC features true SATA (with limited sequential write speeds), which will outperform every USB implementation. With this device I also used a rather fast SD card, plus a normal HDD connected to USB through a non-UASP-capable disk enclosure, to show how badly that affects the important performance factors (again: random IO!).

    I used iozone with 3 different runs:
    1. 1 MB test size with 1k, 2k and 4k record sizes
    2. 100 MB test size with 4k, 16k, 512k, 1024k and 16384k (16 MB) record sizes
    3. 4 GB test size with 4k and 1024k record sizes
    The variation in results is interesting. If 4K results between 1 MB and 100 MB test size differ, you know that your benchmark is not testing disk throughput but instead the (pretty small) disk cache. Using 4 GB for sequential transfer speeds ensures that the whole amount of data exceeds DRAM size.

    The results:

    NanoPi M3 @ 1400 MHz / 3.4.39-s5p6818 / jessie / USB Mass Storage:
    Sequential transfer speeds with USB: 30MB/s with 1MB record size and just 7.5MB/s at 4K/100MB; lowest random IO numbers of all. All USB ports are behind a USB hub, and it's already known that performance on the USB OTG port is higher. Unfortunately my SSD in both enclosures prevented negotiating a USB connection on the OTG port, since each time I connected the SSD the following happened: WARN::dwc_otg_hcd_hub_control:2544: Overcurrent change detected

    ODROID-C2 @ 1536 MHz / 3.14.74-odroidc2 / xenial / USB Mass Storage:
    Sequential transfer speeds with USB: ~39MB/s with 1MB record size and ~10.5MB/s at 4K/100MB. All USB ports are behind a USB hub, and the performance numbers look like there's always some buffering involved (not a true disk test, but one with the kernel's caches partially involved).

    Orange Pi PC @ 1296 MHz / 4.7.2-sun8i / xenial / UAS:
    Sequential transfer speeds with USB: ~40MB/s with 1MB record size and ~9MB/s at 4K/100MB; best random IO with very small files. All USB ports are independent (just like on Orange Pi Plus 2E, where identical results will be achieved since it's the same SoC and same settings when running Armbian).

    Banana Pi Pro @ 960 MHz / 4.6.3-sunxi / xenial / SATA-SSD vs. USB-HDD:
    This test setup is totally different since the SSD is connected through SATA, and I use a normal HDD in a UAS-incapable disk enclosure to show how huge the performance differences are.
    SATA sequential transfer speeds are unbalanced for still unknown reasons: write/read ~40/170MB/s with 1MB record size and 16/44MB/s at 4K/100MB (huge compared to all the USB numbers above!). Best random IO numbers of all (magnitudes faster, since none of the USB-to-SATA bottlenecks present with every USB disk apply).
    The HDD test shows the worst numbers: just 29MB/s sequential speed at 1MB record size and only ~5MB/s at 4K/100MB. Also, the huge difference between the 1MB and 100MB tests at 4K record size clearly shows that with 1MB test size only the HDD's internal DRAM cache has been tested (no disk involved): this was not a disk test but a disk cache test only.

    Lessons to learn?
    1. HDDs are slow. So slow that they are the bottleneck and invalidate every performance test when you want to test the performance of the host (the SBC in question).
    2. With HDDs, data size matters, since you get different results depending on whether the benchmark runs inside the HDD's internal caches or not. SSDs behave differently here: they do not contain ultra-slow rotating platters, and their different types of internal storage (DRAM cache and flash) do not perform that differently.
    3. When you have both USB and SATA, not using the latter is almost always simply stupid (even if sequential write performance looks identical, sequential read speeds are way higher, and random IO will always be superior, which is more important).
    4. It always depends on the use case in question. Imagine you want to set up a lightweight web server dealing with static content on an SBC that features only USB. Most of the accessed files are rather small, especially when you configure your web server to deliver all content already pre-compressed. So if you compare random reads with 4k and 16k record sizes at 100MB data size, you'll notice that a good SD card performs magnitudes faster! For small files (4k) it's ~110 IOPS (447 KB/s) vs. 1950 IOPS (7812 KB/s), so the SD card is ~18 times faster; at 16k size it's ~110 IOPS (1716 KB/s) vs. 830 IOPS (13329 KB/s), so the SD card is still 7.5 times faster than the USB disk. File size has to reach 512K before the USB disk performs as well as the SD card! Please note that I used a Samsung Pro 64GB for this test. The cheaper EVO/EVO+ with 32 and 64GB show identical sequential transfer speeds while being a lot faster when it's about random IO with small files. So you save money and get better performance by choosing the cards that look worse by specs!
    5. Record size always matters. Most filesystem accesses on an SBC are not large streamed data but small chunks of randomly read/written data. Therefore check random IO results with small record sizes, since this is what's important, and have a look at the comparison of 1MB vs. 100 MB data size to get an idea of when you're only testing your disk's caches and when your disk in reality.
    6. If you compare random IO numbers from crap SD cards (Kingston, noname, Verbatim, noname, PNY, noname, Intenso, noname and so on) with the results above, then even the slow HDD connected through USB can shine. But better SD cards exist, as do some pretty fast eMMC implementations on some boards (ODROID-C2 being the best performer here). By comparing with the SSD results you get an idea of how to improve performance when your workload depends on it (desktop Linux, web server, database server). Even a simple 'apt-get upgrade', when done after months without upgrades, depends heavily on fast random IO (especially writes).

    So by relying on the usual bullshit benchmarks that only show sequential transfer speeds, an HDD (30 MB/s) and an SD card (23 MB/s) seem to perform nearly identically, while in reality the far more important random IO performance might differ a lot. And that depends solely on the SD card you bought and not on the SBC you use! For many server use cases where small file accesses happen, good SD cards or eMMC will be magnitudes faster than HDDs (again, it's mostly about random IO and not sequential transfer speeds).

    I personally used/tested SD cards that show only 37 KB/s when running the 16K random write test (some cheap Intenso crap). Compared to the same test combining an A20 with a SATA SSD, that is 'just' over 800 times slower (31000 KB/s). Compared to the best performers we currently know (EVO/EVO+ with 32/64GB), it is still 325 times slower (12000 KB/s). And this speed difference (again: random IO) is why an 'apt-get upgrade' with 200 packages takes hours on the Intenso card while finishing in less than a minute on the SATA disk and in 2 minutes with the good Samsung cards, given your internet connection is fast enough.
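    For reference, the 100 MB run described above corresponds to an iozone invocation along these lines (a sketch; -e flushes caches, -I uses direct IO, and -i 0 / -i 1 / -i 2 select the sequential write, sequential read and random IO tests):
    iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2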
  21. Like
    lomady reacted to Gravelrash in HOWTO : NFS Server and Client Configuration   
    We will be configuring in a mode that allows both NFSv3 and NFSv4 clients to connect and access the same share.
     
    Q. What are the benefits of an NFS share over a SAMBA share?
    A. This would take an article in itself! But briefly: in a closed network (where you know every device), NFS is a fine choice. With a good network, throughput is disgustingly fast and at the same time less CPU-intensive on the server. There are a lot of applications that support NFS out of the box, KODI being a common example. It's very simple to set up, and you can toggle read-only on shares you don't need to be writeable. It only gets complicated if you want to start layering on security through LDAP/gssd. It's capable of very complex and complete security mechanisms... but you don't need them in this scenario.
     
    Q. Why are we doing this and not limiting to NFSv4?
    A. Flexibility: this will allow almost any device you have that supports reading and mounting NFS shares to connect, and it will also allow the share to be used to boot your clients from, which allows fast testing of new kernels etc.
     
    Q. Why would I want to do this?
    A. You can boot your dev boards from the NFS share, allowing you to test new kernels quickly and simply.
    A. You can map your shared locations in multimedia clients quickly and easily.
    A. Your friends like to struggle with SAMBA, so you can stay native and up your "geek cred".
     
    This HOWTO will be split into distinct areas:
         "Section One" Install and Configure the server
         "Section Two" Configure client access.
         "Section Three" Boot from NFS share. (a separate document in its own right that will be constructed shortly).
     
     
    "Section One" Install and Configure the server

    Install NFS Server and Client
    apt-get update ; apt-get upgrade ; apt-get install autofs nfs-kernel-server nfs-common --install-recommends -f -y
    Now reboot:
    sync ; reboot
    I will be making the following assumptions (just amend to suit your environment):
    1. You already have a working system!
    2. Your media to be shared is mounted via fstab by its label, in this case Disk1
    3. The mounted disk resides in the following folder /media/Disk1
    4. This mount has a folder inside called /Data
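    For reference, a label-based fstab mount as in assumption 2 could look like this (a sketch; adjust the filesystem type to your disk):
    LABEL=Disk1  /media/Disk1  ext4  defaults  0  2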

    Configure NFS Server (as root)
    cp -a /etc/exports /etc/exports.backup
    Then open and edit the /etc/exports file using your favourite editing tool:
    Edit and comment out ANY and ALL existing lines by adding a "#" in front of the line.
    Set up an NFS share for each mounted drive/folder you want to make available to the client devices.
    The following will allow both v3 and v4 clients to access the same files:
    # Export media to allow access for v3 and v4 clients
    /media/Disk1/Data *(rw,sync,insecure,no_root_squash,no_subtree_check,nohide)
    Explanation of the chosen parameters:
    rw – allows read/write, so that files can be deleted or renamed
    sync – replies to requests only after changes have been committed to stable storage (async increases read/write performance but should only be used for non-critical files)
    insecure – allows clients (e.g. Mac OS X) to use non-reserved ports to connect to the NFS server
    no_subtree_check – improves speed and reliability by eliminating permission checks on parent directories
    nohide – makes filesystems mounted below the exported directory visible
    no_root_squash – *enables* root access on the NFS share for the connecting device's root user
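    For example, toggling a share to read-only (as mentioned earlier) is just a matter of swapping rw for ro (a hypothetical read-only variant of the same export):
    /media/Disk1/Data *(ro,sync,insecure,no_subtree_check,nohide)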
     
    Further explanations, if you so desire, can be found in the exports man page or at the following link:
    http://linux.die.net/man/5/exports

    Starting / Stopping / Restarting NFS
    Once you have set up the NFS server, you can start / stop / restart NFS using the following commands as root:
    # Restart NFS server
    service nfs-kernel-server restart
    # Stop NFS server
    service nfs-kernel-server stop
    # Start NFS server (needed to disconnect already-connected clients after changes have been made)
    service nfs-kernel-server start
    Any time you make changes to /etc/exports in this scenario, it is advisable to re-export your shares using the following command as root:
    exportfs -ra
    OK, now we have the shares set up and accessible; we can start to use them for our full-fat Linux/Mac clients and/or configure our other "dev boards" to boot from the NFS share(s).
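    You can confirm what the server is currently exporting with (standard exportfs usage):
    exportfs -v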


    "Section Two" Configure client access.
     
    We now need to interrogate our NFS server to see what is being exported; use the showmount command with the server's IP address:
    showmount -e 192.168.0.100
    Export list for 192.168.0.100:
    /mnt/Disk1 *
    In this example it shows that the path we use to mount our share is "/mnt/Disk1" (and any folder below that is accessible to us).
    NFS Client Configuration v4 - NFSv4 clients must connect with a relative address
    Use the mount command to mount the shared NFS directory, by typing a command similar to the following at a terminal prompt as the root user (where 192.168.xxx.xxx is the server IP address):
    mount -t nfs -o vers=4 192.168.xxx.xxx:/ /home/"your user name"/nfs4
    The mount point directory /home/"your user name"/nfs4 must already exist, and there should be no files or subdirectories in it.
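    Creating the mount point first (standard commands; the path simply mirrors the example above):
    mkdir -p /home/"your user name"/nfs4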
     
    Another way to mount an NFS share from your server is to add a line to the /etc/fstab file. The basic entry needed is as below:
    192.168.xxx.xxx:/    /home/"your user name"/nfs4    nfs    auto    0    0
    NFS Client Configuration v3 - NFSv3 clients must use the full address
    Use the mount command to mount the shared NFS directory, by typing a command similar to the following at a terminal prompt as the root user (where 192.168.xxx.xxx is the server IP address):
    mount -t nfs -o vers=3 192.168.xxx.xxx:/media/Disk1/Data /home/"your user name"/nfs3
    The mount point directory /home/"your user name"/nfs3 must already exist, and there should be no files or subdirectories in it.
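    Either way, you can check that the share actually mounted (standard commands):
    mount | grep nfs
    df -h /home/"your user name"/nfs3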
     
    "Section Three" Boot from NFS share.
    Please refer to this document on creating the image: https://github.com/igorpecovnik/lib/blob/master/documentation/main-05-fel-boot.md - there will be an additional HOWTO and an amendment to this HOWTO to cover this topic in more detail.