
lomady

Members
  • Posts

    35
  • Joined

  • Last visited

Reputation Activity

  1. Like
    lomady reacted to ayufan in One (bsp) Kernel for RK3399/RK3328 and RK3288   
    I posted this comment: 
    that describes my kernel a little
     
  2. Like
    lomady reacted to Igor in opi prime hangs   
    Please switch to nightly (armbian-config), update and repeat the tests. You should be running 4.17.4
  3. Like
    lomady got a reaction from Igor in opi prime hangs   
    Okay, my stress test runs stable for several hours now with 4.17.4 kernel.
    Thumbs up!
  4. Like
    lomady got a reaction from hrip6 in No network after upgrade to 5.30   
    FYI headless legacy kernel Orange Pi Plus 2E xenial and jessie updated fine.
    Thank you, developers! 
  5. Like
    lomady reacted to tkaiser in Some storage benchmarks on SBCs   
    Since I've seen some really weird disk/IO benchmarks made on SBCs over the last few days, and both a new SBC and a new SSD arrived in the meantime, I thought let's give it a try with a slightly better test setup.
     
    I tested with 4 different SoC/SBC: NanoPi M3 with an octa-core S5P6818 Samsung/Nexell SoC, ODROID-C2 featuring a quad-core Amlogic S905, Orange Pi PC with a quad-core Allwinner H3 and an old Banana Pi Pro with a dual-core A20. The device considered the slowest (dual-core A20 with just 960 MHz clockspeed) is the fastest in reality when it's about disk IO.
     
    Since most if not all storage 'benchmarks' for SBC moronically focus on sequential transfer speeds only and completely forget that random IO is way more important on any SBC (it's not a digital camera or video recorder!), I tested this also. Since it's also somewhat moronic, when you want to test the storage implementation of a computer, to choose a disk that is limited, the main test device is a brand new Samsung SSD 750 EVO 120GB which I tested first on a PC to check whether the SSD is ok and to get a baseline of what to expect.
     
    Since NanoPi M3, ODROID-C2 and Orange Pi PC only feature USB 2.0 I tested with 2 different USB enclosures that are known to be USB Attached SCSI (UAS) capable. The nice thing with UAS is that while it's an optional USB feature that came together with USB 3.0, we can use it with more recent sunxi SoCs also when running mainline kernel (A20, H3, A64 -- all only USB 2.0 capable).
     
    When clicking on the link you can also see how different USB enclosures (to be more precise: the included USB-to-SATA bridges) perform. Keep that in mind when you see 'disk performance' numbers somewhere and people write SBC A would be 2MB/s faster than SBC B -- the variation in numbers is not only down to the SBC but is for sure also influenced by both the disk used and the enclosure / USB-SATA bridge inside! The same applies to the kernel the SBC is running. So never trust any numbers you find on the internet that are the results of tests at different times, with different disks or different enclosures. Such numbers are just BS.
     
    The two enclosures I tested with are equipped with JMicron JMS567 and ASMedia ASM1153. With sunxi SBCs running mainline kernel UAS will be used, with other SoCs/SBC or when running legacy kernels it will be USB Mass Storage instead. Banana Pi Pro is an exception since its SoC features true SATA (with limited sequential write speeds) which will outperform every USB implementation. And with this device I also used a rather fast SD card as well as a normal HDD connected to USB via a non-UASP capable disk enclosure, to show how badly this affects the important performance factors (again: random IO!)
     
    I used iozone with 3 different runs:
    • 1 MB test size with 1k, 2k and 4k record sizes
    • 100 MB test size with 4k, 16k, 512k, 1024k and 16384k (16 MB) record sizes
    • 4 GB test size with 4k and 1024k record sizes
    The variation in results is interesting. If 4K results between 1 and 100 MB test size differ, you know that your benchmark is not testing disk throughput but instead the (pretty small) disk cache. Using 4GB for sequential transfer speeds ensures that the whole amount of data exceeds DRAM size.
     
    The results:
     
    NanoPi M3 @ 1400 MHz / 3.4.39-s5p6818 / jessie / USB Mass Storage:
    Sequential transfer speeds with USB: 30MB/s with 1MB record size and just 7.5MB/s at 4K/100MB, lowest random IO numbers of all. All USB ports are behind an USB hub and it's already known that performance on the USB OTG port is higher. Unfortunately my SSD with both enclosures prevented negotiating an USB connection on the OTG port, since each time I connected the SSD the following happened: WARN::dwc_otg_hcd_hub_control:2544: Overcurrent change detected
     
    ODROID-C2 @ 1536 MHz / 3.14.74-odroidc2 / xenial / USB Mass Storage:
    Sequential transfer speeds with USB: ~39MB/s with 1MB record size and ~10.5MB/s at 4K/100MB. All USB ports are behind an USB hub and the performance numbers look like there's always some buffering involved (not a true disk test but the kernel's caches partially involved).
     
    Orange Pi PC @ 1296 MHz / 4.7.2-sun8i / xenial / UAS:
    Sequential transfer speeds with USB: ~40MB/s with 1MB record size and ~9MB/s at 4K/100MB, best random IO with very small files. All USB ports are independent (just like on Orange Pi Plus 2E, where identical results will be achieved since it's the same SoC and same settings when running Armbian).
     
    Banana Pi Pro @ 960 MHz / 4.6.3-sunxi / xenial / SATA-SSD vs. USB-HDD:
    This test setup is totally different since the SSD will be connected through SATA and I use a normal HDD in a UAS-incapable disk enclosure to show how huge the performance differences are.
    SATA sequential transfer speeds are unbalanced for still unknown reasons: write/read ~40/170MB/s with 1MB record size, 16/44MB/s with 4K/100MB (that's huge compared to all the USB numbers above!). Best random IO numbers (magnitudes faster since no USB-to-SATA bottleneck is present, as with every USB disk).
    The HDD test shows the worst numbers: just 29MB/s sequential speed at 1MB record size and only ~5MB/s with 4K/100MB. Also, the huge difference between the tests with 1MB vs. 100MB data size at 4K record size clearly shows that with the 1MB test size only the HDD's internal DRAM cache has been tested (no disk involved): this was not a disk test but a disk cache test only.
     
    Lessons to learn?
    • HDDs are slow. So slow that they are the bottleneck and invalidate every performance test when you want to test the performance of the host (the SBC in question).
    • With HDDs data size matters since you get different results depending on whether the benchmark runs inside the HDD's internal caches or not. SSDs behave differently here since they do not contain ultra-slow rotating platters, and their different types of internal storage (DRAM cache and flash) do not perform that differently.
    • When you have both USB and SATA, not using the latter is almost all the time simply stupid (even if sequential write performance looks identical: sequential read speeds are way higher, random IO will always be superior and this is more important).
    • It always depends on the use case in question. Imagine you want to set up a lightweight web server dealing with static contents on any SBC that features only USB. Most of the accessed files are rather small, especially when you configure your web server to deliver all content already pre-compressed. So if you compare random reads with 4k and 16k record size and 100MB data size you'll notice that a good SD card will perform magnitudes faster! For small files (4k) it's ~110 IOPS (447 KB/s) vs. 1950 IOPS (7812 KB/s), so the SD card is ~18 times faster; at 16k size it's ~110 IOPS (1716 KB/s) vs. 830 IOPS (13329 KB/s), so the SD card is still 7.5 times faster than the USB disk. File size has to reach 512K to let the USB disk perform as well as the SD card! Please note that I used a Samsung Pro 64GB for this test. The cheaper EVO/EVO+ with 32 and 64GB show identical sequential transfer speeds while being a lot faster when it's about random IO with small files. So you save money and get better performance by choosing the cards that look worse by specs!
    • Record size always matters. Most fs accesses on an SBC are not large data that will be streamed but small chunks of randomly read/written data. Therefore check random IO results with small record sizes since this is important, and have a look at the comparison of 1MB vs. 100 MB data size to get an idea of when you're only testing your disk's caches and when you're testing your disk in reality.
    • If you compare random IO numbers from crap SD cards (Kingston, noname, Verbatim, noname, PNY, noname, Intenso, noname and so on) with the results above then even the slow HDD connected through USB can shine. But better SD cards exist, as do some pretty fast eMMC implementations on some boards (ODROID-C2 being the best performer here). By comparing with the SSD results you get an idea of how to improve performance when your workload depends on that (desktop Linux, web server, database server). Even a simple 'apt-get upgrade' when done after months without upgrades heavily depends on fast random IO (especially writes).
     
    So by relying on the usual bullshit benchmarks only showing sequential transfer speeds, an HDD (30 MB/s) and an SD card (23 MB/s) seem to perform nearly identically, while in reality the way more important random IO performance might differ a lot. And this solely depends on the SD card you bought and not on the SBC you use! For many server use cases where small file accesses happen, good SD cards or eMMC will be magnitudes faster than HDDs (again, it's mostly about random IO and not sequential transfer speeds).
     
    I personally used/tested SD cards that show only 37 KB/s when running the 16K random write test (some cheap Intenso crap). Compared to the same test when combining an A20 with a SATA SSD this is 'just' over 800 times slower (31000 KB/s). Compared to the best performer we currently know (EVO/EVO+ with 32/64GB) this is still 325 times slower (12000 KB/s). And this speed difference (again: random IO) will be responsible for an 'apt-get upgrade' with 200 packages taking hours on the Intenso card while finishing in less than a minute on the SATA disk and in 2 minutes with the good Samsung cards, given your Internet connection is fast enough.
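    For reference, a minimal sketch of the kind of iozone invocation that produces numbers like the 100 MB run described above; the exact flags are not quoted in the post, so treat them as an assumption:
    # 100 MB test size with 4k/16k/512k/1024k/16384k record sizes
    # -i 0/1/2 = write/rewrite, read/reread, random read/write; -e flushes writes, -I uses direct IO to bypass caches (assumed flags)
    iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2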
  6. Like
    lomady reacted to Gravelrash in HOWTO : NFS Server and Client Configuration   
    We will be configuring in a mode that allows both NFSv3 and NFSv4 clients to connect and access the same share.
     
    Q. What are the benefits of an NFS share over a SAMBA share?
    A. This would take an article in itself! But as a brief reasoning: in a closed network (where you know every device), NFS is a fine choice. With a good network, throughput is disgustingly fast and at the same time less CPU intensive on the server. There are a lot of applications that support NFS "out of the box", KODI being a common example. It's very simple to set up and you can toggle read-only on shares you don't need to be writeable. It only gets complicated if you want to start layering on security through LDAP/gssd. It's capable of very complex and complete security mechanisms... but you don't need them in this scenario.
     
    Q. Why are we doing this and not limiting it to NFSv4?
    A. Flexibility: this will allow almost any device you have that supports reading and mounting NFS shares to connect, and will also allow the share to be used to boot your clients from, which allows fast testing of new kernels etc.
     
    Q. Why would I want to do this?
    A. You can boot your dev boards from the NFS share to allow you to test new kernels quickly and simply
    A. You can map your shared locations in Multimedia clients quickly and easily.
    A. Your friends like to struggle with SAMBA so you can stay native and up your "geek cred"
     
    This HOWTO will be split into distinct areas:
         "Section One" Install and Configure the server
         "Section Two" Configure client access.
         "Section Three" Boot from NFS share. (a separate document in its own right that will be constructed shortly).
     
     
    "Section One" Install and Configure the server

    Install NFS Server and Client
    apt-get update ; apt-get upgrade ; apt-get install autofs nfs-kernel-server nfs-common --install-recommends -f -y
    Now reboot:
    sync ; reboot
    I will be making the following assumptions (just amend to suit your environment):
    1. You already have a working system!
    2. Your media to be shared is mounted via fstab by its label, in this case Disk1 (see the example fstab line after this list)
    3. The mounted disk resides in the following folder /media/Disk1
    4. This mount has a folder inside called /Data
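    For assumption 2, a minimal sketch of the kind of fstab entry meant here; the ext4 filesystem type and the mount options are assumptions, so adjust them to your disk:
    # /etc/fstab - mount the shared data disk by its label
    LABEL=Disk1    /media/Disk1    ext4    defaults,noatime    0    2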

    Configure NFS Server (as root)
    cp -a /etc/exports /etc/exports.backup
    Then open and edit the /etc/exports file using your favourite editing tool:
    Edit and comment out ANY and ALL existing lines by adding a "#" in front of the line.
    Set up an NFS share for each mounted drive/folder you want to make available to the client devices.
    The following will allow both v3 and v4 clients to access the same files:
    # Export media to allow access for v3 and v4 clients
    /media/Disk1/Data *(rw,sync,insecure,no_root_squash,no_subtree_check,nohide)
    Explanation of the chosen parameters:
    rw – allows read/write access, needed if you want to be able to delete or rename files
    sync – replies to requests only after changes have been committed to storage (the async alternative increases read/write performance but should only be used for non-critical files)
    insecure – allows clients (e.g. Mac OS X) to use non-reserved ports to connect to the NFS server
    no_subtree_check – improves speed and reliability by eliminating permission checks on parent directories
    nohide – makes filesystems mounted below the exported directory visible to clients as well
    no_root_squash – *enables* write access for the connecting device's root user on the NFS share
     
    Further explanations, if you so desire, can be found in the exports man page or at the following link:
    http://linux.die.net/man/5/exports
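    As an illustration of how the same syntax extends to further shares, /etc/exports could also contain a read-only export; the path below is hypothetical and the ro flag is the only change from the line above:
    # A second, read-only export for the same v3 and v4 clients (example path)
    /media/Disk1/Music *(ro,sync,insecure,no_root_squash,no_subtree_check,nohide)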

    Starting / Stopping / Restarting NFS
    Once you have set up the NFS server, you can start / stop / restart it using the following commands as root
    # Restart NFS server
    service nfs-kernel-server restart
    # Stop NFS server
    service nfs-kernel-server stop
    # Start NFS server
    # needed to disconnect already connected clients after changes have been made
    service nfs-kernel-server start
    Any time you make changes to /etc/exports in this scenario it is advisable to re-export your shares using the following command as root
    exportfs -ra
    Ok, now we have the shares set up and accessible, we can start to use this for our full fat Linux/Mac clients and/or configure our other "dev boards" to boot from the NFS share(s).
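    To see which directories are currently exported and with which effective options, a quick check as root (a minimal sketch):
    # List active exports together with the options actually applied
    exportfs -v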


    "Section Two" Configure client access.
     
    We now need to interrogate our NFS server to see what is being exported; use the showmount command with the server's IP address
    showmount -e 192.168.0.100
    Export list for 192.168.0.100:
    /media/Disk1/Data *
    In this example it shows that the path we use to mount the share is "/media/Disk1/Data" (and any folder below that is accessible to us)
    NFS Client Configuration v4 - NFSv4 clients must connect with relative address
    Use the mount command to mount a shared NFS directory from another machine, by typing a command line similar to the following at a terminal prompt as the root user (where 192.168.xxx.xxx is the server I.P. address)
    mount -t nfs -o vers=4 192.168.xxx.xxx:/ /home/"your user name"/nfs4
    The mount point directory /home/"your user name"/nfs4 must already exist and there should be no files or subdirectories in the /home/"your user name"/nfs4 directory.
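    If the mount point does not exist yet, create it first ("your user name" is the same placeholder as above):
    # Create an empty directory to use as the NFSv4 mount point
    mkdir -p /home/"your user name"/nfs4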
     
    Another way to mount an NFS share from your server is to add a line to the /etc/fstab file. The basic entry needed is as below
    192.168.xxx.xxx:/    /home/"your user name"/nfs4    nfs    auto    0    0
    NFS Client Configuration v3 - NFSv3 clients must use the full export path
    Use the mount command to mount a shared NFS directory from another machine, by typing a command line similar to the following at a terminal prompt as the root user (where 192.168.xxx.xxx is the server I.P. address)
    mount -t nfs -o vers=3 192.168.xxx.xxx:/media/Disk1/Data /home/"your user name"/nfs3
    The mount point directory /home/"your user name"/nfs3 must already exist and there should be no files or subdirectories in the /home/"your user name"/nfs3 directory.
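    By analogy with the v4 fstab line above, a sketch of an equivalent /etc/fstab entry for the NFSv3 mount (the vers=3,auto options are an assumption):
    # Mount the full export path over NFSv3 at boot
    192.168.xxx.xxx:/media/Disk1/Data    /home/"your user name"/nfs3    nfs    vers=3,auto    0    0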
     
    "Section Three" Boot from NFS share.
    Please refer to this document on creating the image: https://github.com/igorpecovnik/lib/blob/master/documentation/main-05-fel-boot.md - there will be an additional HOWTO and an amendment to this HOWTO to cover this topic in more detail.