
Alexio

Reputation Activity

  1. Like
    Alexio reacted to tkaiser in [preview] Generate OMV images for SBC with Armbian   
    Well, I expect the HC2 (or HC1+ or whatever HC$something will be released by Hardkernel the next time) to be fully software compatible. It will be just another variant using the same SoC and a JMicron USB-to-SATA bridge (maybe just like with their Cloudshell 2 using JMS561 which will then lead to the same issues as today with this Cloudshell thing). The thread title there has a reason  
     
     
    No, since all the results of our work are still there (we don't fiddle around manually with OS images but each and every improvement gets committed to the build system so everyone on this planet is able to build fresh OMV images from scratch as often as they want to). It's just that there's currently nothing to improve and it also makes no sense to provide more OMV images since every Debian based Armbian image using kernel 4.x can be transformed into an OMV installation with 'armbian-config --> Software --> Softy --> Install OMV' (which takes care of almost all the performance tweaks our OMV images contain).
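    For reference, that menu path boils down to nothing more than running the config tool on the board and clicking through (menu labels as quoted above):
     
        sudo armbian-config    # then navigate: Software --> Softy --> Install OMV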
     
    To finalize this, here is a summary of what 'Armbian + OMV' exactly means and what the result of this 7 month journey was. It was about identifying problems (both technical and user experience related) and developing optimal settings for performance and energy efficiency. And now we're there.
     
    So if you want to run OMV on an SBC simply check whether the board is supported by Armbian with a recent kernel and you're almost done. If you want to do yourself a favour then click here on GbE (to avoid boards that are bottlenecked by slow networking) and keep in mind that some stuff that looks nice on paper, like Allwinner's 'native SATA', performs pretty poorly in reality (check the SBC storage performance overview)
     
    While you basically can turn every SBC Debian Jessie or Stretch OS image into either an OMV 3 or 4 installation, the real differences are as follows:
     
    1) Armbian as base:
    - We care about kernel support, providing pretty recent kernel versions for all relevant platforms that enable all the features needed by OMV. Difference to other distros or OS images: they often use horribly outdated kernels that lack features needed for OMV to work properly (please compare with the OMV related kernel config changes).
    - Our OS images contain several performance and/or consumption tweaks, e.g. we optimize the IO scheduler based on the type of storage, we take care of IRQ affinity (on almost all other SBC distros all interrupts are processed on the first CPU core which results in lower performance) and we apply optimized cpufreq governor settings (allowing the board to idle at minimal consumption but switching to highest performance immediately if needed).
    - We try to use modern protocols, e.g. enabling 'USB Attached SCSI' (UAS) where possible while also taking care of broken USB enclosures that get blacklisted automatically. UAS allows for higher USB storage performance with less CPU utilization at the same time.
    - Armbian contains powerful support tools that allow remote diagnosis of problems very easily (only problem: here in the forum we play mind-readers instead of asking for armbianmonitor output all the time).
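    A rough sketch of what some of these runtime tweaks look like under the hood (standard sysfs/procfs interfaces; Armbian applies them automatically, the values and the IRQ number below are purely illustrative):
     
        # pick an IO scheduler per block device depending on the storage type
        echo noop > /sys/block/mmcblk0/queue/scheduler
        # use the ondemand cpufreq governor and treat IO wait as busy time
        echo ondemand > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
        echo 1 > /sys/devices/system/cpu/cpufreq/ondemand/io_is_busy
        # spread an interrupt away from cpu0 (look the IRQ number up in /proc/interrupts)
        echo 2 > /proc/irq/50/smp_affinity    # bitmask 0x2 = cpu1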
    2) Armbian's OMV/NAS performance/reliability tweaks:
    - We use improved Samba settings to increase SMB performance especially on the weaker boards.
    - Several file sharing daemons that usually store caches on the rootfs are forced to use RAM instead (heavily increases performance in some areas and also helps with SD cards wearing out too fast).
    - We enable driver support for the only two USB3 GbE dongles (ASIX AX88179 and the better RealTek RTL8153) so even boards with only Fast Ethernet or without Ethernet can be used as NAS with those dongles.
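    Purely as an illustration of the kind of Samba tuning meant here (not necessarily Armbian's exact values), such additions to /etc/samba/smb.conf typically look like:
     
        [global]
        # lower per-packet latency and let Samba use sendfile on weak CPUs
        socket options = TCP_NODELAY IPTOS_LOWDELAY
        use sendfile = yes
        # asynchronous IO thresholds in bytes (illustrative values)
        aio read size = 16384
        aio write size = 16384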
    3) Armbian's OMV integration tweaks:
    - We take care to set three variables in /etc/default/openmediavault that heavily influence board performance once a user clicks around in OMV's 'Power Management' settings (see the example below). Without this tweak OMV otherwise defines the 'powersave' cpufreq governor which is totally fine on x86 systems but can result in a horrible performance decrease on ARM boards whose kernels then remain all the time on the lowest possible CPU frequency (on some systems like ODROID-XU4 or HC1 this can make a difference of 200 MHz vs. 2 GHz!).
    - We install and enable OMV's flashmemory plugin by default to reduce wear on the SD card and to speed certain things up. Without this plugin OMV installations running off SD cards or eMMC might fail pretty fast due to the flash media already being worn out (way higher write amplification without the plugin will lead to your storage media dying much faster).
     
    All these 9 tweaks above make the difference and are responsible for such an 'Armbian OMV' consuming less energy while performing a lot better than distros that ignore all of this. Over the last months I also tested a lot with other OS images where OMV had been installed without any tweaks and Armbian's way was always way faster (biggest difference on ODROID-XU4 where I tested with OS images that showed not even 50% of our performance).
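    The three variables mentioned under 3) are most likely OMV's cpufrequtils defaults in /etc/default/openmediavault; a hedged sketch (variable names as used by OMV 3/4, values only illustrative since armbian-config/Softy sets them for you):
     
        OMV_CPUFREQUTILS_GOVERNOR=ondemand
        OMV_CPUFREQUTILS_MINSPEED=0
        OMV_CPUFREQUTILS_MAXSPEED=0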
     
    That being said it's really as easy as: choose a sufficient board (GbE, fast storage) supported by Armbian's next branch (so the kernel version is recent enough for all features to work), choose either Jessie (for OMV 3) or Stretch (for OMV 4) and call armbian-config to let OMV be installed (a few minutes on a fast SD card; if you have to wait ages it's highly recommended to throw the SD card away and start over from here).
     
    Just as a reference: the dedicated OMV images I generated over the last half year do include another bunch of minor tweaks compared to installing OMV with armbian-config:
    - Disable Armbian's log2ram in favour of OMV's folder2ram
    - Automatically set the correct timezone at first boot based on geo location (IP address)
    - Device support for Cloudshell 2 (checks presence on the I2C bus and then installs Hardkernel's support stuff to get display and fan working)
    - Device workaround for the buggy Cloudshell 1 (checks presence on the USB bus)
    - Cron job executed every minute to improve IO snappiness of file sharing daemons and to move them to the big cores on ODROID XU4
    - Make syslog less noisy via /etc/rsyslog.d/omv-armbian.conf
    - Replace swap entirely with zram (see the sketch after this list)
    - Limit rootfs resize to 7.3 GB and automatically create a 3rd partition using the remaining capacity that only needs to be formatted manually and can then be used as a data share
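    For the zram item, a minimal sketch of what setting up a zram backed swap device involves (sizes and priority are illustrative, this is not the exact script used in those images):
     
        modprobe zram num_devices=1
        # use half of the physical RAM as compressed swap space
        echo $(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 2 ))K > /sys/block/zram0/disksize
        mkswap /dev/zram0
        swapon -p 5 /dev/zram0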
    All tweaks can be studied here and by using this script as customize-image.sh you can still build fully optimized OMV images for every board Armbian supports with a 'next' kernel branch.
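    A hedged example of how such a customize-image.sh is typically hooked into Armbian's build system (the script path is a placeholder and option names may differ between build system versions):
     
        git clone https://github.com/armbian/build
        cd build
        cp /path/to/the/omv/script.sh userpatches/customize-image.sh
        ./compile.sh BOARD=odroidxu4 BRANCH=next RELEASE=stretch BUILD_DESKTOP=no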
     
    And now may I ask a moderator to lock this thread from now on so new issues will be discussed separately? Thank you!
  2. Like
    Alexio reacted to tkaiser in ODROID HC1 / HC2   
    ODROID HC1 Mini Review
     
    ODROID XU4 is one of the most powerful boards Armbian supports (due to being a big.LITTLE design with 4 x A15@2GHz and 4 x A7@1.4GHz). Though the SoC is a bit old, performance is still impressive. The SoC features 2 independent USB3 ports (on XU3/XU4 one is connected to a USB Gigabit Ethernet chip, the other to an internal USB3 hub providing 2 USB3 receptacles) but in the past many users had trouble with USB3 due to
     
    - cable/connector problems (the USB3-A receptacle specification/design is a total fail IMO, unfortunately we still don't have USB-C everywhere)
    - underpowering problems with Hardkernel's Cloudshell 1 or XU4 powered disk enclosures
    - and also some real UAS issues with the 4.9 kernel (UAS --> USB Attached SCSI, a way more efficient method to access USB storage compared to the old MassStorage protocol)
    Hardkernel's answer to these issues was a stripped down XU4 version dropping the internal USB hub, display and GPIO support but adding instead a great performing and UAS capable USB-to-SATA bridge (JMS578) directly to the PCB so no more cable/contact issues, no more underpowering and no more UAS hassles with some disk enclosures (that contain a broken SATA bridge or combine a working UAS capable chip with a branded/broken firmware).
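    A quick way to check whether an attached disk is really driven by UAS rather than the old usb-storage driver:
     
        lsusb -t | grep -i driver    # 'Driver=uas' on the bridge entry means UAS is active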
     
    Thanks to Hardkernel I received a developer sample a week ago and could already test/optimize. I skip posting nice pictures and general information since as usual everything is available at CNX already: just take a look here and there (especially take the additional hour to read through the comments!).
     
    So let's focus on stuff not already covered:

     
     
    - On the bottom side of the PCB in this area the Exynos 5422 SoC is located and attached with thermal paste to the black aluminium enclosure acting as a giant heatsink. While this works pretty well the octa-core SoC also dissipates heat into the PCB, so if you plan to continuously let this board run under full load better check the temperature of the SD card slot if you're concerned about the temperature range of the SD card used.
    - Green LED used by JMS578, starts to blink with disk activity and might be better located next to the SD card slot on a future PCB revision for stacked/cluster use cases.
    - Blue LED used as heartbeat signal by default (adjustable --> /sys/class/leds/blue/trigger but only none, mmc1 and heartbeat available, no always-on -- see the example after this list).
    - Red LED (power, lights solid when the board is powered on).
    - Drilled hole to fix a connected 2.5" disk. I didn't find a screw of the necessary length -- a few more mm are needed than normal -- so I hope Hardkernel adds one when shipping HC1 to securely mount disks (7mm - 15mm disk height supported). Not an issue (was only one for me)! Hardkernel responded: 'We’ve been shipping the HC1 with a machine screw (M3 x 8mm) except for several early engineering samples. So all new users will receive the screw and they can fasten the HDD/SSD out of the box.'
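    For the blue LED the trigger is adjusted through the usual sysfs interface (available triggers as listed above):
     
        cat /sys/class/leds/blue/trigger           # e.g.: none mmc1 [heartbeat]
        echo none > /sys/class/leds/blue/trigger   # stop the heartbeat blinking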
    In my tests I found it a bit surprising that the position of the heatsink fins doesn't matter that much (one would think that with the heatsink fins on top, as shown on the right above, reported temperatures would be a lot lower compared to the left position but that's not the case -- maybe 1°C difference -- which is also good news since the HC1 is stackable). The whole enclosure gets warm so heat dissipation works really efficiently here, and when running some demanding benchmarks on HC1 and XU4 in parallel HC1 always performed better since throttling started later and was not that aggressive.
     
    I had a few network problems in my lab over the last days but could resolve them, and while testing with optimal settings I decided to also move the usb5 interrupt (handling the USB3 port to which the USB Gigabit Ethernet adapter on ODROID XU3/XU4/HC1/HC2/MC1 is connected) from cpu3 (little) to cpu7 (big core):
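    The usual mechanism for such a move looks roughly like this (the IRQ number has to be looked up in /proc/interrupts on the board; Armbian applies this automatically and its exact implementation may differ):
     
        IRQ=$(awk '/usb5/ {sub(":","",$1); print $1; exit}' /proc/interrupts)
        echo 80 > /proc/irq/$IRQ/smp_affinity    # bitmask 0x80 = cpu7, 0x08 would be cpu3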

    As a quick comparison, performance with the same OS image but with an XU4 and the same SSD in an external JMS576 enclosure this time:
     
    Tested with our Armbian based OMV image, kernel 4.9.37, ondemand cpufreq governor (clocking little/big cores down to 600 MHz when idle and allowing 1400/2000 MHz under full load), some ondemand tweaks (io_is_busy being the most important one) and the latest IRQ affinity tweak. The disk I tested with is my usual Samsung EVO840 (so results are comparable with numbers for other Armbian supported devices -- see here). When combining HC1 with spinning rust (especially with older notebook HDDs) it's always worth considering adjusting cpufreq settings since most HDDs aren't that fast anyway: you could adjust /etc/default/cpufrequtils and configure there for example 1400 MHz max, limiting also the big cluster to 1.4 GHz, which won't hurt performance with HDDs anyway but will give you ~3W less consumption with full performance NAS workloads.
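    A hedged example of such an /etc/default/cpufrequtils (frequencies in kHz, values only illustrative for capping the big cluster at 1.4 GHz):
     
        ENABLE=true
        GOVERNOR=ondemand
        MIN_SPEED=600000
        MAX_SPEED=1400000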
     
    While this is stuff for another review, it should be noted that DVFS (dynamic voltage frequency scaling) has a huge impact on consumption especially with higher clockspeeds (allowing the cores to clock down to 200 MHz instead of 600 MHz somewhat destroys overall performance but saves you only ~0.1W on idle consumption, while limiting the big cluster's maximum cpufreq from 2000 MHz down to 1400 MHz saves you ~3W under full load while retaining the same NAS performance, at least when using HDDs). Also HDD temperature is a consideration since the giant heatsink also transfers some heat back into a connected disk, though in my tests this led to a maximum increase of 7°C (SSD temperature read out via SMART after running a heavy benchmark on HC1, comparing temperatures on the SSD attached to HC1 and attached to a XU4 where the SSD was lying next to the board).
     
    People who want to run other stuff on HC1 in parallel might think about letting Ethernet IRQs be handled by a little core again (reverting my latest change to move usb5 interrupts from cpu3 to cpu7) since this only slightly affects top NAS write performance while leaving the powerful big cores free to do other stuff in parallel. IRQ/SMP/HMP affinity might become a matter of use case on HC1 so it's not easy to find settings that work perfectly everywhere. To test through all this stuff in detail and to really give good advice wrt overall consumption savings I lack the necessary equipment (would need some sort of 'smart powermeter' that can feed the results into my monitoring system) so I'll leave these tests for others.
     
    So let's finish with first preliminary conclusions:
     
    - HC1 is a very nice design addressing a few of XU3/XU4's former USB3 issues (mostly related to 'hardware' issues like cable/contact problems with USB3-A or underpowering)
    - Heat dissipation works very well (and combining this enclosure design with a huge, slow and therefore silent fan is always an option, Hardkernel even sells a large fan suitable for 4 HC1 or MC1 units)
    - The JMS578 bridge chip used to provide SATA access is a really good choice since this IC does not only support UAS (USB Attached SCSI, way more efficient than the older MassStorage protocol and also the reason why SSDs perform twice as fast on HC1 now with a 4.x kernel) but also SAT ('SCSI / ATA Translation') so you can make use of tools like hdparm/hdidle without any issues and also TRIM (though software support is lacking at least in the 4.9 kernel tested with) -- see the example after this list
    - Dealing with attached SATA disks is way more reliable than on other 'USB only' platforms since connection/underpowering problems are solved
    - Only downside is the age/generation of the Exynos 5422 SoC since being an ARMv7 design it lacks for example the 'ARM crypto extensions' compared to some more recent ARM SoCs which can do stuff like AES encryption a lot more efficiently/faster
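    Since the JMS578 passes SAT through, the usual ATA tools simply work; e.g. (device name and values are illustrative):
     
        hdparm -B 127 -S 120 /dev/sda    # set APM level and a 10 minute spindown timeout
        smartctl -a /dev/sda             # read SMART data through the USB-to-SATA bridge
        fstrim -v /mnt/data              # TRIM (per the above, not yet usable with the 4.9 kernel tested)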
    Almost forgot: software support efforts needed for HC1 (or the other variants MC1 or HC2) are zero since Hardkernel kept everything 100% compatible with ODROID XU4  
     
    Edit: Updated 'screw situation' after Hardkernel explained they ship with an M3/8mm screw by default
     