ODROID HC1

ODROID HC1 Mini Review

 

The ODROID XU4 is one of the most powerful boards Armbian supports, thanks to its big.LITTLE design (4 x Cortex-A15 @ 2 GHz plus 4 x Cortex-A7 @ 1.4 GHz). Though the SoC is a bit old, performance is still impressive. The SoC features two independent USB3 ports (on XU3/XU4 one is connected to a USB Gigabit Ethernet chip, the other to an internal USB3 hub providing two USB3 receptacles), but in the past many users had trouble with USB3 due to

 

  • cable/connector problems (the USB3-A receptacle specification/design is a total fail IMO; unfortunately we still don't have USB-C everywhere)
  • underpowering problems with Hardkernel's Cloudshell 1 or XU4-powered disk enclosures
  • and also some real UAS issues with the 4.9 kernel (UAS --> USB Attached SCSI, a far more efficient method to access USB storage compared to the old MassStorage protocol)

 

Hardkernel's answer to these issues is a stripped-down XU4 variant that drops the internal USB hub, display and GPIO support but instead adds a well-performing, UAS-capable USB-to-SATA bridge (JMS578) directly to the PCB. So no more cable/contact issues, no more underpowering, and no more UAS hassles caused by some disk enclosures (which either contain a broken SATA bridge or combine a working UAS-capable chip with a broken branded firmware).

 

Thanks to Hardkernel I received a developer sample a week ago and have already been able to test and optimize. I'll skip posting nice pictures and general information since, as usual, everything is already available at CNX: just take a look here and there (and take the additional hour to read through the comments!).

 

So let's focus on stuff not already covered:

[Image: HC1_Important_stuff_highlighted.jpg -- annotated board photo, positions referenced below]

 

 

  1. On the bottom side of the PCB in this area the Exynos 5422 SoC is located, attached with thermal paste to the black aluminium enclosure which acts as a giant heatsink. While this works pretty well, the octa-core SoC also dissipates heat into the PCB, so if you plan to let this board run continuously under full load, better check the temperature of the SD card slot if you're concerned about the temperature range of the SD card used
  2. Green LED used by the JMS578; it blinks with disk activity and might be better located next to the SD card slot on a future PCB revision for stacked/cluster use cases
  3. Blue LED used as heartbeat signal by default (adjustable --> /sys/class/leds/blue/trigger, but only none, mmc1 and heartbeat are available, no always-on)
  4. Red LED (power, lights solid while the board is powered on)
  5. Drilled hole to mount a connected 2.5" disk. I didn't find a screw of the necessary length -- a few more mm than normal are needed -- so I hoped Hardkernel adds one when shipping the HC1 to securely mount disks (7mm - 15mm disk height supported). Not an issue (it was only one for me)! Hardkernel responded: 'We’ve been shipping the HC1 with a machine screw (M3 x 8mm) except for several early engineering samples. So all new users will receive the screw and they can fasten the HDD/SSD out of the box.'
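Item 3 above can be sketched in shell; a minimal example assuming the sysfs path from the post (/sys/class/leds/blue/trigger) and the three triggers mentioned there:

```shell
# Adjust the HC1's blue LED trigger (per the post: only 'none', 'mmc1'
# and 'heartbeat' are available on the 4.9 kernel, no always-on).
LED=/sys/class/leds/blue/trigger
if [ -w "$LED" ]; then
    echo heartbeat > "$LED"     # default; use 'none' to disable blinking
    status="trigger set to heartbeat"
else
    status="blue LED trigger not available on this system"
fi
echo "$status"
```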

 

In my tests I found it a bit surprising that the position of the heatsink fins doesn't matter that much (one would think that with the heatsink fins on top, as shown on the right above, reported temperatures would be a lot lower compared to the left position, but that's not the case -- maybe 1°C difference -- which is also good news since the HC1 is stackable). The whole enclosure gets warm, so heat dissipation works really efficiently here, and when running some demanding benchmarks on HC1 and XU4 in parallel the HC1 always performed better since throttling started later and was less aggressive.

 

I had a few network problems in my lab over the last few days but was able to resolve them, and while testing with optimal settings I decided to also move the usb5 interrupt (handling the USB3 port to which the USB Gigabit Ethernet adapter on ODROID XU3/XU4/HC1/HC2/MC1 is connected) from cpu3 (a little core) to cpu7 (a big core):

[Screenshot 2017-08-22 16:57]
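The affinity move described above can be sketched like this; the 'usb5' interrupt name matches what /proc/interrupts shows on XU4/HC1, and the mask arithmetic (cpu7 -> bit 7 -> hex 80) is the only board-independent part:

```shell
# Pin the usb5 interrupt (USB3 port with the RTL8153 GbE adapter) to cpu7.
CPU=7                                  # big core, as in the post
MASK=$(printf '%x' $((1 << CPU)))      # cpu7 -> bitmask 0x80
IRQ=$(awk '/usb5/ {sub(":", "", $1); print $1; exit}' /proc/interrupts)
if [ -n "$IRQ" ]; then
    echo "$MASK" > "/proc/irq/$IRQ/smp_affinity" 2>/dev/null \
        || echo "could not set affinity (need root?)"
else
    echo "no usb5 interrupt found (not an XU3/XU4/HC1?)"
fi
```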

For a quick comparison, here's performance with the same OS image, but this time on an XU4 with the same SSD in an external JMS576 enclosure:



[Screenshot 2017-08-23 16:34]

 

Tested with our Armbian-based OMV image, kernel 4.9.37, the ondemand cpufreq governor (clocking little/big cores down to 600 MHz when idle and allowing 1400/2000 MHz under full load), some ondemand tweaks (io_is_busy being the most important one) and the latest IRQ affinity tweak. The disk I tested with is my usual Samsung EVO840 (so results are comparable with numbers for other Armbian-supported devices -- see here). When combining the HC1 with spinning rust (especially older notebook HDDs) it's always worth considering adjusted cpufreq settings: most HDDs aren't that fast anyway, so you could edit /etc/default/cpufrequtils and configure e.g. a 1400 MHz maximum, limiting the big cluster to 1.4 GHz. This won't hurt performance with HDDs but gives you ~3W less consumption under full-performance NAS workloads.
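The cpufreq limit mentioned above would look roughly like this in /etc/default/cpufrequtils (frequencies in kHz; the variable names are the standard cpufrequtils ones, the values are the ones discussed in the post):

```shell
# /etc/default/cpufrequtils -- limit the big cluster to 1.4 GHz max
ENABLE=true
GOVERNOR=ondemand
MIN_SPEED=600000      # 600 MHz idle floor, as Armbian ships by default
MAX_SPEED=1400000     # 1.4 GHz cap: ~3W less under full NAS load
```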

 

While this is stuff for another review, it should be noted that DVFS (dynamic voltage frequency scaling) has a huge impact on consumption, especially at higher clockspeeds: allowing the cores to clock down to 200 MHz instead of 600 MHz somewhat destroys overall performance but saves you only ~0.1W on idle consumption, while limiting the big cluster's maximum cpufreq from 2000 MHz down to 1400 MHz saves ~3W under full load while retaining the same NAS performance, at least when using HDDs. HDD temperature is also a consideration since the giant heatsink transfers some heat back into a connected disk, though in my tests this led to a maximum increase of 7°C (SSD temperature read out via SMART after running a heavy benchmark, comparing an SSD attached to the HC1 with the same SSD attached to an XU4 where it was lying next to the board).

 

People who want to run other stuff on the HC1 in parallel might consider letting Ethernet IRQs be handled by a little core again (reverting my latest change moving usb5 interrupts from cpu3 to cpu7), since this only slightly affects top NAS write performance while leaving the powerful big cores free to do other work in parallel. IRQ/SMP/HMP affinity might become a matter of use case on the HC1, so it's not easy to find settings that work perfectly everywhere. To test through all this in detail and really give good advice wrt overall consumption savings I lack the necessary equipment (I would need some sort of 'smart powermeter' that can feed results into my monitoring system), so I'll leave these tests to others.

 

So let's finish with first preliminary conclusions:

 

  • HC1 is a very nice design addressing several of the XU3/XU4's former USB3 issues (mostly 'hardware' issues like cable/contact problems with USB3-A or underpowering)
  • Heat dissipation works very well (and combining this enclosure design with a huge, slow and therefore silent fan is always an option; Hardkernel even sells a large fan suitable for 4 HC1 or MC1 units)
  • The JMS578 bridge chip used to provide SATA access is a really good choice since this IC not only supports UAS (USB Attached SCSI, way more efficient than the older MassStorage protocol and also the reason why SSDs now perform twice as fast on HC1 with a 4.x kernel) but also SAT ('SCSI / ATA Translation'), so you can use tools like hdparm/hd-idle without any issues, plus TRIM (though software support is lacking, at least in the 4.9 kernel tested with)
  • Dealing with attached SATA disks is way more reliable than on other 'USB only' platforms since connection/underpowering problems are solved
  • The only downside is the age/generation of the Exynos 5422 SoC: being an ARMv7 design it lacks, for example, the 'ARM crypto extensions' found on some more recent ARM SoCs, which can do things like AES encryption a lot more efficiently/faster
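As a small illustration of what the SAT support mentioned above enables, a hedged sketch (the device name /dev/sda and the idle timeout are assumptions; hdparm's -S value is in units of 5 seconds):

```shell
DISK=/dev/sda   # assumption: the SATA disk behind the JMS578 bridge
if command -v hdparm >/dev/null && [ -b "$DISK" ]; then
    hdparm -y "$DISK"       # send the disk to standby immediately
    hdparm -S 120 "$DISK"   # spin down after 120 * 5 s = 600 s of inactivity
else
    echo "hdparm or $DISK not available; commands shown for illustration only"
fi
```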

 

Almost forgot: software support effort needed for the HC1 (or the other variants MC1 and HC2) is zero since Hardkernel kept everything 100% compatible with the ODROID XU4 :) 

 

Edit: Updated 'screw situation' after Hardkernel explained they ship with an M3/8mm screw by default

 

24 minutes ago, tkaiser said:

Almost forgot: Software support efforts needed for HC1 (or the other variants MC1 or HC2) are zero since Hardkernel kept everything 100% compatible to ODROID XU4 :)

Kudos to Hardkernel!!!
... and of course to @tkaiser -- thanks for sharing this mini review.


Some consumption numbers. My measurements are done at the mains with a powermeter (Brennenstuhl Primera Line PM 231E, bought after German c't magazine reviewed it as one of the more precise powermeters available). Hardkernel's official 5V/4A PSU is also included in the numbers, so this is neither comparable with my other consumption measurements over here nor with what Hardkernel provides there (they measure between PSU and device; please also keep in mind that Hardkernel's Ubuntu AFAIK uses the performance cpufreq governor, keeping all CPU cores always at upper clockspeeds). Depending on the efficiency of the PSU, numbers might vary a lot (e.g. use a typical 400W ATX PSU to provide 5V and you can easily add 10W measured at the mains), but on the other hand this is the more realistic scenario given that these HC1 units need a PSU anyway to be usable (and my recommendation is to choose Hardkernel's own offering here!)

 

First a brief look at idle consumption (to get an idea how to define cpufreq settings) and with some light to medium load (sysbench --test=cpu --cpu-max-prime=2000000000 run --num-threads=`grep -c '^processor' /proc/cpuinfo`) when walking through a few different cpufreq/dvfs operation points:

little/big   idle   sysbench
   200/200   3.7W     4.1W
   600/600   3.75W    5.1W
   800/800   3.8W     5.7W
 1000/1000   3.85W    6.5W
 1200/1200   3.95W    7.4W
 1400/1400   4.0W     8.6W
 1400/1500   4.05W    9.2W
 1400/1600   4.1W    10.1W
 1400/1700   4.2W    11.0W
 1400/1800   4.2W    12.0W
 1400/1900   4.3W    13.5W
 1400/2000   4.5W    13.6W

As we can see there's not that much difference in idle consumption below 1000 MHz, so it doesn't really matter how low we define the minimum cpufreq (Armbian chose 600 MHz here since some cpufreq governors behave really strangely when you allow them to clock very low: taking ages until cpufreq gets increased even with heavy loads, so overall performance suffers a lot). It's also quite obvious that the higher DVFS (dynamic voltage frequency scaling) operating points require much more energy due to the increased core voltage needed to let the CPU cores still run reliably.

 

The numbers above were made with a disk connected but sent to sleep/standby (hdparm -y /dev/sda). As soon as a connected disk is awake numbers differ:

  • with most SSDs it's a short consumption peak and then maybe 0.1W more as long as the SSD is idle. Most if not all SSDs only need more juice when actually accessing/transferring data
  • with HDDs things are different. Peak consumption when the platters start to spin up can be really high (up to 1.2A@5V --> 6W with a 2.5" HDD). Then we have some controller logic inside the HDD needing juice when data is transferred, a motor that keeps the platters spinning (while they rotate all the time, consumption can vary based on HDD firmware -- a nice buzzword to search for is 'IntelliPower') and another motor to position the drive heads. The consumption increase from the latter highly depends on workload: sequential stuff, e.g. streaming a movie from an HDD that is not (much) fragmented, results in nearly zero head movements, while a highly random IO pattern causes the heads to be repositioned constantly, so consumption increases.

With NAS use cases we also have to keep in mind that Ethernet needs some energy. Most Gigabit Ethernet capable SBC use an internal GbE MAC combined with an external GbE PHY (usually an RTL8211). When using such an external PHY we see a nice consumption decrease if we force the PHY to Fast Ethernet only: on most boards this results in 350mW less consumption compared to negotiating a Gigabit Ethernet connection. Funnily, the only exceptions are all ODROIDs: with the C1+ there is no difference in consumption, the C2 shows only a 230mW difference (maybe since it uses an RTL8211F PHY instead of the RTL8211E used almost everywhere else), and with XU4/HC1 there's again no consumption difference (here Ethernet is done via a USB3 implementation: RTL8153).
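Forcing the PHY to Fast Ethernet for the ~350mW saving can be done with ethtool; a sketch assuming the interface is called eth0 (per the measurements above this brings nothing on XU4/HC1, only on boards with an RTL8211 PHY):

```shell
IFACE=eth0   # assumption: adjust to your interface name
if command -v ethtool >/dev/null && [ -d "/sys/class/net/$IFACE" ]; then
    # drop to 100 Mbit/s (Fast Ethernet) to save power on RTL8211 boards
    ethtool -s "$IFACE" speed 100 duplex full autoneg on
    # to go back to Gigabit later: ethtool -s "$IFACE" autoneg on
else
    echo "ethtool or $IFACE not available; command shown for illustration only"
fi
```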

 

How does Ethernet utilization change consumption? I tested with 600, 1400 and 2000 MHz, first with Fast Ethernet ('performance' always the same since saturating a 100 Mbits/sec connection is easy and results in ~94 Mbits/sec on the TCP/IP layer without problems), and then again after switching back to Gigabit Ethernet:

 600 MHz Fast ENet / GbE
    TX    +300mW   +1000mW (580 Mbits/s)
    RX    +500mW   +1300mW (790 Mbits/s)

1400 MHz Fast ENet / GbE
    TX    +350mW   +1900mW (940 Mbits/s)
    RX    +800mW   +2300mW (940 Mbits/s)

2000 MHz Fast ENet / GbE
    TX    +550mW   +1900mW (940 Mbits/s)
    RX   +1300mW   +2800mW (940 Mbits/s)

All consumption numbers are relative (!) since they're compared to the idle numbers above. As can be seen we have no performance difference between 1.4 and 2.0 GHz but consumption varies a lot (TX numbers remain the same, which is an indication that only little cores were involved here). RX consumption (receiving data from the outside) is always higher since IRQ processing is involved, and with USB Ethernet as on XU4/HC1 this is a bit more 'costly' compared to other boards that rely on RGMII to connect an external GbE PHY to an internal MAC.

 

So from a NAS point of view, limiting the big cores to e.g. 1.4 GHz seems sufficient: no performance drop but the same energy savings. However, deciding this based only on iperf3 results might not be enough.

 

New tests with LanTest keeping both Ethernet and storage busy at the same time: first test again with the Samsung EVO840 SSD, second test with an older 2.5" Hitachi 7.2k HDD (SMART output here):

EVO840 SSD      RX           TX
 600 MHz:  46MB/s  6.7W  48MB/s  6.4W
1400 MHz:  80MB/s  8.0W  87MB/s  7.8W
2000 MHz:  90MB/s 10.8W  98MB/s 10.0W

Hitachi HDD     RX           TX
 600 MHz:  42MB/s  7.3W  46MB/s  7.2W
1400 MHz:  66MB/s  8.8W  81MB/s  8.8W
2000 MHz:  78MB/s 11.5W  88MB/s 11.2W

(all detailed LanTest screenshots are available here. I only ran one pass since testing with the userspace governor, so no surprises wrt dynamic clockspeed adjustments) 

 

Why test with 600, 1400 and 2000 MHz? 600 MHz seems like a reasonable minimum cpufreq (given that lower frequencies do not provide meaningful idle consumption savings and that most cpufreq scaling governors behave better when the minimum cpufreq isn't too low). 2000 MHz is simply the upper limit that's possible. And 1400 MHz seems to be a nice compromise in between wrt consumption savings under load without much of a performance drop.

 

With those 3 different clockspeeds we see consumption differences in synthetic benchmarks. But once we take the use case into account, in many situations these theoretical differences defining the upper limits no longer matter at all. 

 

Take using the HC1 with a 2.5" HDD as a backup target, for example, letting the necessary encryption happen on the clients (macOS machines backing up via TimeMachine to our HC1 running OMV): almost all Macs that can use TimeMachine (encryption) can also use AES-NI, since Apple switched entirely to Intel CPUs with this feature back in 2010. Most probably backup 'performance' is still so low that it really doesn't matter whether the HC1 clocks at 600 MHz or 2000 MHz. In such a scenario, switching the cpufreq governor from ondemand (constantly jumping between min and max cpufreq) to conservative most probably gets us some nice energy savings at the same backup performance (which I consider more or less irrelevant anyway as long as incremental backups DO happen regularly)

 

With such a TimeMachine scenario, HC1 NAS performance is only really needed when it's time for huge restores or disaster recovery (your MacBook got stolen, you buy a new one, start it, choose 'restore from backup', click on the HC1 and are up and running again two hours later). So most probably with this use case switching from the ondemand to the conservative governor saves 1-2W while backing up (since CPU cores stay at low clockspeeds most of the time) while providing the same speed/consumption for large restores.
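The governor switch suggested above is a one-liner per CPU policy; a minimal sketch assuming the standard sysfs cpufreq interface (on a stock Armbian install you would rather set GOVERNOR in /etc/default/cpufrequtils so it survives reboots):

```shell
# switch every cpufreq policy to the conservative governor
found=0
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    [ -w "$g" ] || continue
    echo conservative > "$g" && found=1
done
[ "$found" -eq 1 ] || echo "no writable cpufreq policy found (need root?)"
```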

 

With different approaches (e.g. letting encryption happen on the HC1) this might differ a lot, and to really get insights into how overall consumption with different settings looks I lack the equipment (I would need something like Hardkernel's SmartPower2 to feed consumption data into my monitoring system)

 

 

6 hours ago, tkaiser said:

IMO there's no need to reference legacy kernel at all. With HC1 we're talking about Hardkernel's 4.9 or mainline


Yes. You are right. I changed the label to LTS. No better ideas ATM. Xenial and Jessie mainline are now in front.


Mine is not connecting at gigabit -- Fast Ethernet only. Same cable (plus a few others), same PSU, same SD card, any kernel, while the XU4 runs at Gigabit without problems.

On 27.8.2017 at 9:06 AM, Igor said:

Mine is not connecting at gigabit - fast ethernet only.

 

Well, I have no Ethernet at all any more when booting a fresh next build generated from sources. Not even any lsusb output for the onboard RTL8153 :( Tested on an XU4 with the same result, so it's software (maybe RTL8153 power not being provided? But I totally lost track wrt latest changes -- I just discovered that there's a new XU4 u-boot branch)

 

Edit: armbianmonitor -u output made with external RTL8153 behind internal USB3 hub on XU4.

22 hours ago, tkaiser said:

Well, I have no Ethernet any more at all when booting a fresh next build generated from sources.


While my Ethernet started to operate at gigabit speed :) after exchanging premium network cables with a cable which was waiting to be thrown away :D

 

It looks like some u-boot fine tuning is needed. 


I know the Armbian download page normally does not provide OMV images. But since OMV is well integrated into Armbian's build system and IMO this would be use case no. 1 for this board, wouldn't it make sense to provide an OMV image in the 'other download options and archive' section? 

1 hour ago, chwe said:

I know that armbian download page normally does not provide OMV images. But since OMV is well integrated in armbians build system and IMO this would be use case nr1 for this board wouldn't it make sense to provide a OMV image on the 'other download options and archive' section? 


OMV can be installed from armbian-menu in no time. Perhaps a note about this should be added there? Debian Jessie is a must.

 

Edit: And merging those two recipes might make sense:

https://github.com/armbian/config/blob/dev/softy#L282-L316

https://github.com/armbian/build/blob/3845004787d76485be8b53d77a3c776f07f0d472/config/templates/customize-image.sh.template#L34-L205

 

Some work :)

1 hour ago, Igor said:

OMV can be installed from armbian-menu in no time.

 

But performance sucks ;)

 

Here you go: https://pastebin.com/ctLWTq2Q (please note the unresolved issues at the top and also the switch from $family to $distribution since OMV3 on Stretch won't work)

 

Edit: please check whether ${OMV_CONFIG_FILE} really works to get improved Samba settings applied.

23 minutes ago, tkaiser said:
1 hour ago, Igor said:

OMV can be installed from armbian-menu in no time.

 

But performance sucks ;)

Is there a performance difference between those two possibilities? Cause this would end in endless complaining about OMV here or in their forum. 

Just now, chwe said:

Is there a performance difference between those two possibilities? Cause this would end in endless complaining about OMV here or in their forum.

 

Currently yes... there is a difference since the dedicated OMV images contain a lot more tweaks. But that's the reason why I spent some minutes to prepare https://pastebin.com/ctLWTq2Q for Igor (he only has to find a solution wrt log2ram vs folder2ram, the 1 minute cron job that improves IO snappiness especially on slow boards and the Cloudshell 2 support stuff from Hardkernel).

 

And no... users won't complain since average users can't tell the difference 20 MB/s more or less makes, or whether handling of small files improves. Surprisingly, people happily buy slow boards like Banana Pis because of 'native SATA' (slow as hell) and avoid fast NAS boards like HC1 or ROCK64 because of 'USB but no native SATA'. Direct comparison: https://forum.armbian.com/index.php?/topic/5036-looking-for-board-for-nas/&do=findComment&comment=38363

 


Yeah, not too many people would easily believe the native port is worse than a USB device.  Even when confronted with empirical data, people can't shake the idea of an intermediate adapter slowing things down.


I tested a straight XU4 with a USB3-to-SATA JMicron chipset to an EVO840 and found large sustained writes at about 120MB/s.  I found that acceptable.  I also connected them to WD 2T Red drives and maxed out their sustained write speed (powered the drives from an AT power supply -- coupling the grounds to avoid dark current).  I was and still am very pleased.

 

I purchased 3 HC1s and have not yet tested them for speed.  I do find the heat aggregation when stacking them to be significant.  Two of them are running 1T SSHDs and one has a T Evo.  I'm (today -- day 2) adapting a fan to the back of the three units.  I guess I could buy one more and buy the HK fan.

 

What can I say about the older chip set?  No AES in hardware bugs me, but not too much since the cores don't do much while plain I/O occurs.

 

CouchDB runs nicely on ODROIDs even with the limited memory, though I don't have huge databases; nonetheless, I am going to test sharding (v2+ Couch) across the three HC1s, which act as a test cluster.  I have a 2T Mongo DB on an Intel box that is horribly abused and not used in the right way.  It is dog slow because of poor design.  I expect better performance backing it with ODROIDs running Couch for this specific application.  I'll use far less power with a small bank of ODROIDs than with the ancient monster box (48 cores, 128 GB RAM).  Power is a problem in that we really need two larger redundant power supplies with barrel connectors (common ground) rather than a bunch of individual power supplies.  The power selection at HK isn't ready for enterprise, but for a test cluster on dev boards it can't be beat.  Maybe the final solution will move to AMD Opteron ARM-based boards, which can come in enterprise-level configurations.

 

By the way, I have used an old U3 as my desktop for several years.  I ssh to the big linux compute servers so why do I need anything else? Yes, office -- I RDP to a windows server for that. 

 

Side note:  the HK 4.9.y kernel is 4.9.44.  Kernel.org updated 4.9.y to 4.9.45 a few days ago.  You can get a patch to apply from kernel.org by:  https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/rawdiff/?id=v4.9.45&id2=v4.9.44&dt=0

 

Then cd into the kernel tree you pulled from Hardkernel's GitHub, switch to the right branch and apply the patch:

patch --strip 1 < patch_file

 

Enjoy these little hotrods.

 

 

uDude.

1 minute ago, uDude said:

Side note:  the HK 4.9.y kernel is 4.9.44.  Kernel.org updated 4.9.y to 4.9.45 a few days ago.  You can get a patch to apply from kernel.org by:  https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/rawdiff/?id=v4.9.45&id2=v4.9.44&dt=0

 

It's 4.9.47 already and we have 4.9.46 in the build system and beta repository (since .47 was released less than a day ago)

15 hours ago, uDude said:

 

Guess I'm moving to 4.9.47 today, too.

 

 

HK's kernel has been in sync for 10 hours: https://github.com/hardkernel/linux/commits/odroidxu4-4.9.y :)

 

BTW: with my EVO840 I get 240MB/s write on both XU4 and HC1 (with a UAS-capable USB-to-SATA bridge of course! JMS567, JMS578 and ASM1153 successfully tested). If performance is as low as the 120MB/s you reported, I would check 'lsusb -t' output to see whether you're using UAS, and I would also check dmesg output for underpowering symptoms.
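The 'lsusb -t' check can be scripted; the sample line below is hypothetical output (on the real device you'd pipe `lsusb -t` itself), and the decisive part is simply whether 'Driver=uas' or 'Driver=usb-storage' shows up for the bridge:

```shell
# Hypothetical 'lsusb -t' line for a JMS578 bridge running in UAS mode.
sample='|__ Port 1: Dev 2, If 0, Class=Mass Storage, Driver=uas, 5000M'
if printf '%s\n' "$sample" | grep -q 'Driver=uas'; then
    echo "UAS active"
else
    echo "usb-storage fallback (old MassStorage protocol, expect ~half the SSD speed)"
fi
```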

 

I prepared a test with the fastest USB3 device we currently support (ROCK64), where we usually get 380+ MB/s with fast SSDs:

                                                              random    random
              kB  reclen    write  rewrite    read    reread    read     write
          102400       4      698     2538     1457     1577     1052    30803
          102400      16     2732    84156     2697     3284    74365      572
          102400     512   255254   265324   270608   279494   271598     1382
          102400    1024   274236     1377   288145   296826   285552     1387
          102400   16384   298959   298371   318954   326778   326171     1000

My test setup was an external VIA812B hub with an integrated RTL8153 Ethernet chip: connected to one hub port a JMS578 with the EVO840, to another port a JMS578 with a Transcend TS120GMTS420. When I started the usual iozone benchmark, underpowering occurred (this time I suspect undercurrent rather than undervoltage, as is usual with the XU4). Problems/benchmark started at '239.752135': http://sprunge.us/JOMC (these are no UAS problems but just the usual symptoms of underpowering with the uas driver loaded)

5 hours ago, tkaiser said:

this time I would suspect undercurrent and not undervoltage as it's usual with XU4

 

The barrel jack on the ROCK64 is tiny; I've not found one that size rated for more than 1.5 A. (Not saying they don't exist, only that it is very small.) I'll do some testing to see voltage drops/current consumption on mine and put it in the proper thread.  I'm doing the same with "Le Potato"; I may begin a matrix.  

 

Was the drive powering through the USB?

56 minutes ago, TonyMac32 said:

Was the drive powering through the USB?

 

Of course. And to try to remain on-topic here: this was just meant as an example of what to check on the HC1 if a connected EVO840 SSD shows dog-slow performance (on the HC1 it can only be related to underpowering or to using the old, crappy 3.x kernel that isn't UAS-capable; on the XU4 it's also always possible to choose a crappy USB-to-SATA bridge).

 

I'm preparing a tutorial-like post about RAID on SBCs within the next weeks where these issues (underpowering affecting performance and reliability) are covered. But I was surprised that my very first test already sucked that much.


I've learned assumptions are always bad practice, so I thought I'd ask.

 

Is there really any reason to use the older kernel?

 

2 hours ago, tkaiser said:

But I was surprised that my very first test already sucked that much.

 

Yeah, that's not the best start.

On 23.08.2017 at 9:50 AM, tkaiser said:

In my tests I found it a bit surprising that the position of the heatsink fins doesn't matter that much (one would think that with the heatsink fins on top, as shown on the right above, reported temperatures would be a lot lower compared to the left position, but that's not the case -- maybe 1°C difference -- which is also good news since the HC1 is stackable). The whole enclosure gets warm, so heat dissipation works really efficiently here, and when running some demanding benchmarks on HC1 and XU4 in parallel the HC1 always performed better since throttling started later and was less aggressive.

Can you check the temperature if you put the case upright?

14 hours ago, TonyMac32 said:

Is there really any reason to use the older kernel?

 

At least not for HC1 use cases. Usually those older 'vendor' kernels are the only ones with support for advanced graphics stuff (HW-accelerated video being the best example), but this shouldn't matter on a headless device. If I remember correctly, with the Exynos 5422 we're dealing with here it's also possible to make use of this stuff with mainline or the 4.9 LTS kernel we use (I read about HW-accelerated transcoding on the XU4 with Emby some weeks ago). Anyway: the most important feature of the HC1 is fast storage access, which means 'USB Attached SCSI' (UAS) is a must (twice the performance with SSDs, not such huge gains with HDDs), so forget about the old kernel for this use case anyway since UAS started to work on the Exynos 5422 IIRC with kernel 4.8.

 

2 hours ago, balbes150 said:

You can check the temperature, if you put the case upright ?

 

Sure. Ambient temperature is 22°C. Below 'armbianmonitor -m' output using the SoC's internal thermal sensor:

  • it starts with the HC1 idle 5 minutes after booting. The heatsink is still cold and therefore SoC temperature in idle is just ~15°C above ambient; after some time +20°C above ambient is more realistic
  • then I fired up a 'NAS test' LanTest with the SSD connected; that's already 'worst case' wrt NAS workloads. Temperature gets up to 58°C but decreases rather quickly after the benchmark finishes
  • then I fired up 'minerd --benchmark' (cpuminer 2.4.5 making heavy use of NEON instructions); the Exynos immediately throttles down to 1.8 GHz, after some time it's at 1.5GHz/1.3GHz with SoC temp close to 90°C, and when the benchmark stopped it took some time for temperatures to drop since now the giant heatsink dissipates heat back into the SoC.
Time       big.LITTLE   load %cpu %sys %usr %nice %io %irq   CPU
13:04:17:  600/ 600MHz  0.27   6%   1%   4%   0%   0%   0% 35.0°C
13:04:22:  600/ 600MHz  0.25   6%   1%   4%   0%   0%   0% 38.0°C
13:04:27: 2000/ 600MHz  0.23   6%   1%   4%   0%   0%   0% 40.0°C
13:04:32: 2000/ 600MHz  0.45   6%   1%   4%   0%   0%   0% 44.0°C
13:04:37: 2000/ 600MHz  0.49   6%   1%   4%   0%   0%   0% 47.0°C
13:04:42: 2000/ 600MHz  0.45   6%   1%   4%   0%   0%   0% 50.0°C
13:04:47: 2000/ 600MHz  0.50   6%   1%   4%   0%   0%   0% 51.0°C
13:04:52: 2000/ 600MHz  0.54   6%   1%   4%   0%   0%   0% 53.0°C
13:04:58: 2000/ 600MHz  0.58   6%   1%   4%   0%   0%   0% 53.0°C
...
13:08:47: 2000/ 600MHz  0.80   7%   2%   3%   0%   0%   1% 55.0°C
13:08:52: 2000/ 600MHz  0.90   7%   2%   3%   0%   0%   1% 57.0°C
13:08:57: 2000/ 600MHz  0.91   7%   2%   3%   0%   0%   1% 58.0°C
13:09:02: 2000/ 600MHz  0.83   7%   2%   3%   0%   0%   1% 49.0°C
13:09:08: 2000/ 600MHz  0.77   7%   2%   3%   0%   0%   1% 48.0°C
13:09:13: 2000/ 600MHz  0.70   7%   2%   3%   0%   0%   1% 48.0°C
13:09:18:  600/ 600MHz  0.65   7%   2%   3%   0%   0%   1% 44.0°C
13:09:23:  600/ 600MHz  0.68   7%   2%   3%   0%   0%   1% 41.0°C
13:09:28:  600/ 600MHz  0.62   7%   2%   3%   0%   0%   1% 40.0°C
...
13:12:02:  600/ 600MHz  0.60   6%   2%   3%   0%   0%   1% 42.0°C
13:12:07: 1800/1400MHz  1.20   7%   2%   3%   0%   0%   1% 77.0°C
13:12:12: 1700/1400MHz  1.74   7%   2%   3%   0%   0%   1% 79.0°C
13:12:18: 1700/1400MHz  2.24   8%   2%   4%   0%   0%   1% 80.0°C
13:12:23: 1700/1400MHz  2.70   8%   2%   4%   0%   0%   1% 81.0°C
13:12:29: 1600/1400MHz  3.21   9%   2%   5%   0%   0%   1% 81.0°C
...
13:22:40: 1500/1300MHz  9.24  42%   1%  39%   0%   0%   0% 87.0°C
13:22:45: 1500/1300MHz  9.38  42%   1%  39%   0%   0%   0% 84.0°C
13:22:51: 1500/1300MHz  9.35  42%   1%  39%   0%   0%   0% 76.0°C
13:22:56: 2000/ 600MHz  8.68  42%   1%  39%   0%   0%   0% 70.0°C
13:23:01: 2000/1400MHz  8.06  42%   1%  39%   0%   0%   0% 69.0°C
13:23:06: 2000/ 600MHz  7.50  42%   1%  39%   0%   0%   0% 66.0°C
13:23:11:  600/ 600MHz  7.06  42%   1%  39%   0%   0%   0% 58.0°C
13:23:16:  600/ 600MHz  6.49  41%   1%  39%   0%   0%   0% 53.0°C
13:23:21:  600/ 600MHz  5.97  41%   1%  39%   0%   0%   0% 52.0°C
13:23:26:  600/ 600MHz  5.49  41%   1%  39%   0%   0%   0% 53.0°C
13:23:32:  600/ 600MHz  5.05  41%   1%  38%   0%   0%   0% 50.0°C
Time       big.LITTLE   load %cpu %sys %usr %nice %io %irq   CPU
13:23:37:  600/ 600MHz  4.65  41%   1%  38%   0%   0%   0% 51.0°C
13:23:42:  600/ 600MHz  4.28  41%   1%  38%   0%   0%   0% 49.0°C
13:23:47:  600/ 600MHz  3.93  41%   1%  38%   0%   0%   0% 49.0°C
13:23:52:  600/ 600MHz  3.62  41%   1%  38%   0%   0%   0% 49.0°C
13:23:57:  600/ 600MHz  3.49  40%   1%  38%   0%   0%   0% 50.0°C
13:24:02:  600/ 600MHz  2.95  40%   1%  38%   0%   0%   0% 51.0°C
13:24:08:  600/ 600MHz  2.87  40%   1%  38%   0%   0%   0% 50.0°C
13:24:13:  600/ 600MHz  2.64  40%   1%  38%   0%   0%   0% 49.0°C
13:24:18:  600/ 600MHz  2.43  40%   1%  37%   0%   0%   0% 48.0°C
13:24:23:  600/ 600MHz  2.40  40%   1%  37%   0%   0%   0% 48.0°C
13:24:28:  600/ 600MHz  2.21  40%   1%  37%   0%   0%   0% 48.0°C
13:24:33:  600/ 600MHz  2.03  40%   1%  37%   0%   0%   0% 47.0°C
13:24:38:  600/ 600MHz  1.87  40%   1%  37%   0%   0%   0% 47.0°C
13:24:44:  600/ 600MHz  1.72  39%   1%  37%   0%   0%   0% 49.0°C
13:24:49:  600/ 600MHz  1.74  39%   1%  37%   0%   0%   0% 47.0°C

Armbianmonitor -u output: http://sprunge.us/HjEg

14 hours ago, bzfrp said:

So disks behind the JMS578 on HC1 are reported to spin down after 3 minutes of inactivity. I've not noticed this before; I just booted the latest OMV image on an HC1 with a 500GB disk. So far not an issue (SMART is not enabled in OMV). On the other hand the disk activity LED is constantly blinking, which seems wrong to me.

If I came across this issue with spinning rust I would most probably either let a cronjob do a 'touch/sync' every 2 minutes or, even better, install smartmontools, configure smartd and let it run with '-i 179' (querying the disk every 179 seconds and thereby preventing it from spinning down, while also getting some health data logged or even notifications if the disk shows critical SMART values).
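The smartd variant mentioned above could look like this on a Debian based Armbian image (a hedged sketch: /dev/sda and the 179 second interval come from this thread, the file locations are the usual Debian defaults):

```
# /etc/smartd.conf -- monitor the disk, log attributes, report problems
# ('-d sat' is required with smartmontools versions prior to 6.6)
/dev/sda -a -d sat

# /etc/default/smartmontools -- query every 179 s instead of the 1800 s default
smartd_opts="--interval=179"
```

Then restart the daemon with 'service smartd restart'.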

 

Edit: For whatever reason the disk activity LED stopped blinking ~20 minutes after I mounted the fs on the disk, and the disk stopped spinning shortly after :)

That means I can confirm the spin-down behaviour too. And there's something wrong, since 'hdparm -C /dev/sda' (which checks the state of the disk: idle|active|standby|sleeping) wakes the disk up (it shouldn't) and produces this output:

/dev/sda:
SG_IO: bad/missing sense data, sb[]:  70 00 01 00 00 00 00 0a 00 00 00 00 00 1d 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 drive state is:  unknown
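Out of curiosity, that sense buffer can be decoded by hand. In fixed-format SCSI sense data the sense key sits in the low nibble of byte 2 and ASC/ASCQ in bytes 12/13 (the byte values below are copied from the hdparm output above):

```shell
#!/bin/sh
# Decode the fixed-format sense buffer (response code 0x70) printed by hdparm
sb="70 00 01 00 00 00 00 0a 00 00 00 00 00 1d 00 00"
set -- $sb                                            # one parameter per byte
printf 'response code: 0x%s (fixed format sense)\n' "$1"
printf 'sense key:     0x%02x\n' $(( 0x$3 & 0x0f ))   # byte 2, low nibble
printf 'ASC/ASCQ:      0x%s/0x%s\n' "${13}" "${14}"   # bytes 12 and 13
```

Sense key 0x01 (recovered error) with ASC/ASCQ 0x00/0x1d stands for 'ATA pass-through information available', so the bridge does answer; it just seems to reply with fixed-format sense where hdparm expects the descriptor format, hence the 'bad/missing sense data' complaint.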

Anyway: mounting the disk in OMV and then enabling S.M.A.R.T. monitoring there

(screenshot: OMV's S.M.A.R.T. settings dialog)

ends up with smartd keeping the disk spinning:

root@odroidxu4:~# ps auxww | grep smart
root      4149  0.2  0.1   3924  2380 ?        Ss   11:17   0:00 /usr/sbin/smartd -n --quit=never --interval=179

Edit 2: Nope, that doesn't work either, while touch+sync do the job. Kinda annoying :(  Seems like Hardkernel isn't that lucky combining JMicron ICs with JMicron firmware...

 

Edit 3: Since I have another JMS578 lying around I used the same HC1 with the same HDD, this time connected to the USB2 port using the ROCK64 SATA cable. Same behaviour though: 'hdparm -C' throws the same error and the disk spins down after a short time. Here's the 'hdparm -I' output for the onboard JMS578 and for the external one (the only difference is 'Advanced power management level': 254 vs. 128).

9 hours ago, TonyMac32 said:

I have that issue with USB--> SATA on my XU4, I have an older JMicron USA Technology Corp. JMS539/567 SuperSpeed SATA II/III 3.0G/6.0G Bridge

 

Works for me:

# cron job touching a file on the disk every minute (plus sync), so the drive
# never idles long enough for the bridge's 3 minute spin-down timer to fire
echo '/srv/dev-disk-by-id-usb-JMicron_USB_to_SATA_bridge_DB00000000013B-0-0-part1/.keep-spinning' >/etc/my-drive
echo '* * * * * root [ -f /etc/my-drive ] && read FileToTouch </etc/my-drive && touch "${FileToTouch}" && sync >/dev/null 2>&1' >/etc/cron.d/keep-sda-spinning
chmod 600 /etc/cron.d/keep-sda-spinning

 

Edit: Just thought about how to prevent the spin-down after 3 min while still allowing the disk to spin down after e.g. 15 min of inactivity. Maybe take one of the available scripts that deal with crappy USB disk enclosures and partially invert the logic: watch kernel diskstats with a scan interval of 120 seconds, and depending on recent activity either issue a 'smartctl -a -d sat' ('-d sat' is required with smartmontools versions prior to 6.6) or not. Hmm... getting the standby/sleep state would also be challenging since the JMS578 wakes the connected disk up when queried with 'hdparm -C'.
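A minimal sketch of that inverted logic (device name, thresholds and the keep-awake command are illustrative assumptions, not tested on an HC1): sample the per-device I/O counters from diskstats and only issue the keep-awake SMART query while they keep changing.

```shell
#!/bin/sh
# Print read/write completion counters for one device from a diskstats-format
# file (normally /proc/diskstats); two identical samples taken a scan interval
# apart mean no I/O happened in between.
disk_activity() {  # $1 = device name, $2 = diskstats file
    awk -v d="$1" '$3 == d {print $4, $8}' "$2"
}

# The loop around it would be roughly: every 120 s compare the current sample
# with the previous one; while samples keep changing (or changed within the
# last 15 min) run 'smartctl -a -d sat /dev/sda' to reset the bridge's 3 min
# idle timer, otherwise stay quiet so the disk may spin down.

# Demo against a fake diskstats line (123 reads and 789 writes completed):
printf '   8       0 sda 123 0 456 0 789 0 1011 0 0 0 0\n' >/tmp/diskstats.sample
disk_activity sda /tmp/diskstats.sample    # -> 123 789
```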
