gprovost

Reputation Activity

  1. Like
    gprovost got a reaction from lanefu in Helios4 Support   
    What you describe concerns mounting, on a 64-bit kernel, a file system that was created on a 32-bit kernel.
     
    This is not the issue we are talking about. Our limitation is simply that you cannot address a file system larger than 16TB with a 32-bit kernel, because of the page cache limitation. I'm not aware of any possible workaround, even with file systems other than EXT.
     
    The only solution is to use a FUSE union file system to merge two or more partitions into one big volume.
    MergerFS : https://github.com/trapexit/mergerfs (It's available as a Debian package.)
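    A minimal sketch of what such a pooled mount could look like, assuming two data partitions already mounted under /mnt/disk1 and /mnt/disk2 (paths and options here are only examples, not a tested Helios4 recipe):
    # one-off mount
    mergerfs -o defaults,allow_other,use_ino /mnt/disk1:/mnt/disk2 /mnt/pool
    # or as an /etc/fstab entry
    /mnt/disk1:/mnt/disk2  /mnt/pool  fuse.mergerfs  defaults,allow_other,use_ino  0  0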
     
    @9a3eedi @fpabernard Don't you guys want to give MergerFS a try?
     
     
     
  2. Like
    gprovost got a reaction from djurny in Helios4 - Cryptographic Engines And Security Accelerator (CESA) Benchmarking   
    Just for info, I updated the libssl1.0.2 cryptodev patch to work with the latest libssl1.0.2 Debian package (libssl1.0.2_1.0.2r).
     
    https://wiki.kobol.io/cesa/#network-application-encryption-acceleration
     
    So give it a try if you wanna test offloading crypto operations while using apache2 or ssh, for example ;-) You can also find a pre-built dpkg there if you want to skip the compile steps.
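    If you want to check whether offloading actually kicks in, one rough way (a sketch, assuming the cryptodev module and the patched libssl are in place) is to watch the CESA interrupt counters while running an HTTPS or SSH transfer; the exact interrupt names vary, on Helios4 the crypto engines show up around the f1090000 address:
    modprobe cryptodev
    watch -n1 "grep -E 'cesa|f109' /proc/interrupts"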
  3. Like
    gprovost reacted to jimandroidpc in Helios4 Support   
    I checked a new encryption and it still says XTS - so from what I can tell it's not applied. I may just wait, thanks for looking into it - if I can help I'd be happy to, but I'm learning as I go.
  4. Like
    gprovost got a reaction from lanefu in Helios4 Support   
    Hahah cool to hear that. The idea of Helios4 was to fulfill one specific scope: NAS. There are a lot of awesome boards out there, but often they try to do too many things.
    For instance, if we do a refresh of Helios4 with a new SoC that has some display output, I'm pretty sure we would decide not to expose it... in order to stick to a pure headless NAS server concept.
    Well, we hope to carry on this path of "disruptive" DIY NAS solutions with future projects, but it's not tomorrow that we will steal market share from entry-level proprietary NAS vendors.
  5. Like
    gprovost reacted to djurny in Helios4 - Cryptographic Engines And Security Accelerator (CESA) Benchmarking   
    L.S.,
    A quick update on the anecdotal performance of LUKS2 over LUKS;
     
    md5sum'ing ~1 TiB of datafiles on LUKS:
    avg-cpu:  %user   %nice  %system  %iowait  %steal   %idle
               0.15   20.01    59.48     2.60    0.00   17.76
    Device:    tps    MB_read/s  MB_wrtn/s  MB_read  MB_wrtn
    sdb       339.20      84.80       0.00      848        0
    dm-2      339.20      84.80       0.00      848        0

    md5sum'ing ~1 TiB of datafiles on LUKS2:
    avg-cpu:  %user   %nice  %system  %iowait  %steal   %idle
               0.05   32.37    36.32     0.75    0.00   30.52
    Device:    tps    MB_read/s  MB_wrtn/s  MB_read  MB_wrtn
    sdd       532.70     133.18       0.00     1331        0
    dm-0      532.80     133.20       0.00     1332        0

    sdb:
      sdb1 optimally aligned using parted.
      LUKS(1) w/aes-cbc-essiv:256, w/256-bit key size.
      XFS with 4096-byte sector size.
      xfs_fsr'd regularly, negligible file fragmentation.
    sdd:
      sdd1 optimally aligned using parted.
      LUKS2 w/aes-cbc-essiv:256, w/256-bit key size and w/4096-byte sector size, as @matzman666 suggested.
      XFS with 4096-byte sector size.
    Content-wise: sdd1 is a file-based copy of sdb1 (about to wrap up the migration from LUKS(1) to LUKS2).
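    For reference, a LUKS2 container with 4096-byte sectors like the sdd1 setup above could be created roughly as follows; a minimal sketch assuming cryptsetup 2.x, with device name and cipher spelling to be adapted to your setup (and remember luksFormat is destructive):
    cryptsetup luksFormat --type luks2 --cipher aes-cbc-essiv:sha256 \
        --key-size 256 --sector-size 4096 /dev/sdd1
    cryptsetup open /dev/sdd1 crypt_sdd1
    mkfs.xfs -s size=4096 /dev/mapper/crypt_sdd1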
     
    Overall a very nice improvement!
     
    Groetjes,
     
    p.s. Not sure if it added to the performance, but I also spread the IRQ assignments over both CPUs, making sure that each CESA and XOR engine has its own CPU. Originally I saw that all IRQs were handled by one and the same CPU. For reasons yet unclear, irqbalance refused to dynamically reallocate the IRQs over the CPUs. Perhaps the algorithm used by irqbalance does not apply well to ARM or the Armada SoC (initial assumption is something with the cpumask being reported as '00000002', causing irqbalance to only balance onto CPU1?).
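    For anyone who wants to try the same IRQ spreading, a rough sketch (the IRQ numbers below are only placeholders; look up the actual CESA/XOR lines in /proc/interrupts on your board first):
    grep -E 'cesa|xor|f1090000' /proc/interrupts   # find the relevant IRQ numbers
    echo 1 > /proc/irq/38/smp_affinity             # pin first CESA engine to CPU0 (example IRQ number)
    echo 2 > /proc/irq/39/smp_affinity             # pin second CESA engine to CPU1 (example IRQ number)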
  6. Like
    gprovost reacted to count-doku in Helios4 Support   
    One other thing to check whether the disks are faulty or just doing the clicking: connect them to some other computer and listen there.
     
     
  7. Like
    gprovost reacted to tuxd3v in Helios4 Support   
    Hello,
    Here is the Link for Seagate SeaChest Utilities Documentation:
    Seagate SeaChest Utilities
     
    In case anyone has Seagate IronWolfs or other Seagate disks.
     
    I have been playing around with these tools..
    With these base settings:
    # Enable Power-Up In Standby
    # It's a persistent feature
    # No power cycle required
    ${SEA_PATH}/openSeaChest_Configure --device /dev/sda --puisFeature enable
    ${SEA_PATH}/openSeaChest_Configure --device /dev/sdb --puisFeature enable
    ${SEA_PATH}/openSeaChest_Configure --device /dev/sdc --puisFeature enable

    # Enable Low Current Spin-Up
    # It's a persistent feature
    # Requires a power cycle
    ${SEA_PATH}/openSeaChest_Configure --device /dev/sda --lowCurrentSpinup enable
    ${SEA_PATH}/openSeaChest_Configure --device /dev/sdb --lowCurrentSpinup enable
    ${SEA_PATH}/openSeaChest_Configure --device /dev/sdc --lowCurrentSpinup enable

    Unfortunately the FreeFall feature is NOT supported, at least on my IronWolf 4TBs,
    which would be a must to control acceleration levels... (I already killed a WD Red 3TB with a free fall of 1 meter or so..)
    The feature could be present but doesn't permit changing levels.. at least it seems so..
    root@helios4:~# openSeaChest_Configure --device /dev/sda --freeFall info | grep -E "dev|Free"
    /dev/sg0 - ST4000VN008-XXXXXX - XXXXXXXX - ATA
    Free Fall control feature is not supported on this device.
    Could it be related to the last release of the SeaChest tools? I don't know..
    If someone knows how to tune acceleration levels, I'd appreciate it..
     
    In the meantime,
    I made some basic tests with the Seagate IronWolf 4TB ST4000VN008 (3 disks):
    APM mode vs power consumption (power measured at the wall... it includes Helios4 power consumption)
     
    1) With CPU idle at 800MHz, and with disks at minimum APM power mode 1:
        Power consumption is between 4.5-7 Watts, average around 6-6.5 Watts.
        1.1) When issuing a 'hdparm -Tt' on all disks,
               power peaks at 30 Watts max, but stays on average at around 15-25 Watts.
        1.2) Then it goes down to around 6-6.5 Watts (mode 1).
     
    2) With CPU idle at 800MHz, and with disks at APM power mode 127:
        Power consumption average is around 6-6.5 Watts (changing level from mode 1 to 127).
        2.1) When issuing a 'hdparm -Tt' on all disks,
               power peaks at 34 Watts, but it is around 13-25 Watts, with the majority of time between 17-23 Watts.
        2.2) Then it enters mode 1.
     
    3) With CPU idle at 800MHz, and with disks at APM power mode 128:
        Power consumption average is around 6-6.5 Watts (changing level from mode 1 to 128).
        3.1) When issuing a 'hdparm -Tt' on all disks,
              power peaks at 34 Watts, but it is around 13-25 Watts, with the majority of time between 15-23 Watts.
        3.2) Then it goes down to around 15-17 Watts for some time, the majority of time ~16.5 Watts.
              I haven't checked its final state as I only monitored for some 10-15 minutes.
     
    4) With CPU idle at 800MHz, and with disks at APM power mode 254 (max performance):
        Power consumption average is around 6-6.5 Watts (changing level from mode 1 to 254).
        4.1) When issuing a 'hdparm -Tt' on all disks,
               power peaks at ~34 Watts, but it is around 13-28 Watts, with the majority of time between 17-23 Watts.
        4.2) Then it bounces between 13-20 Watts for some time, maybe with an average around 19 Watts.
               I haven't checked its final state as I only monitored for some 10-15 minutes.
               I assume they enter mode 128.
     
    In my opinion power mode 127 is the best fit for power consumption vs performance.
    It penalizes performance a bit, but all in all it seems the best fit, with the bonus of ~6.5 Watts at idle.
     
    So I issue on reboot or start:
    # Enable APM level 127
    ${SEA_PATH}/openSeaChest_PowerControl --device /dev/sda --setAPMLevel 127
    ${SEA_PATH}/openSeaChest_PowerControl --device /dev/sdb --setAPMLevel 127
    ${SEA_PATH}/openSeaChest_PowerControl --device /dev/sdc --setAPMLevel 127
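    One way to run those commands automatically at boot would be a small systemd oneshot unit; a minimal sketch, assuming SeaChest is installed under /opt/seachest (the unit name, path and device list are examples to adapt to your setup):
    # /etc/systemd/system/disk-apm.service  (hypothetical name and path)
    [Unit]
    Description=Set HDD APM level to 127 at boot

    [Service]
    Type=oneshot
    ExecStart=/opt/seachest/openSeaChest_PowerControl --device /dev/sda --setAPMLevel 127
    ExecStart=/opt/seachest/openSeaChest_PowerControl --device /dev/sdb --setAPMLevel 127
    ExecStart=/opt/seachest/openSeaChest_PowerControl --device /dev/sdc --setAPMLevel 127

    [Install]
    WantedBy=multi-user.target

    # enable with: systemctl daemon-reload && systemctl enable --now disk-apm.service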
  8. Like
    gprovost reacted to Harvey in Helios4 Support   
    I have updated my lcd4linux.conf for a cheap AX206 picture frame (320x240) connected to my Helios4.
    If someone is interested: https://github.com/hkramski/my-lcd4linux-conf.
     
     
  9. Like
    gprovost reacted to Igor in Helios4 Support   
    Do that - congrats!  I think we have the problem under control now.
  10. Like
    gprovost reacted to Igor in Helios4 Support   
    Can check in a day or two. Now have family/life troubles. Images are done:
    https://dl.armbian.com/helios4/Debian_stretch_next.7z.torrent
    https://dl.armbian.com/helios4/Ubuntu_bionic_next.7z.torrent
  11. Like
    gprovost got a reaction from lanefu in Helios4 Support   
    Let me explain a bit more the issue that was experienced by @integros @lanefu and that could happen to anyone who did a fresh install using release 5.67 and then attempted a system upgrade.
     
    This boot issue happens because the new u-boot package (5.70) no longer automatically updates the u-boot bootloader binary on the target, while the BSP package (aka linux-****-root-next-helios4_5.**_armhf.deb) still updates the bootscript. This creates an incompatibility between bootloader and bootscript that we already experienced in the past, following the upgrade from u-boot 2013 to u-boot 2018. We addressed this incompatible use case previously, but we didn't foresee it would happen again because of the recent decision that the u-boot package should no longer be updated automatically.
     
    We unfortunately created a tricky situation and we need to find a solution that works for everyone, including people running release 5.67; they should be able to upgrade without being aware of this issue. In the meantime:
     
    To everybody: when you do a fresh install, please don't use version 5.67 from the Armbian website, but use our latest image build (5.68), which is available on our wiki here, until a new image build is done by Armbian.
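    To see which bootloader and BSP package versions are currently installed on your system, something along these lines should work (the package name patterns are approximate):
    dpkg -l | grep -E 'linux-u-boot|root-next-helios4'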
     
  12. Like
    gprovost got a reaction from lanefu in Helios4 Support   
    Sorry guys for the late reply. I'm getting married soon and I'm pretty overloaded between the Helios4 delivery follow-up and taking care of this big personal event. So please bear with the latency in the coming days. Everything will be back to normal in February.
     
    @gabest To be honest we haven't done much experimenting with ZFS, but a couple of people reported good feedback and performance with ZoL on Helios4. Most of them were using mirror vdevs instead of raidz mode; you can find plenty of discussion on this topic, but it's an important factor in the case of Helios4. The other important factor for good ZFS perf on Helios4 is deduplication. Dedup is actually the main reason why ZFS needs so much memory, so with 2GB on the Helios4 you need to disable dedup if you want good perf.
    Regarding ZoL maturity / stability on 32-bit systems, I don't have much insight. I just know that starting with v0.7.0 some improvements were made for 32-bit stability, so for this reason it is recommended to use ZFS from stretch-backports (https://packages.debian.org/stretch-backports/zfsutils-linux).
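    Purely as an illustration (pool and device names are made up, not a tested Helios4 recipe), a mirror vdev pool with deduplication left off would look something like this:
    zpool create tank mirror /dev/sda /dev/sdb
    zfs set dedup=off tank    # dedup is off by default anyway; just don't turn it on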
     
    @djurny Actually we modified fancontrol on purpose in order to set the fan speed to 0 when fancontrol exits. This was to address the use case where someone powers down the system; in that case you don't want the fans to go full speed. But after the message from Harvey and my response below, I think there is a bit of a contradiction in our logic. Let me think about it and we might just revert fancontrol to its original behavior.
     
    @Harvey Helios4 power management is rather simple since it was designed for a system that runs 24/7 (or sits in IDLE/DEEP IDLE state if you want to use Wake-on-LAN). We didn't include a PMIC in our design to address the specific use case of a powered-off system. When you halt your system, the Helios4 SoC power domain will remain ON, and since there is no OS running anymore there is no more CPU Dynamic Frequency Scaling (DFS), so my guess is the SoC runs at its highest clock when the system is halted compared to idle. This would explain the difference in power consumption between System IDLE and System Powered-Off. However we will need to double check that.
     
    @djurny Hmm, very good point. When I was doing benchmarks during the early stage of the project, it didn't occur to me to check /proc/interrupts. Only later, when working on the CESA engine, did I figure out that checking the interrupts was the way to verify whether the engines were used to offload operations. It completely slipped my mind to do the same check again for the XOR engines. Well, thanks to you I can see my early assumption was wrong. We will need to investigate how to force the system to use the MV_XOR and how it would improve performance and/or system load.
  13. Like
    gprovost reacted to djurny in Helios4 - Cryptographic Engines And Security Accelerator (CESA) Benchmarking   
    @matzman666 Sorry, no measurements. From memory the numbers for raw read performance were way above 100MB/sec according to 'hdparm -t'. Currently my box is live, so no more testing with empty FSes on unencrypted devices for now. Perhaps someone else can help out?
     
    -[ edit ]-
    So my 2nd box is alive. The setup is slightly different and not yet complete.
     
    I quickly built cryptsetup 2.x from sources on Armbian, was not as tough as I expected - pretty straightforward: configure, correct & configure, correct & ...
     
    cryptsetup 2.x requires the following packages to be installed:
    uuid-dev libdevmapper-dev libpopt-dev pkg-config libgcrypt-dev libblkid-dev  
    Not sure about these ones, but I installed them anyway:
    libdmraid-dev libjson-c-dev  
    Build & install:
    Download cryptsetup 2.x via https://gitlab.com/cryptsetup/cryptsetup/tree/master.
    Unxz the tarball.
    ./configure --prefix=/usr/local
    make
    sudo make install
    sudo ldconfig
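    After the install it is worth confirming that the freshly built binary is the one being picked up (with --prefix=/usr/local it typically lands in /usr/local/sbin):
    /usr/local/sbin/cryptsetup --version
    which cryptsetup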
    Configuration:
    2x WD Blue 2TB HDDs.
    4KiB-sector-aligned GPT partitions.
    mdadm RAID6 (degraded).
    Test:
    Write 4GiB worth of zeroes: dd if=/dev/zero of=[dut] bs=4096 count=1048576 conv=fsync
    - directly to the mdadm device,
    - to a file on an XFS filesystem on top of an mdadm device,
    - directly to a LUKS2 device on top of an mdadm device (512B/4096B sector sizes for LUKS2),
    - to a file on an XFS filesystem on top of a LUKS2 device on top of an mdadm device (512B/4096B sector sizes for LUKS2).
    Results: 


    Caveat:
    - CPU load is high: >75%, due to mdadm using the CPU for parity calculations. If using the box as just a fileserver for a handful of clients, this should be no problem. But if more processing is done besides serving up files, e.g. transcoding or (desktop) applications, this might become problematic.
    - The RAID6 under test was in degraded mode. I don't have enough disks for a fully functional RAID6 array yet (no time to tear down my old server yet). Having a full RAID6 array might impact parity calculations and add 2x more disk I/O to the mix.
    I might consider re-encrypting the disks on my first box, to see if LUKS2 w/4KiB sectors will increase the SnapRAID performance over the LUKS(1) w/512B sectors. Currently it takes over 13 hours to scrub 50% on a 2-parity SnapRAID configuration holding less than 4TB of data.
     
    -[ update ]-
    Additional test:
    Write/read 4GiB worth of zeroes to a file on an XFS filesystem; tested on armbian/linux packages 5.73 (upgraded from 5.70):
    for i in {0..9} ; do
        time dd if=/dev/zero of=removeme.${i} bs=4096 count=$(( 4 * 1024 * 1024 * 1024 / 4096 )) conv=fsync
        dd if=removeme.$(( 9 - ${i} )) of=/dev/null bs=4096
    done 2>&1 | egrep '.*bytes.*copied.*'
    Results:

     
    The write throughput appears to be slightly higher, as now the XOR HW engine is being used - but it could just as well be measurement noise.
    CPU load is still quite high during this new test:
    %Cpu0 :  0.0 us, 97.5 sy,  1.9 ni,  0.3 id,  0.3 wa,  0.0 hi,  0.0 si,  0.0 st
    %Cpu1 :  0.3 us, 91.4 sy,  1.1 ni,  0.0 id,  0.0 wa,  0.0 hi,  7.2 si,  0.0 st
    <snip>
     176 root      20   0       0      0      0 R  60.9  0.0  14:59.42 md0_raid6
    1807 root      30  10    1392    376    328 R  45.7  0.0   0:10.97 dd
    9087 root       0 -20       0      0      0 D  28.5  0.0   1:16.53 kworker/u5:1
      34 root       0 -20       0      0      0 D  19.0  0.0   0:53.93 kworker/u5:0
     149 root     -51   0       0      0      0 S  14.4  0.0   3:19.35 irq/38-f1090000
     150 root     -51   0       0      0      0 S  13.6  0.0   3:12.89 irq/39-f1090000
    5567 root      20   0       0      0      0 S   9.5  0.0   1:09.40 dmcrypt_write
    <snip>

    Will update this again once the RAID6 array setup is complete.
     
    Groetjes,
     
     
  14. Like
    gprovost reacted to hatschi1000 in Helios4 Support   
    Talking about anecdotes: Some of you might remember the issue of booting my helios4 with 2 HGST Deskstar HDDs I had approx. half a year ago (https://forum.armbian.com/topic/6033-helios4-support/?do=findComment&comment=57981).

    After we weren't able to find a solution on why the specific problem appeared here on the forum, I got in contact with @gprovost directly and after some back and forth messaging he kindly asked for the defective board to be sent back to Singapore. Soon after I received a fixed board back with which the problem did not appear anymore.

    Right now I'm still tinkering around with my setup (2x2TB HDD in btrfs-RAID1 underneath OMV4 with (at some point at least) an automated offsite backup using btrfs-snapshots), without having had any significant problems - a big thumbs up to @gprovost and the entire Helios team for the straightforward communication and flawless exchange of the faulty carrier board.
  15. Like
    gprovost reacted to lanefu in Helios4 Support   
    just an anecdote.    I received my Helios4 (from second production run) on Christmas eve.. what a present!

    Anyway I've got it up and running with just the stable armbian stretch image with OMV running raid 5 on 3x3TB enterprise drives and the thing is just a work of art.   I wish we'd get more SBCs on the market like this thing... not everyone needs a TV box.
  16. Like
    gprovost reacted to Caribou in Helios 4 connection TV   
    Hello @gprovost Thanks for your answer. Then, for me it will be a Raspberry Pi with Raspbian + Kodi.
     
     
    Strongly the 3rd batch
  17. Like
    gprovost got a reaction from JakeK in Helios4 Support   
    We have been investigating the issue of the network interface that is sometimes not properly initialized (as reported by @JakeK) and we found the cause. During u-boot board initialization, the network PHY is supposed to be reset by toggling a GPIO (GPIO19). Unfortunately, in our u-boot implementation the PHY reset call was happening too early, when the SoC pin muxing wasn't completed yet, which means the GPIO pull-up and pull-down wasn't physically happening.
     
    We have added the fix to our u-boot repo  : https://github.com/helios-4/u-boot-marvell/commit/15c179624b28ddab7d212a0ef0571bcec91cf2ed
     
    @Igor Any chance you can trigger a build of just Helios4 u-boot and publish the u-boot .deb in the armbian repo ? This way everyone can easily get the fix by doing an upgrade. Thanks.
  18. Like
    gprovost got a reaction from tkaiser in Benchmarking CPUs   
    @tkaiser Here is the link to my latest benchmark of Helios4: http://ix.io/1jCy
     
    I get the following result for OpenSSL speed
    OpenSSL results:
    type             16 bytes     64 bytes    256 bytes    1024 bytes    8192 bytes   16384 bytes
    aes-128-cbc      1280.56k     5053.40k    18249.13k     52605.27k    102288.04k    109390.51k
    aes-128-cbc      1285.51k     5030.68k    18256.13k     53001.90k    100128.09k    109188.44k
    aes-192-cbc      1276.82k     4959.19k    18082.22k     51421.53k     96897.71k    103093.59k
    aes-192-cbc      1290.35k     4961.09k    17777.24k     51629.74k     95647.06k    102596.61k
    aes-256-cbc      1292.07k     5037.99k    17762.90k     50542.25k     92782.59k     98298.54k
    aes-256-cbc      1281.35k     5050.94k    17874.77k     49915.90k     93164.89k     98822.83k
    In order to leverage the hw crypto engine, I had no choice but to use the OpenSSL 1.1.1 lib (openssl-1.1.1-pre8), and I decided to use cryptodev-linux instead of AF_ALG since it gives me slightly better results (+5-10%).
     
    Here are a few findings regarding the OpenSSL engine implementations:
     
    As stated in the changelog
    Changes between 1.0.2h and 1.1.0 [25 Aug 2016]
    *) Added the AFALG engine. This is an async capable engine which is able to
       offload work to the Linux kernel. In this initial version it only supports
       AES128-CBC. The kernel must be version 4.1.0 or greater.
       [Catriona Lucey]

    So using the Debian Stretch package OpenSSL 1.1.0f, or any more recent 1.1.0 version, the only cipher supported by the AFALG engine was effectively AES-128-CBC:
    $> openssl engine -c
    (dynamic) Dynamic engine loading support
    (afalg) AFALG engine support
     [AES-128-CBC]
    Starting with OpenSSL 1.1.1, even though it is not mentioned anywhere in the changelog, AES-192-CBC and AES-256-CBC are also supported by the AFALG engine:
    $> openssl engine -c
    (dynamic) Dynamic engine loading support
    (afalg) AFALG engine support
     [AES-128-CBC, AES-192-CBC, AES-256-CBC]
    But one thing much more exciting about OpenSSL 1.1.1 is the following:
    Changes between 1.1.0h and 1.1.1 [xx XXX xxxx]
    *) Add devcrypto engine. This has been implemented against cryptodev-linux,
       then adjusted to work on FreeBSD 8.4 as well.
       Enable by configuring with 'enable-devcryptoeng'. This is done by default
       on BSD implementations, as cryptodev.h is assumed to exist on all of them.
       [Richard Levitte]

    So with 1.1.1 it is pretty straightforward to use cryptodev: no need to patch or configure anything in OpenSSL. OpenSSL will automatically detect whether the cryptodev module is loaded and will offload crypto operations to it if present:
    $> openssl engine -c
    (devcrypto) /dev/crypto engine
     [DES-CBC, DES-EDE3-CBC, BF-CBC, AES-128-CBC, AES-192-CBC, AES-256-CBC, AES-128-CTR, AES-192-CTR, AES-256-CTR, AES-128-ECB, AES-192-ECB, AES-256-ECB, CAMELLIA-128-CBC, CAMELLIA-192-CBC, CAMELLIA-256-CBC, MD5, SHA1]
    (dynamic) Dynamic engine loading support
    Based on this info, and assuming that sooner rather than later OpenSSL 1.1.1 will be available in Debian Stretch (via backports most probably), I think the best approach to add OpenSSL crypto engine support in Armbian is the cryptodev approach. This way we can support all the ciphers now. I will look at how to properly patch the openssl_1.1.0f-3+deb9u2 dpkg to activate cryptodev support. @zador.blood.stained maybe you have a different opinion on the topic?
     
     
  19. Like
    gprovost got a reaction from tkaiser in Benchmarking CPUs   
    @zador.blood.stained I think there isn't any distro OpenSSL package that is built with hardware engine support.
    Also, even if an engine is installed, OpenSSL doesn't use any engine by default; you need to configure it in openssl.cnf (see the sketch below).
    But you're right about cryptsetup (dm-crypt), it uses AF_ALG by default. I was wondering why there was so much delta between my 'cryptsetup benchmark' and 'openssl speed' tests on Helios4.
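    Roughly, enabling an engine by default in openssl.cnf looks like the snippet below; a sketch using the afalg engine as an example (the section names are conventional, only the key names matter, and the openssl_conf line has to sit near the top of the file before any section):
    openssl_conf = openssl_init

    [openssl_init]
    engines = engine_section

    [engine_section]
    afalg = afalg_section

    [afalg_section]
    engine_id = afalg
    default_algorithms = ALL
    init = 1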
     
    I just did a test by compiling openssl-1.1.1-pre8 with AF_ALG support (... enable-engine enable-afalgeng ...) and here are the benchmark results on Helios4:
     
    $> openssl speed -evp aes-xxx-cbc -engine afalg -elapsed
    type             16 bytes     64 bytes    256 bytes    1024 bytes    8192 bytes   16384 bytes
    aes-128-cbc       745.71k     3018.47k    11270.23k     36220.25k     90355.03k    101094.74k
    aes-256-cbc       739.49k     2964.93k    11085.23k     34178.05k     82597.21k     90461.53k
    The difference is quite interesting: with AF_ALG it performs much better on bigger block sizes, but poorly on very small block sizes.
     
    $> openssl speed -evp aes-xxx-cbc -elapsed
    type             16 bytes     64 bytes    256 bytes    1024 bytes    8192 bytes   16384 bytes
    aes-128-cbc     44795.07k    55274.84k    59076.27k     59920.04k     59719.68k     59353.77k
    aes-256-cbc     34264.93k    40524.42k    42168.92k     42496.68k     42535.59k     42500.10k
    System : Linux helios4 4.14.57-mvebu #2 SMP Tue Jul 24 08:29:55 UTC 2018 armv7l GNU/Linux
     
    Supposedly you can get even better perf with cryptodev, but I think the Crypto API AF_ALG is more elegant and easier to set up.
     
    Once I have a cleaner install of this AF_ALG (or cryptodev) setup, I will run sbc_bench and send you ( @tkaiser ) the output.
  20. Like
    gprovost reacted to Igor in Helios4 Second Campaign is ON !   
    And updated images with many bugfixes.
     
     
  21. Like
    gprovost got a reaction from iNTiGOD in Helios4 Support   
    @iNTiGOD which build / image version are you running?
    Maybe you can refer to the following blog post to check if the files on your system show the correct settings.
  22. Like
    gprovost got a reaction from chwe in Helios4 Support   
    @nemo19 Ok, no need to look any further, you found the issue. There should be a thermal pad between the CPU and the heatsink, as shown below. Without the thermal pad no proper heat transfer can happen, so the CPU might have exceeded the maximum junction temperature (115C), causing it to become unstable and crash. I'm really sorry about this missing thermal pad; this should definitely not have happened. I will report / complain to the company that handled the board assembly for us.
     
    FYI the thermal pad dimension we are using is 20x20x1mm.
     
    Please send me your complete shipping address by private message and I will send you the missing thermal pad. In the meantime you can try using thermal paste, even though the gap between the CPU and the heatsink is a bit too big for thermal paste.
     

  23. Like
    gprovost reacted to nemo19 in Helios4 Support   
    I received and mounted the thermal pad from SolidRun on Tuesday. My Helios4 now runs cooler and quieter under maximum load than when idling before. It's been running for 1 day 16 hours at 70% to 100% cpu load hashing files, staying below 80C and without crashing.
     
    If it keeps going like this I'll be very happy. Thank you for the great board and the great service!
  24. Like
    gprovost got a reaction from chwe in Helios4 Support   
    @bigheart Does the serial issue you described happen every time after running overnight?
     
    The Helios4 board has an onboard FTDI USB-to-UART bridge (FT230X), so you don't need an RS232 port on your computer or a USB-to-serial converter.
    The FT230X is actually powered by the USB port of the computer that you connect to the Helios4 micro-USB console port, so even if the Helios4 board is not powered on, the FTDI USB-to-UART bridge will still come to life and should be detected by your computer.
     
    Assuming your computer is a Linux machine, as soon as you connect the Helios4 console port you should see the following in your kernel messages
    $> dmesg -w
    [...]
    [684438.411938] usb 1-2: new full-speed USB device number 18 using xhci_hcd
    [684438.547622] usb 1-2: New USB device found, idVendor=0403, idProduct=6015
    [684438.547631] usb 1-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
    [684438.547636] usb 1-2: Product: FT230X Basic UART
    [684438.547639] usb 1-2: Manufacturer: FTDI
    [684438.547643] usb 1-2: SerialNumber: DN00KDNN
    [684438.551627] ftdi_sio 1-2:1.0: FTDI USB Serial Device converter detected
    [684438.551703] usb 1-2: Detected FT-X
    [684438.552290] usb 1-2: FTDI USB Serial Device converter now attached to ttyUSB0
    If you unplug you should see the following USB disconnect message
    [684398.900050] usb 1-1: USB disconnect, device number 17
    [684398.900432] ftdi_sio ttyUSB0: FTDI USB Serial Device converter now disconnected from ttyUSB0
    [684398.900463] ftdi_sio 1-1:1.0: device disconnected
    So when the serial issue you described happens again, you should try to unplug / replug the console port and see whether the above messages appear in your kernel messages.
     
    If the expected kernel messages appear but you still can't access the shell console of the Linux running on your Helios4, then the issue is somewhere other than the onboard FTDI USB-to-UART bridge (unless there is a very unlikely issue with the dual-buffer IC between the FTDI chip and the SoM). In this case you should post the full dmesg output of your Helios4 when the issue appears (use https://pastebin.com/ to share the output with us).
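    Once ttyUSB0 is attached you can open the console with any serial terminal, for example (115200 8N1 is the usual Helios4/Armbian console setting, adjust if your setup differs):
    picocom -b 115200 /dev/ttyUSB0
    # or
    screen /dev/ttyUSB0 115200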
  25. Like
    gprovost reacted to tkaiser in Support of Helios4 - Intro   
    Good news: @zador.blood.stained recently imported (cherry-picked) the vendor-provided software support for Helios4 into our build system. I built an OMV image on my host and it seems to work out of the box: https://github.com/armbian/build/pull/812#issuecomment-342006038 (though there are some minor issues present that we can now focus on).