
gprovost

Reputation Activity

  1. Like
    gprovost got a reaction from lanefu in Helios4 Support   
    Armbian Buster is ready, but we are still working on upgrading the Linux kernel and U-Boot for the Default and Next Armbian branches, making the upcoming release a major Armbian release for Helios4. New OS images and Debian packages should be published within one to two weeks at most.
  2. Like
    gprovost reacted to lanefu in [RFC 001] Changes for boards and features implementing   
    I've made it easier to apply RFC tags on the forum itself.  That will make it easier to search.

    A slight tweak: instead of assigning a [00x] sequence number to an "approved" RFC, I'm going to give RFCs the issue number generated by GitHub.
  3. Like
    gprovost reacted to aprayoga in Helios4 Support   
    @MarcC It is possible to write U-Boot directly to a SATA disk, but currently no U-Boot image for SATA is available. AFAIK the procedure is a bit different and more similar to a PC: write the U-Boot SPL to the disk's boot sector, then put u-boot.bin onto a FAT-formatted first partition.
    We are still experimenting with this and cannot say when it will be ready.
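    For illustration only, a rough sketch of that PC-like layout -- the SPL image name, the target disk (/dev/sda) and the offsets are assumptions here, since no SATA U-Boot image exists yet for Helios4:
    # Hypothetical example: write a SATA-capable SPL into the boot sector area,
    # then place u-boot.bin on a FAT-formatted first partition.
    dd if=u-boot-spl.bin of=/dev/sda bs=512 seek=1 conv=fsync
    mkfs.vfat /dev/sda1
    mount /dev/sda1 /mnt && cp u-boot.bin /mnt/ && umount /mnt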
     
  4. Like
    gprovost reacted to aprayoga in Helios4 Support   
    Hi,
    Yes, you can see the boot process over serial; you can follow the instructions in our wiki. Once the serial terminal is ready, you can press the reset button to see the boot process from the very beginning.
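    For example, from a Linux host (a sketch only -- the device node and the 115200 baud rate are assumptions, check the wiki for the actual serial settings):
    # Open the serial console; the board may enumerate under a different ttyUSB node.
    picocom -b 115200 /dev/ttyUSB0
    # or: screen /dev/ttyUSB0 115200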
    You should see something like:
    U-Boot SPL 2018.11-armbian (Jan 20 2019 - 20:02:45 -0800)
    High speed PHY - Version: 2.0
    Detected Device ID 6828
    board SerDes lanes topology details:
     | Lane # | Speed |    Type     |
     --------------------------------
     |   0    |   6   |    SATA0    |
     |   1    |   5   | USB3 HOST0  |
     |   2    |   6   |    SATA1    |
     |   3    |   6   |    SATA3    |
     |   4    |   6   |    SATA2    |
     |   5    |   5   | USB3 HOST1  |
     --------------------------------
    High speed PHY - Ended Successfully
    mv_ddr: mv_ddr-armada-17.10.4
    DDR3 Training Sequence - Switching XBAR Window to FastPath Window
    DDR Training Sequence - Start scrubbing
    DDR3 Training Sequence - End scrubbing
    mv_ddr: completed successfully
    Trying to boot from MMC1

    U-Boot 2018.11-armbian (Jan 20 2019 - 20:02:45 -0800)

    SoC:   MV88F6828-A0 at 1600 MHz
    DRAM:  2 GiB (800 MHz, 32-bit, ECC enabled)
    MMC:   mv_sdh: 0
    Loading Environment from MMC... *** Warning - bad CRC, using default environment
    Model: Helios4
    Board: Helios4
    SCSI:  MVEBU SATA INIT
    Target spinup took 0 ms.
    SATA link 1 timeout.
    AHCI 0001.0000 32 slots 2 ports 6 Gbps 0x3 impl SATA mode
    flags: 64bit ncq led only pmp fbss pio slum part sxs
    Net:   Warning: ethernet@70000 (eth1) using random MAC address - 52:fc:90:b3:be:70
    eth1: ethernet@70000
    Hit any key to stop autoboot:  3 2 1 0
    switch to partitions #0, OK
    mmc0 is current device
    Scanning mmc 0:1...
    Found U-Boot script /boot/boot.scr
    1979 bytes read in 104 ms (18.6 KiB/s)
    ## Executing script at 03000000
    Boot script loaded from mmc
    220 bytes read in 86 ms (2 KiB/s)
    19717 bytes read in 353 ms (53.7 KiB/s)
    4714605 bytes read in 852 ms (5.3 MiB/s)
    5460016 bytes read in 1037 ms (5 MiB/s)
    ## Loading init Ramdisk from Legacy Image at 02880000 ...
       Image Name:   uInitrd
       Created:      2019-02-07  11:42:01 UTC
       Image Type:   ARM Linux RAMDisk Image (gzip compressed)
       Data Size:    4714541 Bytes = 4.5 MiB
       Load Address: 00000000
       Entry Point:  00000000
       Verifying Checksum ... OK
    ## Flattened Device Tree blob at 02040000
       Booting using the fdt blob at 0x2040000
       Using Device Tree in place at 02040000, end 02047d04

    Starting kernel ...

    But I think you won't see that screen.
    LED8 near the power inlet indicates the input voltage; if it blinks (nothing fancy here, just an LED connected to 12V  3.3V), then it's a hardware problem, most probably the power adapter.
     
    Do you have a voltmeter? Could you measure the voltage on these pins

     
    to determine whether it's a power adapter or an on-board regulator failure.
  5. Like
    gprovost reacted to lanefu in [RFC 001] Changes for boards and features implementing   
    @igor I really would like to capture all the tasks for refactoring the build scripts and track them as a project on GitHub.

    To your point, if the tv boxes branch is no longer solving problems, then we may not want to invest effort into that branch


    Sent from my iPad using Tapatalk
  6. Like
    gprovost got a reaction from lanefu in [RFC 001] Changes for boards and features implementing   
    Can we have a status update on this RFC? What's the plan / roadmap?
  7. Like
    gprovost reacted to lanefu in Helios4 Support   
    Have you created the filesystem and the share, added the share to a service (SMB or NFS), and then enabled the service? (It's kind of a long chain.)
     

    Can you run armbian-monitor -u and share the link?
  8. Like
    gprovost reacted to lanefu in Helios4 Support   
    The Helios4 will keep up with that just fine. DLNA might be another option to consider over Samba.


    Sent from my iPad using Tapatalk
  9. Like
    gprovost got a reaction from lanefu in Helios4 Support   
    What you describe concerns people mounting, on a 64-bit kernel, a file system created on a 32-bit kernel.
     
    This is not the issue we are talking about. Our limitation is simply that you cannot address a file system > 16TB with a 32-bit kernel because of the page cache limitation. I'm not aware of a possible workaround, even with file systems other than EXT.
     
    The only solution is to use a FUSE union file system to merge two or more partitions into one big volume.
    MergerFS : https://github.com/trapexit/mergerfs (It's available as a Debian package.)
     
    @9a3eedi @fpabernard You guys don't want to give MergerFS a try?
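    For reference, a minimal MergerFS sketch -- the branch paths /mnt/disk1 and /mnt/disk2 are examples only, each branch staying below the 16TB limit while the pool presents a single large volume:
    apt-get install mergerfs
    mkdir -p /mnt/storage
    # Pool the two branches into one mount point
    mergerfs -o defaults,allow_other /mnt/disk1:/mnt/disk2 /mnt/storage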
     
     
     
  10. Like
    gprovost got a reaction from djurny in Helios4 - Cryptographic Engines And Security Accelerator (CESA) Benchmarking   
    Just for info, I updated the libssl1.0.2 cryptodev patch to work with the latest libssl1.0.2 Debian deb (libssl1.0.2_1.0.2r).
     
    https://wiki.kobol.io/cesa/#network-application-encryption-acceleration
     
    So some of you may want to test offloading crypto operations while using apache2 or ssh, for example ;-) You can also find a pre-built dpkg here if you want to skip the compile steps.
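    As a rough sanity check that crypto operations really hit the CESA engines (a sketch only -- interrupt names are kernel-dependent, and depending on how the patch registers the engine you may need to pass -engine cryptodev explicitly):
    modprobe cryptodev
    grep -Ei 'cesa|crypto' /proc/interrupts      # note the counters
    openssl speed -evp aes-128-cbc               # exercise the patched libssl
    grep -Ei 'cesa|crypto' /proc/interrupts      # counters should have increased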
  11. Like
    gprovost reacted to jimandroidpc in Helios4 Support   
    I checked a new encryption and it still says XTS, so from what I can tell it's not applied. I may just wait; thanks for looking into it. If I can help I'd be happy to, but I'm learning as I go.
  12. Like
    gprovost got a reaction from lanefu in Helios4 Support   
    Hahah, cool to hear that. The idea of Helios4 was to fulfill one specific scope: NAS. There are a lot of awesome boards out there, but often they are trying to do too many things.
    For instance, if we do a refresh of Helios4 with a new SoC that has some display output, I'm pretty sure we would decide not to expose it... in order to stick to a pure headless NAS server concept.
    Well, we hope to carry on down this path of "disruptive" DIY NAS solutions with future projects, but it's not tomorrow that we will steal market share from entry-level proprietary NAS vendors.
  13. Like
    gprovost reacted to djurny in Helios4 - Cryptographic Engines And Security Accelerator (CESA) Benchmarking   
    L.S.,
    A quick update on the anecdotal performance of LUKS2 over LUKS:
     
    md5sum'ing ~1 TiB of datafiles on LUKS:
    avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
               0.15  20.01    59.48     2.60    0.00  17.76

    Device:   tps     MB_read/s  MB_wrtn/s  MB_read  MB_wrtn
    sdb       339.20      84.80       0.00      848        0
    dm-2      339.20      84.80       0.00      848        0

    md5sum'ing ~1 TiB of datafiles on LUKS2:
    avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
               0.05  32.37    36.32     0.75    0.00  30.52

    Device:   tps     MB_read/s  MB_wrtn/s  MB_read  MB_wrtn
    sdd       532.70     133.18       0.00     1331        0
    dm-0      532.80     133.20       0.00     1332        0

    sdb:
      sdb1 optimally aligned using parted. LUKS(1) w/aes-cbc-essiv:256, w/256-bit key size. XFS with 4096-byte sector size. xfs_fsr'd regularly, negligible file fragmentation.
    sdd:
      sdd1 optimally aligned using parted. LUKS2 w/aes-cbc-essiv:256, w/256-bit key size and w/4096-byte sector size, as @matzman666 suggested. XFS with 4096-byte sector size.
    Content-wise: sdd1 is a file-based copy of sdb1 (about to wrap up the migration from LUKS(1) to LUKS2).
     
    Overall a very nice improvement!
     
    Groetjes,
     
    p.s. Not sure if it added to the performance, but I also spread out the IRQ assignments over both CPUs, making sure that each CESA and XOR engine has its own CPU. Originally I saw that all IRQs were handled by one and the same CPU. For reasons yet unclear, irqbalance refused to dynamically reallocate the IRQs over the CPUs. Perhaps the algorithm used by irqbalance does not apply well to ARM or the Armada SoC (my initial assumption is that the cpumask being reported as '00000002' causes irqbalance to only balance onto CPU1?).
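    For anyone wanting to do the same by hand, a minimal sketch -- the IRQ numbers below are examples only, check /proc/interrupts for the actual CESA/XOR lines on your system:
    grep -Ei 'cesa|crypto|xor' /proc/interrupts
    echo 1 > /proc/irq/38/smp_affinity    # pin first CESA engine to CPU0 (bitmask 0x1)
    echo 2 > /proc/irq/39/smp_affinity    # pin second CESA engine to CPU1 (bitmask 0x2)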
  14. Like
    gprovost reacted to Heisath in Helios4 Support   
    One other thing to check whether the disks are faulty or just doing the clicking: connect them to another computer and listen there.
     
     
  15. Like
    gprovost reacted to tuxd3v in Helios4 Support   
    Hello,
    Here is the link to the Seagate SeaChest Utilities documentation:
    Seagate SeaChest Utilities
     
    In case anyone has Seagate IronWolfs or other Seagate disks.
     
    I have been playing around with these tools.
    With these base settings:
    # Enable Power Up In Standby
    # It's a persistent feature
    # No power cycle required
    ${SEA_PATH}/openSeaChest_Configure --device /dev/sda --puisFeature enable
    ${SEA_PATH}/openSeaChest_Configure --device /dev/sdb --puisFeature enable
    ${SEA_PATH}/openSeaChest_Configure --device /dev/sdc --puisFeature enable

    # Enable Low Current Spinup
    # It's a persistent feature
    # Requires a power cycle
    ${SEA_PATH}/openSeaChest_Configure --device /dev/sda --lowCurrentSpinup enable
    ${SEA_PATH}/openSeaChest_Configure --device /dev/sdb --lowCurrentSpinup enable
    ${SEA_PATH}/openSeaChest_Configure --device /dev/sdc --lowCurrentSpinup enable

    Unfortunately the FreeFall feature is NOT supported, at least on my IronWolf 4TBs,
    which would be a must to control acceleration levels... (I already killed a WD Red 3TB with a free fall of 1 meter or so.)
    The feature could be present but doesn't permit changing levels, at least it seems so:
    root@helios4:~# openSeaChest_Configure --device /dev/sda --freeFall info | grep -E "dev|Free"
    /dev/sg0 - ST4000VN008-XXXXXX - XXXXXXXX - ATA
    Free Fall control feature is not supported on this device.
    Could it be related to the last release of the SeaChest tools? I don't know.
    If someone knows how to tune acceleration levels, I'd appreciate it.
     
    In the mean time,
    I made some basic tests with the Seagate IronWolf 4TB ST4000VN008 (3 disks):
    APM mode vs power consumption (power measured at the wall, so it includes the Helios4's own power consumption).
     
    1) With CPU idle at 800 MHz, and with disks at minimum APM power mode 1:
        Power consumption is between 4.5-7 watts, maybe averaging around 6-6.5 watts.
        1.1) When issuing a 'hdparm -Tt' on all disks,
               power peaks at 30 watts max, but stays on average at around 15-25 watts.
        1.2) Then it goes back down to around 6-6.5 watts (mode 1).

    2) With CPU idle at 800 MHz, and with disks at APM power mode 127:
        Power consumption average is around 6-6.5 watts (changing level from mode 1 to 127).
        2.1) When issuing a 'hdparm -Tt' on all disks,
               power peaks at 34 watts, but sits around 13-25 watts, with the majority of time between 17-23 watts.
        2.2) Then it re-enters mode 1.

    3) With CPU idle at 800 MHz, and with disks at APM power mode 128:
        Power consumption average is around 6-6.5 watts (changing level from mode 1 to 128).
        3.1) When issuing a 'hdparm -Tt' on all disks,
              power peaks at 34 watts, but sits around 13-25 watts, with the majority of time between 15-23 watts.
        3.2) Then it goes down to around 15-17 watts for some time, the majority of time ~16.5 watts.
              I haven't checked its final state as I only monitored for some 10-15 minutes.

    4) With CPU idle at 800 MHz, and with disks at APM power mode 254 (max performance):
        Power consumption average is around 6-6.5 watts (changing level from mode 1 to 254).
        4.1) When issuing a 'hdparm -Tt' on all disks,
               power peaks at ~34 watts, but sits around 13-28 watts, with the majority of time between 17-23 watts.
        4.2) Then it bounces between 13-20 watts for some time, maybe with an average around 19 watts.
               I haven't checked its final state as I only monitored for some 10-15 minutes; I assume they enter mode 128.
     
    In my opinion, power mode 127 is the best fit for power consumption vs performance.
    It penalizes performance a bit, but all in all it seems the best compromise, with the bonus of ~6.5 watts at idle.
     
    So I issue on reboot or startup:
    # Enable APM level 127
    ${SEA_PATH}/openSeaChest_PowerControl --device /dev/sda --setAPMLevel 127
    ${SEA_PATH}/openSeaChest_PowerControl --device /dev/sdb --setAPMLevel 127
    ${SEA_PATH}/openSeaChest_PowerControl --device /dev/sdc --setAPMLevel 127
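    To re-apply this after every reboot, one option is a small wrapper script called from /etc/rc.local or a oneshot systemd unit; a sketch only, with hypothetical paths:
    #!/bin/sh
    # /usr/local/sbin/set-disk-apm.sh -- hypothetical location, sketch only.
    SEA_PATH=/usr/local/sbin    # assumption: where the openSeaChest tools were installed
    for dev in /dev/sda /dev/sdb /dev/sdc; do
        "${SEA_PATH}/openSeaChest_PowerControl" --device "${dev}" --setAPMLevel 127
    done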
  16. Like
    gprovost reacted to Harvey in Helios4 Support   
    I have updated my lcd4linux.conf for a cheap AX206 picture frame (320x240) connected to my Helios4.
    If someone is interested: https://github.com/hkramski/my-lcd4linux-conf.
     
     
  17. Like
    gprovost reacted to Igor in Helios4 Support   
    Do that - congrats! I think we have the problem under control now.
  18. Like
    gprovost reacted to Igor in Helios4 Support   
    I can check in a day or two; right now I have family/life troubles. Images are done:
    https://dl.armbian.com/helios4/Debian_stretch_next.7z.torrent
    https://dl.armbian.com/helios4/Ubuntu_bionic_next.7z.torrent
  19. Like
    gprovost got a reaction from lanefu in Helios4 Support   
    Let me explain in a bit more detail the issue that was experienced by @integros and @lanefu, and that could happen to anyone who did a fresh install using release 5.67 and then attempted a system upgrade.
     
    This boot issue happens because the new u-boot package (5.70) no longer automatically updates the U-Boot bootloader binary on the target, while the BSP package (aka linux-****-root-next-helios4_5.**_armhf.deb) still updates the bootscript. This creates an incompatibility between bootloader and bootscript that we already experienced in the past following the upgrade from U-Boot 2013 to U-Boot 2018. We did address this incompatible use case previously, but we didn't foresee it happening again because of the recent decision that the u-boot package shouldn't be updated automatically anymore.
     
    We unfortunately created a tricky situation and we need to find a solution that works for everyone, including people running release 5.67; they should be able to upgrade without being aware of this issue. In the meantime:
     
    To everybody: when you do a fresh install, please don't use version 5.67 from the Armbian website, but use our latest image build (5.68) that is available on our wiki here, until a new image build is done by Armbian.
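    For completeness, manually refreshing the bootloader on the SD card would look roughly like the sketch below; the package path, image file name and sector offset are assumptions, not a confirmed procedure, so verify against the Armbian board support scripts before writing anything:
    # Sketch only -- the u-boot.bin image name is hypothetical, check what the package ships.
    UBOOT_DIR=$(ls -d /usr/lib/linux-u-boot-next-helios4_* | head -n1)
    dd if="${UBOOT_DIR}/u-boot.bin" of=/dev/mmcblk0 bs=512 seek=1 conv=fsync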
     
  20. Like
    gprovost got a reaction from lanefu in Helios4 Support   
    Sorry guys for the late reply. I'm getting married soon and I'm pretty overloaded between Helios4 delivery follow-up and taking care of this big personal event. So please bear with the latency in the coming days. Everything will be back to normal in February.
     
    @gabest To be honest we haven't done much experimenting with ZFS, but we have a couple of people who reported good feedback and performance with ZoL on Helios4. Most of them were using mirror vdevs instead of raidz mode; you can find plenty of discussion on this topic, but it's an important factor in the case of Helios4. The other important factor for good ZFS performance on Helios4 is deduplication. Actually, dedup is the main reason why ZFS needs so much memory, so with the 2GB in Helios4 you need to disable dedup if you want good performance.
    Regarding ZoL maturity / stability on 32-bit systems, I don't have much insight. I just know that starting with v0.7.0 some improvements were made for 32-bit stability, so for this reason it is recommended to use ZFS from stretch-backports (https://packages.debian.org/stretch-backports/zfsutils-linux).
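    As an illustration of that advice, a minimal sketch (pool and device names are examples only; dedup is already off by default, this just makes it explicit):
    apt-get -t stretch-backports install zfsutils-linux
    zpool create tank mirror /dev/sda /dev/sdb    # example mirror vdev
    zfs set dedup=off tank
    zfs get dedup tank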
     
    @djurny Actually we modified fancontrol on purpose in order to set the fan speed to 0 when fancontrol exits. This was to address the use case where someone powers down the system; in that case you don't want the fans to go full speed. But after the message from Harvey and my response below, I think there is a bit of a contradiction in our logic. Let me think about this and we might just revert fancontrol to its original behavior.
     
    @Harvey Helios4 power management is rather simple, since it was designed for a system that is running 24/7 (or in IDLE/DEEP IDLE state if you want to use Wake-on-LAN). We didn't include a PMIC in our design to address this specific use case of a powered-off system. When you halt your system, the Helios4 SoC power domain remains ON, and since there is no OS running anymore there is no more CPU Dynamic Frequency Scaling (DFS), so my guess is the SoC is running at its highest clock when the system is halted, compared to idle. This would explain the difference in power consumption between System IDLE and System Powered-Off. However, we will need to double-check that.
     
    @djurny Hmm, very good point. When I was doing benchmarks during the early stage of the project, it didn't occur to me to check /proc/interrupts. Only later, when working on the CESA engine, did I figure out that checking the interrupts was the way to verify whether the engines were being used to offload operations. It completely slipped my mind to do the same check again for the XOR engines. Well, thanks to you, I can see my early assumption was wrong. We will need to investigate how to force the system to use MV_XOR and how it would improve performance and/or system load.
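    A quick way to watch whether the Marvell XOR engines take interrupts during parity work (a sketch only -- interrupt names are kernel-dependent and md0 is just an example device):
    watch -n1 "grep -i xor /proc/interrupts"
    # in another shell, trigger some parity work:
    echo check > /sys/block/md0/md/sync_action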
  21. Like
    gprovost reacted to djurny in Helios4 - Cryptographic Engines And Security Accelerator (CESA) Benchmarking   
    @matzman666 Sorry, no measurements. From memory the numbers for raw read performance were way above 100MB/sec according to 'hdparm -t'. Currently my box is live, so no more testing with empty FSes on unencrypted devices for now. Perhaps someone else can help out?
     
    -[ edit ]-
    So my 2nd box is alive. The setup is slightly different and not yet complete.
     
    I quickly built cryptsetup 2.x from sources on Armbian; it was not as tough as I expected - pretty straightforward: configure, correct & configure, correct & ...
     
    cryptsetup 2.x requires the following packages to be installed:
    uuid-dev libdevmapper-dev libpopt-dev pkg-config libgcrypt-dev libblkid-dev  
    Not sure about these ones, but I installed them anyway:
    libdmraid-dev libjson-c-dev  
    Build & install:
    Download cryptsetup 2.x via https://gitlab.com/cryptsetup/cryptsetup/tree/master.
    Unxz the tarball.
    ./configure --prefix=/usr/local
    make
    sudo make install
    sudo ldconfig
    Configuration:
    2x WD Blue 2TB HDDs. 4KiB sector aligned GPT partitions. mdadm RAID6 (degraded).  
    Test:
    Write 4GiB worth of zeroes: dd if=/dev/zero of=[dut] bs=4096 count=1048576 conv=fsync
      - directly to the mdadm device.
      - to a file on an XFS filesystem on top of an mdadm device.
      - directly to a LUKS2 device on top of an mdadm device (512B/4096B sector sizes for LUKS2).
      - to a file on an XFS filesystem on top of a LUKS2 device on top of an mdadm device (512B/4096B sector sizes for LUKS2).
    Results: 


    Caveat:
      - CPU load is high: >75% due to mdadm using the CPU for parity calculations. If using the box as just a fileserver for a handful of clients, this should be no problem. But if more processing is done besides serving up files, e.g. transcoding or (desktop) applications, this might become problematic.
      - RAID6 under test was in degraded mode. I don't have enough disks to have a fully functional RAID6 array yet (no time to tear down my old server yet). Having a full RAID6 array might impact parity calculations and add 2x more disk I/O to the mix.
    I might consider re-encrypting the disks on my first box, to see if LUKS2 w/4KiB sectors will increase the SnapRAID performance over the LUKS(1) w/512B sectors. Currently it takes over 13 hours to scrub 50% on a 2-parity SnapRAID configuration holding less than 4TB of data.
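    For anyone wanting to try the same, a minimal sketch of creating a 4KiB-sector LUKS2 container (the device name /dev/md0 is an example only, and luksFormat destroys existing data):
    cryptsetup luksFormat --type luks2 --sector-size 4096 /dev/md0
    cryptsetup open /dev/md0 cryptdata
    mkfs.xfs -s size=4096 /dev/mapper/cryptdata   # XFS with matching 4096-byte sectors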
     
    -[ update ]-
    Additional test:
    Write/read 4GiB worth of zeroes to a file on an XFS filesystem, tested on armbian/linux packages 5.73 (upgraded from 5.70):
    for i in {0..9} ; do
        time dd if=/dev/zero of=removeme.${i} bs=4096 count=$(( 4 * 1024 * 1024 * 1024 / 4096 )) conv=fsync
        dd if=removeme.$(( 9 - ${i} )) of=/dev/null bs=4096
    done 2>&1 | egrep '.*bytes.*copied.*'
    Results:

     
    The write throughput appears to be slightly higher, as now the XOR HW engine is being used - but it could just as well be measurement noise.
    CPU load is still quite high during this new test:
    %Cpu0 :  0.0 us, 97.5 sy,  1.9 ni,  0.3 id,  0.3 wa,  0.0 hi,  0.0 si,  0.0 st
    %Cpu1 :  0.3 us, 91.4 sy,  1.1 ni,  0.0 id,  0.0 wa,  0.0 hi,  7.2 si,  0.0 st
    <snip>
     176 root      20   0       0      0      0 R  60.9  0.0  14:59.42 md0_raid6
    1807 root      30  10    1392    376    328 R  45.7  0.0   0:10.97 dd
    9087 root       0 -20       0      0      0 D  28.5  0.0   1:16.53 kworker/u5:1
      34 root       0 -20       0      0      0 D  19.0  0.0   0:53.93 kworker/u5:0
     149 root     -51   0       0      0      0 S  14.4  0.0   3:19.35 irq/38-f1090000
     150 root     -51   0       0      0      0 S  13.6  0.0   3:12.89 irq/39-f1090000
    5567 root      20   0       0      0      0 S   9.5  0.0   1:09.40 dmcrypt_write
    <snip>

    Will update this again once the RAID6 array setup is complete.
     
    Groetjes,
     
     
  22. Like
    gprovost reacted to hatschi1000 in Helios4 Support   
    Talking about anecdotes: Some of you might remember the issue of booting my helios4 with 2 HGST Deskstar HDDs I had approx. half a year ago (https://forum.armbian.com/topic/6033-helios4-support/?do=findComment&comment=57981).

    After we weren't able to find a solution on why the specific problem appeared here on the forum, I got in contact with @gprovost directly and after some back and forth messaging he kindly asked for the defective board to be sent back to Singapore. Soon after I received a fixed board back with which the problem did not appear anymore.

    Right now I'm still tinkering with my setup (2x2TB HDD in btrfs RAID1 underneath OMV4, with (at some point at least) an automated offsite backup using btrfs snapshots), without having had any significant problems - a big thumbs up to @gprovost and the entire Helios team for the straightforward communication and flawless exchange of the faulty carrier board.
  23. Like
    gprovost reacted to lanefu in Helios4 Support   
    Just an anecdote: I received my Helios4 (from the second production run) on Christmas Eve... what a present!

    Anyway, I've got it up and running with just the stable Armbian Stretch image, with OMV running RAID 5 on 3x3TB enterprise drives, and the thing is just a work of art. I wish we'd get more SBCs on the market like this thing... not everyone needs a TV box.
  24. Like
    gprovost reacted to Caribou in Helios 4 connection TV   
    Hello @gprovost, thanks for your answer. Then for me it will be a Raspberry Pi with Raspbian + Kodi.
     
     
    Strongly the 3rd batch
  25. Like
    gprovost got a reaction from JakeK in Helios4 Support   
    We have been investigating the issue of the network interface that is sometimes not properly initialized (as reported by @JakeK) and we found the cause. During U-Boot board initialization, the network PHY is supposed to be reset by toggling a GPIO (GPIO19). Unfortunately, in our U-Boot implementation the PHY reset call was happening too early, before the SoC pin muxing was completed, which means the GPIO pull-up and pull-down weren't physically happening.
     
    We have added the fix to our u-boot repo  : https://github.com/helios-4/u-boot-marvell/commit/15c179624b28ddab7d212a0ef0571bcec91cf2ed
     
    @Igor Any chance you can trigger a build of just the Helios4 u-boot and publish the u-boot .deb in the Armbian repo? This way everyone can easily get the fix by doing an upgrade. Thanks.