
gprovost

Members
  • Posts: 580

Reputation Activity

  1. Like
    gprovost reacted to allen--smithee in Very noisy fans   
    Hi @bunducafe,
    In order of price, time and efficiency:
    a) You can play with the parameters of fancontrol to find a passive/active fan compromise, and limit the rotational speed and therefore the noise of the fans (/etc/fancontrol).
    b) You can also lower the frequency of the processor to reduce its TDP (armbian-config).
    c) I do not know the specifications of the thermal bridge (W/mK and thickness) between the CPU and the heatsink; maybe it can be replaced to improve the thermal conductivity between them (<10 euros).
    d) New fans (>30 euros)
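    For illustration, the kind of lines option a) refers to in /etc/fancontrol might look like the excerpt below; the hwmon path, thresholds and PWM values are only placeholders and must be matched against your existing file:
        # /etc/fancontrol (excerpt) - raise the thresholds so the fans stay passive
        # longer and cap the maximum duty cycle; hwmon paths differ per board
        INTERVAL=10
        MINTEMP=hwmon3/pwm1=50
        MAXTEMP=hwmon3/pwm1=70
        MINPWM=hwmon3/pwm1=0
        MAXPWM=hwmon3/pwm1=180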
     
     
  2. Like
    gprovost reacted to tionebrr in Mystery red light on helios64   
    @gprovost It still crashes, but only after a lot more time (days). It may be a separate issue because those crashes do not end in a blinking red LED. I even suspect that the system is still partially up when it happens, but I haven't verified. I found an old RPi 1 in a drawer yesterday, so I will be able to tell more the next time it happens.
  3. Like
    gprovost reacted to lalaw in USB-C DAS (Is it or is it NOT supported?)   
    Documenting my full process:
     
    1. Put helios64 into recovery mode.
    2. Use BalenaEtcher to write Armbian_20.11.10_Helios64_buster_current_5.9.14.img.xz to eMMC.  Reboot
    3. Go through 1st login on helios64, set up username/password.  Reboot
    4. Go through armbian-config to install OMV.  Reboot
    5. Per 25-NOV post, created dwc3-0-device.dts and added it as an overlay (see the overlay note at the end of this post).  Reboot
     
     
    6. Login to OMV. Setup RAID 10 array.  
     
    ... To be updated after array is synced.
     
    Side note: I remembered that I have a 5th drive that is not part of the array that I could try to mount.
    Unfortunately, no device was detected on the Mac.
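    For reference, a user overlay like the one in step 5 can typically be compiled and registered on Armbian with armbian-add-overlay (the content of dwc3-0-device.dts itself comes from the 25-NOV post and is not reproduced here):
        sudo armbian-add-overlay dwc3-0-device.dts   # compiles the .dts and registers it in /boot/armbianEnv.txt
        sudo reboot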
     
  4. Like
    gprovost reacted to Mangix in Random system freezes   
    More info on DFS brokenness: https://gitlab.nic.cz/turris/openwrt/-/issues/347#note_195030
  5. Like
    gprovost reacted to aprayoga in Mystery red light on helios64   
    Hi @snakekick, @clostro, @silver0480, @ShadowDance
    could you add the following lines to the beginning of /boot/boot.cmd:
        regulator dev vdd_log
        regulator value 930000
        regulator dev vdd_center
        regulator value 950000
    then run:
        mkimage -C none -A arm -T script -d /boot/boot.cmd /boot/boot.scr
    and reboot. Verify whether it improves stability.
     
     
     
    You could also use the attached boot.scr
     
     
    boot.scr
  6. Like
    gprovost reacted to Lore in Hanging on kernel load after Shutdown/Reboot/Kernel panic   
    I was able to get it booting.  The following is the error that was causing it to hang.
     
    Begin: Running /scripts/local-premount ...
    Scanning for Btrfs filesystems
    done.
    Begin: Will now check root file system ...
    fsck from util-linux 2.33.1
    [/sbin/fsck.ext4 (1) -- /dev/mmcblk0p1] fsck.ext4 -a -C0 /dev/mmcblk0p1
    /dev/mmcblk0p1: recovering journal
    /dev/mmcblk0p1 contains a file system with errors, check forced.
    /dev/mmcblk0p1: Inode 753 seems to contain garbage.
    /dev/mmcblk0p1: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
        (i.e., without -a or -p options)
    fsck exited with status code 4
    done.
    Failure: File system check of the root filesystem failed
    The root filesystem on /dev/mmcblk0p1 requires a manual fsck
    I then just ran
    fsck.ext4 /dev/mmcblk0p1  
     
    After that it ran and booted to OMV on its own.
  7. Like
    gprovost reacted to mortomanos in Reboot not working properly with OMV   
    Support contacted me and gave me an Armbian overlay; the first reboot worked without problems. I will keep observing, but for now it looks good.
  8. Like
    gprovost reacted to Igor in ZFS on Helios4   
    There are problems with 32bit ZFS support and there is nothing more we can do. 
    https://github.com/armbian/build/pull/2547 
    https://github.com/openzfs/zfs/issues/9392
    Broken on Debian-based Armbian, buildable on Ubuntu-based Armbian, at least on the latest (LTS) kernel 5.10.y.

    Upgrading to 5.10.y on a 32-bit target doesn't work automatically ...
     
     After reboot you need to run:
        dpkg-reconfigure zfs-dkms
    and confirm that you accept the 32-bit caveats, but in the end it works. I could load my pool without issues.
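    As a rough sketch of that sequence (the pool name 'tank' is hypothetical):
        # on the 32-bit target, after upgrading to kernel 5.10.y and rebooting
        sudo dpkg-reconfigure zfs-dkms    # acknowledge the 32-bit warning when prompted
        sudo zpool import tank            # pool name is hypothetical
        sudo zpool status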
     
  9. Like
    gprovost reacted to Mangix in Helios4 performance concerns   
    A more performant alternative to AF_ALG... https://github.com/cotequeiroz/afalg_engine
     
    Not a solution to your issues unfortunately.
     
    IIRC, there was a way to overclock the Helios4 to 2.0 GHz or 1.8 GHz. I don't remember the details.
  10. Like
    gprovost reacted to ShadowDance in SATA issue, drive resets: ataX.00: failed command: READ FPDMA QUEUED   
    Hey, sorry I haven't updated this thread until now.
     
    The Kobol team sent me, as promised, a new harness and a power-only harness so that I could do some testing:
    - Cutting off the capacitors from my original harness did not make a difference
    - The new (normal) harness had the exact same issue as the original one
    - With the power-only harness and my own SATA cables, I was unable to reproduce the issue (even at 6 Gbps)
    - The final test was to go to town on my original harness and cut the connector in two; this allowed me to use my own SATA cable with the original harness and there was, again, no issue (at 6 Gbps)
    Judging from my initial results, it would seem that there is an issue with the SATA cables in the stock harness. But I should try to do this for a longer period of time -- the problem was I didn't have SATA cables for all disks; once I do, I'll run a week-long stress test. I reported my results to the Kobol team but haven't heard back yet.
     
    Even with the 3.0 Gbps limit, I still occasionally run into this issue with the original harness; it has happened twice since I did the experiment.
     
    If someone else is willing to repeat this experiment with a good set of SATA cables, please do contact Kobol to see if they'd be willing to ship out another set of test harnesses, or perhaps they have other plans.
     
    Here are some pics of my test setup, including the mutilated connector:
     

  11. Like
    gprovost reacted to qtmax in Helios4 performance concerns   
    I found the offending commit with bisect: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=5ff9f19231a0e670b3d79c563f1b0b185abeca91
     
    Reverting it on 5.10.10 restores the write performance, while not hurting read performance (i.e. keeps it high).
     
    I'm going to submit the revert to upstream very soon — it would be great if it also could be applied as a patch to armbian.
     
    (I would post a link here after I submit the patch, but this forum won't allow me to post more than one message a day — is there any way to lift this limitation?)
     
    @gprovost, could you reproduce the bug on your NAS?
  12. Like
    gprovost got a reaction from djurny in Feature / Changes requests for future Helios64 board or enclosure revisions   
    RK3399 has a single PCIe 2.1 x4 port; you can't split the lanes across multiple interfaces unless you use a PCIe switch, which is an additional component that would increase the board cost a lot.
     
    That's clearly the intention; we don't want to restart the software and documentation from scratch either :P
     
    Yes, this will be fixed; it was a silly design mistake. We will also give the back panel bracket holder rounder edges so users don't scratch their hands :/
     
    We will soon post how to manage fan speed based on HDD temperature (using the hddtemp tool), which would make more sense than the current approach.
    You can already find an old example: https://unix.stackexchange.com/questions/499409/adjust-fan-speed-via-fancontrol-according-to-hard-disk-temperature-hddtemp
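    As a rough illustration of the idea (not the official guide, which has not been posted yet), a small script could map the HDD temperature reported by hddtemp to a PWM value; the device, /sys path and thresholds below are assumptions:
        #!/bin/sh
        # crude hddtemp -> fan PWM mapping; adjust device, hwmon path and thresholds
        TEMP=$(hddtemp -n /dev/sda)
        PWM=/sys/class/hwmon/hwmon3/pwm1
        if [ "$TEMP" -ge 45 ]; then
            echo 255 > "$PWM"   # full speed
        elif [ "$TEMP" -ge 38 ]; then
            echo 150 > "$PWM"   # medium
        else
            echo 60 > "$PWM"    # quiet
        fi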
     
  13. Like
    gprovost reacted to Lore in Hanging on kernel load after Shutdown/Reboot/Kernel panic   
    I have run into this issue multiple times and the process flow is this:
     
    Flash image "Armbian_20.11.6_Helios64_buster_current_5.9.14.img" to a micro SD card.
    Run through the config (installing OMV with SAMBA)
    Kick over to OMV to configure the share / media so it's available on my network
    Runs for some time (this last install was running for maybe a few weeks before it triggered a Kernel panic)
    Power off.
    Power on.
    Board boots, hangs on "Starting Kernel..."
     
    Is there some trick I am missing to get this thing to actually boot / reboot correctly?
     
    Troubleshooting info:
    No jumpers are shorted.
    I had a similar issue with a previous img as well.
  14. Like
    gprovost reacted to qtmax in Helios4 performance concerns   
    I have a Helios4 (2nd batch) with 4 HDDs, and before I started using it, I decided to test its performance on the planned workload. However, I was not at all satisfied with the numbers.
     
    First, the SSH speed is about 30 MB/s, while the CPU core where sshd runs is at 100% (so I suppose it's the bottleneck).
     
    I create a sparse file and copy it over SSH:
        helios4# dd if=/dev/null bs=1M seek=2000 of=/tmp/blank
    With scp I get up to 31.2 MB/s:
        pc# scp root@helios4:/tmp/blank /tmp
    With rsync I get pretty much the same numbers:
        pc# rm /tmp/blank
        pc# rsync -e ssh --progress root@helios4:/tmp/blank /tmp/
        blank 2,097,152,000 100% 31.06MB/s 0:01:04 (xfr#1, to-chk=0/1)
    (I tested both because with another server rsync over ssh runs faster - up to 80 MB/s, while scp is still at about 30 MB/s.)
    As I'm planning to use sftp/ssh to access the files, such a speed is absolutely unsatisfactory. Is it possible to tune it somehow to get much closer to the line rate of 1 Gbit/s?
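    One thing worth checking (an addition, not from the original post) is whether the SSH cipher is the limiting factor; comparing raw TCP throughput against scp with different ciphers makes that visible:
        # raw TCP throughput, to rule out the network itself (requires iperf3 on both ends)
        helios4# iperf3 -s
        pc# iperf3 -c helios4
        # retry the copy with different ciphers to see whether crypto is the bottleneck
        pc# scp -c aes128-ctr root@helios4:/tmp/blank /tmp
        pc# scp -c chacha20-poly1305@openssh.com root@helios4:/tmp/blank /tmp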
     
    The second question is about the best RAID level for this setup. My HDD's read speed is about 250 MB/s, and if I combine 4 of them into a RAID0, the read speed reaches 950 MB/s on another computer, but when I put these disks into the Helios4, the max read speed is about 350 MB/s. Given that the network speed is 125 MB/s, it doesn't seem useful to use any forms of striping, such as RAID0, RAID10 (far), etc., does it? What is the recommended RAID configuration for this system then, given that I have 4 HDDs?
     
    Would RAID10 (near) be better than RAID10 (far)? RAID10 (far) should provide the same read speed as RAID0 for sequential read, however, as on Helios4 the speed is far from 4x (it's just 1.4x), and the network line rate is anyway lower, maybe it makes more sense to optimize for parallel reads. RAID10 (near) will occupy only 2 disks on a single-streamed read (as opposed to 4 with RAID10 (far)), so a second reading stream can use two other disks and improve performance by avoiding seeking to different locations on a single disk. Does this layout make sense, or am I missing something?
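    (For reference, the near/far layouts in question are selected with mdadm's --layout option; device names below are hypothetical:
        # RAID10 with the 'far 2' layout
        mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=4 /dev/sd[abcd]
        # RAID10 with the 'near 2' layout
        mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=4 /dev/sd[abcd]
    )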
     
    I'm also considering RAID6 that has better redundancy than RAID10 at the cost of lower speed and higher CPU usage. While lower speed might not be a problem if it's still higher than the line rate of 1 Gbit/s, CPU may be the bottleneck. As I see from the SSH test above, even such a simple test occupies 100% of one core and some part of the other core, so running RAID6 may increase the CPU demands and cause performance decrease.
     
    One more option is RAID1 over LVM that would combine pairs of disks, but I don't see any apparent advantage over RAID10 (near).
     
    Does anyone have experience running RAID10 (near/far), RAID6 or maybe other configurations with Helios4? What would you advise to build out of 4 HDDs 8 TB each if my goal is to have redundancy and read/write speeds close to the line rate?
  15. Like
    gprovost got a reaction from Borromini in Feature / Changes requests for future Helios64 board or enclosure revisions   
    RK3399 has a single PCIe 2.1 x4 port; you can't split the lanes across multiple interfaces unless you use a PCIe switch, which is an additional component that would increase the board cost a lot.
     
    That's clearly the intention; we don't want to restart the software and documentation from scratch either :P
     
    Yes, this will be fixed; it was a silly design mistake. We will also give the back panel bracket holder rounder edges so users don't scratch their hands :/
     
    We will soon post how to manage fan speed based on HDD temperature (using the hddtemp tool), which would make more sense than the current approach.
    You can already find an old example: https://unix.stackexchange.com/questions/499409/adjust-fan-speed-via-fancontrol-according-to-hard-disk-temperature-hddtemp
     
  16. Like
    gprovost reacted to hartraft in Feature / Changes requests for future Helios64 board or enclosure revisions   
    Whilst I'm satisfied with my Helios64 as well, I would have liked:
    - the original RK3399Pro, as the NPU would have been useful in TensorFlow for Photoprism (as a Google Photos replacement). RK3588 would be even nicer, but everyone is waiting on that chip
    - more RAM; once Photoprism indexes photos it easily runs out of memory. I limit the memory in Docker now, but it's not ideal. Granted, there is an issue in their code, so I just have the container restarted each night
    - NVMe for fast disk access
    - better HDD trays (already mentioned)
    - the screw eyelets for the back panel (1st pic below) not directly aligned with the eyelets for the backplane (2nd pic below), for easier assembly
  17. Like
    gprovost reacted to Salamandar in Feature / Changes requests for future Helios64 board or enclosure revisions   
    I added voting options, please revote !
     
    EDIT: crap… We can't edit our votes, I did not expect that
  18. Like
    gprovost reacted to ebin-dev in Feature / Changes requests for future Helios64 board or enclosure revisions   
    The next thing I would buy is a drop-in replacement board for the Helios64 with:
    - RK3588
    - ECC RAM
    - NVMe PCIe port
    - 10GBase-T port
  19. Like
    gprovost reacted to clostro in Feature / Changes requests for future Helios64 board or enclosure revisions   
    - I don't see the point of baking an M.2 or any future NVMe PCIe port onto the board, honestly. Instead I would love to see a PCIe slot or two exposed. That way the end user can pick what expansion is needed: an NVMe adapter, 10GbE network, SATA, USB, or even a SAS controller for additional external enclosures. It will probably be more expensive for the end user but will open up a larger market for the device. We already have 2 lanes / 10 Gbps worth of PCIe not utilized as is (correct me on this).
    - Absolutely more / user-upgradeable RAM and ECC support.
    - Better trays, you guys, they are a pain right now lol. (Loving the black and purple though)
    - Some sort of active/quiet cooling on the SoC separate from the disks, as any disk activity over the network or even a simple CPU-bound task ramps up all the fans immediately.
    - Most important of all, please do not stray too far from the current architecture, both hardware- and firmware-wise. Aside from everything else, we would really love to see a rock-solid launch next time around. Changing a lot of things might cause most of your hard work to date to mean nothing.
  20. Like
    gprovost reacted to yalman in Virtualization support   
    I found a hotfix solution without knowing the (side) effects.
     
    # 0,1,2,3=Quad-core Cortex-A53 up to 1.4GHz
    # 4,5=Dual-core Cortex-A72 up to 1.8GHz
    root@helios64:~# cat /proc/cpuinfo |grep -e processor -e part
    processor    : 0
    CPU part    : 0xd03
    processor    : 1
    CPU part    : 0xd03
    processor    : 2
    CPU part    : 0xd03
    processor    : 3
    CPU part    : 0xd03
    processor    : 4
    CPU part    : 0xd08
    processor    : 5
    CPU part    : 0xd08
     
    # always assign the same cpu types to a VM. core 0,1 for the host system
    virsh edit VM1
    ...
    <vcpu placement='static' cpuset='2-3'>2</vcpu>
    ...
    virsh edit VM2
    ...
    <vcpu placement='static' cpuset='4-5'>2</vcpu>
    ...
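    If you also want the host scheduler to stay off the VM cores, one option (an addition, not from the post) is to isolate them on the kernel command line; on Armbian this can go into /boot/armbianEnv.txt:
        # /boot/armbianEnv.txt - keep the host on cores 0,1 and reserve 2-5 for the VMs
        extraargs=isolcpus=2-5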
  21. Like
    gprovost reacted to djurny in Helios64 - freeze whatever the kernel is.   
    Hi @SR-G,
    FYI I have been running Linux kobol0 5.9.13-rockchip64 #trunk.16 SMP PREEMPT Tue Dec 8 21:23:17 CET 2020 aarch64 GNU/Linux for some days now. Still up. Perhaps you can give this one a try?
    Distributor ID: Debian
    Description:    Debian GNU/Linux 10 (buster)
    Release:        10
    Codename:       buster
    /etc/default/cpufrequtils:
    ENABLE=true
    MIN_SPEED=408000
    MAX_SPEED=1200000
    GOVERNOR=powersave
    (The box idles with 'powersave' and is set to 'performance' when performing tasks.)
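    Switching the governor on demand can be done with cpufreq-set from the same package, e.g. (a sketch, assuming cpufrequtils is installed):
        # RK3399 has two frequency policies (little cores 0-3, big cores 4-5), so set both
        sudo cpufreq-set -c 0 -g performance   # before a heavy task
        sudo cpufreq-set -c 4 -g performance
        sudo cpufreq-set -c 0 -g powersave     # back to idle
        sudo cpufreq-set -c 4 -g powersave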
     
    I do not know which non-dev kernel version this is; I assume it's 21.02 or 21.03?

    Groetjes,
     
  22. Like
    gprovost reacted to ShadowDance in Mystery red light on helios64   
    It's probably indicating a kernel panic. You could try limiting the CPU speed and setting the governor as suggested in the linked topic. You could also try hooking up the serial port to another machine and monitoring it for any output indicating why the crash happened.
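    For the serial-console suggestion, output can be captured from another machine with something like the following (1,500,000 baud is what Kobol's documentation lists for the Helios64 serial console; the device name is an assumption):
        picocom -b 1500000 /dev/ttyUSB0
        # or: screen /dev/ttyUSB0 1500000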
     
  23. Like
    gprovost reacted to barnumbirr in Helios64 Ethernet Issue   
    No, you don't need to perform the fix if you're only ever going to use the 1Gbps interface, AFAIK.
  24. Like
    gprovost reacted to Salamandar in Feature / Changes requests for future Helios64 board or enclosure revisions   
    Hi everyone !
    I'm really happy with my Helios64, running fine with no major issues.
    BUT
    This product can obviously be improved in future revisions, while keeping the same base. The Kobol guys told me to open a thread here to discuss what would be great improvements or new features.
     
    I'm starting with things off the top of my head, but please ask me to add things to this list. An explanation in the comments would be great.
  25. Like
    gprovost got a reaction from Salamandar in Ethernet 1Gbps often disconnects on Legacy kernel   
    Indeed, there was an issue with the /boot partition being too small; it has been fixed in future builds: https://github.com/armbian/build/pull/2425