clostro

  • Content Count

    29
  • Joined

Reputation Activity

  1. Like
    clostro reacted to gprovost in Kobol Team is taking a short Break !   
    It’s been 3 months since we posted on our blog. While we have been pretty active on Armbian/Kobol forum for support and still working at improving the software support and stability, we have been developing in parallel the new iteration of Helios64 around the latest Rockchip SoC RK3568.
     
    However, things haven't been progressing as fast as we would have wished. Looking back, 2020 was a very challenging year to deliver a new product, and it took quite a toll on the small team we are. Our energy level is a bit low and we still haven't really recovered. Now, with the surge in electronic part prices and crazy lead times, it's even harder to have business visibility in an already challenging market.
     
    In light of the above, we decided to go on a full break for the next 2 months, to recharge our batteries away from Kobol and come back with a refocused strategy and pumped-up energy.
     
    Until we are back, we hope you will understand that communication on the different channels (blog, wiki, forum, support email) will be kept to a minimum for the next 2 months.
     
    Thanks again all for your support.
  2. Like
    clostro reacted to nquinn in New cpu anytime soon?   
    Very interested in the Helios64, but I can't help but notice that the Rockchip CPU in it is 5+ years old at this point.
     
    Any plans to upgrade it to a more modern cpu?
     
    At this price range I'd probably lean towards an Odroid H2+ with a Celeron J4115 chip, which has something like 2-3x the multicore performance and a similar TDP.
  3. Like
    clostro reacted to SIGSEGV in Feature / Changes requests for future Helios64 board or enclosure revisions   
    My comment might be late to the party - if there were a possibility to add an optional display and a few user-configurable buttons to the front panel, that would be great.
    I know it would mess a bit with the airflow, but it could be used for system monitoring and a few other specific use cases.
  4. Like
    clostro reacted to aprayoga in UPS service and timer   
    @SIGSEGV @clostro the service is triggered by udev rules.
     
    @wurmfood, I didn't realize the timer fills the log every 20s. The initial idea was a one-time timer: power off 10 minutes after a power-loss event. It was then improved to poll the battery level and power off when the threshold is reached. Your script looks good; we are considering adapting it for the official release. Thank you.
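    For context, a rough sketch of the kind of udev rule that can kick off such a service when mains power drops; the rule file name is hypothetical and the match keys are illustrative, not the exact rule Armbian ships:
    # /etc/udev/rules.d/99-ups-example.rules (hypothetical file, illustrative rule)
    # Start the UPS service when the gpio-charger reports loss of mains power
    SUBSYSTEM=="power_supply", KERNEL=="gpio-charger", ATTR{online}=="0", RUN+="/usr/bin/systemctl start helios64-ups.service"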
     
     
  5. Like
    clostro reacted to wurmfood in UPS service and timer   
    Sigh. Except that doesn't solve the problem. Now it's just cron filling up the log.
     
    New solution, using a sleep loop instead of a timer. Modified helios64-ups.service:
    [Unit]
    Description=Helios64 UPS Action
    
    [Install]
    WantedBy=multi-user.target
    
    [Service]
    #Type=oneshot
    #ExecStart=/usr/bin/helios64-ups.sh
    Type=simple
    ExecStart=/usr/local/sbin/powermon.sh
    Modified powermon.sh:
    #!/bin/bash
    # 7.0V 916 Recommended threshold to force shutdown system
    TH=916
    # Values can be info, warning, emerg
    warnlevel="emerg"
    
    while [ : ]
    do
        main_power=$(cat '/sys/class/power_supply/gpio-charger/online')
        # Only use for testing:
        # main_power=0
        if [ "$main_power" == 0 ]; then
            val=$(cat '/sys/bus/iio/devices/iio:device0/in_voltage2_raw')
            sca=$(cat '/sys/bus/iio/devices/iio:device0/in_voltage_scale')
            # The division is required to make the test later work.
            adc=$(echo "$val * $sca /1" | bc)
            echo "Main power lost. Current charge: $adc" | systemd-cat -p $warnlevel
            echo "Shutdown at $TH" | systemd-cat -p $warnlevel
            # Uncomment for testing
            # echo "Current values:"
            # echo -e "\tMain Power = $main_power"
            # echo -e "\tRaw Voltage = $val"
            # echo -e "\tVoltage Scale = $sca"
            # echo -e "\tVoltage = $adc"
            if [ "$adc" -le $TH ]; then
                echo "Critical power level reached. Powering off." | systemd-cat -p $warnlevel
                /usr/sbin/poweroff
            fi
        fi
        sleep 20
    done
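    If you adapt something like this, the usual activation steps would look roughly like the following (paths as in the unit and script above):
    chmod +x /usr/local/sbin/powermon.sh
    systemctl daemon-reload
    systemctl restart helios64-ups.service
    journalctl -u helios64-ups.service -f    # follow the messages logged via systemd-cat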
  6. Like
    clostro reacted to wurmfood in Migrate from ramlog to disk   
    Well, for anyone else interested in trying this, here's the basic order I did:
    1. Stop armbian-ramlog
    2. Disable armbian-ramlog
    3. Create a ZFS dataset and mount it at /var/log
    4. cp -ar everything from /var/log.hdd to the new /var/log
    5. Modify /etc/logrotate to disable compression (since the dataset is already using compression)
    6. Modify /etc/default/armbian-ramlog to disable it there as well
    7. Modify /etc/default/armbian-zram-config to adjust for the new numbers (I have ZRAM_PERCENTAGE and MEM_LIMIT_PERCENTAGE at 15)
    8. Reboot
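    As a rough shell sketch of those steps (the pool name 'tank' and dataset name are hypothetical; adjust paths and numbers to your setup):
    systemctl stop armbian-ramlog
    systemctl disable armbian-ramlog
    # Create a compressed dataset; if /var/log is not empty, ZFS may refuse to mount over it,
    # so copying via a temporary mountpoint (or clearing /var/log first) may be needed.
    zfs create -o compression=lz4 -o mountpoint=/var/log tank/varlog
    cp -ar /var/log.hdd/. /var/log/
    # Then edit /etc/logrotate.conf, /etc/default/armbian-ramlog and
    # /etc/default/armbian-zram-config as described above, and reboot.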
  7. Like
    clostro got a reaction from gprovost in How to do a full hardware test?   
    May I suggest outputting dmesg live to a network location?
    I'm not sure if the serial console output is the same as 'dmesg', but if it is, you can 'nohup &' it live to any file. That way you wouldn't have to stay connected to the console or ssh all the time. Just don't output it to a local file system, as writing to a local file system during a crash might corrupt it and cause more problems.
     
    nohup dmesg --follow > /network/location/folder/helios64-log.txt 2>&1 &
    exit
     
    Needed to have a single '>', and to exit the session with 'exit', apparently.
  8. Like
    clostro reacted to wurmfood in Does anyone actually have a stable system?   
    Ah! Sorry, yes, you're right. I've been looking at these back and forth for the last few days. A lot of people here have been having trouble with WD drives and at least some of that seems to come down to the difference between the two types of drives.
  9. Like
    clostro reacted to dieKatze88 in Feature / Changes requests for future Helios64 board or enclosure revisions   
    How about a backplane instead of a nest of wires?
  10. Like
    clostro reacted to gprovost in Feature / Changes requests for future Helios64 board or enclosure revisions   
    @dieKatze88 Yes, it has already been announced here and there that we will replace the wire harness with a proper PCB backplane. There will still be wires though, connecting the main board to the backplane, since we don't want a board that can only be used with a specific backplane. But these wires will be normal SATA cables, so it will be easy to buy new ones anywhere if a replacement is needed.
  11. Like
    clostro reacted to scottf007 in ZFS or normal Raid   
    Hi All, 

    I am on my Helios64 (already have a Helios4). 

    I am running 2x 8TB drives, 1x 256GB M.2, OMV5, Debian, etc.
     
    I have ZFS running (from the wiki instructions) in a basic mirror format, and if I add a few drives I think it would still be a mirror. However, I have the following question: it seems far more complicated than mdadm, and potentially more unstable (not in terms of the underlying technology, but in terms of these kernels etc. changing every week). 

    Do you think that for an average home user this is really worth it, or will it constantly need work? Changing it would mean starting again, but I do not want to change the setup every month; I intend to keep it stable for media and for backing up some stuff over the next few years. Or, if it is working, should it be fine for the long term?
     
    Cheers
    Scott
  12. Like
    clostro reacted to ShadowDance in Helios64 - freeze whatever the kernel is.   
    @jbergler I recently noticed that the armbian-hardware-optimization script for Helios64 changes the IO scheduler to `bfq` for spinning disks; however, for ZFS we should be using `none` because ZFS has its own scheduler. Normally ZFS would change the scheduler itself, but that only happens if you're using raw disks (not partitions) and if you import the zpool _after_ the hardware optimization script has run.
     
    You can try changing it (e.g. `echo none >/sys/block/sda/queue/scheduler`) for each ZFS disk and see if anything changes. I still haven't figured out if this is a cause for any problems, but it's worth a shot.
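    To apply that across every ZFS member disk after boot, a minimal sketch (assuming whole disks sda through sde belong to the pool; adjust the list):
    #!/bin/bash
    # Set the IO scheduler to 'none' for ZFS member disks, after
    # armbian-hardware-optimization has run and the pool is imported.
    for disk in sda sdb sdc sdd sde; do
        echo none > "/sys/block/$disk/queue/scheduler"
    done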
  13. Like
    clostro got a reaction from Werner in Hardrives newbie question   
    @kelso Any SSD will be fast enough for the network or USB speeds of this device. If you are buying new you can pick WD Red, Samsung Evo, SanDisk Ultra/Extreme, Seagate firepro (?)... just stay away from little-known or unknown brands. You can check the models you picked and compare them here: https://ssd.userbenchmark.com/
    They are getting a well-deserved grilling for their CPU comparisons, but I think their SSD data is good enough. I would be looking for the best 'Mixed' performance value for use in this device, as the network and USB speeds cap the max read or write speed anyhow.
     
    The Western Digitals you picked use CMR, which is supposedly much better than SMR; you can take a look at this table if you have to pick other models: https://nascompares.com/answer/list-of-wd-cmr-and-smr-hard-drives-hdd/
     
    One suggestion before you put any critical data on those brand new disks: run SMART tests on each drive, the long ones. They should take about 10 hours each, I think. 
     
    One of my brand new Seagates couldn't complete a test on the first run and had to be replaced. Now I'm on edge and running nightly borg backups to an older NAS, because the other disks are from the same series. Call me paranoid, but I usually stagger HDD purchases by a few days and/or order from different stores to avoid getting them from the same production batch; I couldn't do that this time around.
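    A quick sketch of running and checking such a long test with smartmontools (the device name /dev/sda is just an example):
    smartctl -t long /dev/sda        # start the extended self-test; it runs in the background
    smartctl -l selftest /dev/sda    # check the self-test log once it has finished
    smartctl -H /dev/sda             # overall health assessment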
     
     
    @Werner I use 4x 4TB Seagate NAS drives, whatever their branding was, and an old 240GB SanDisk Extreme SSD for RAID caching: LVM RAID 5 + dm-cache (read and write, writeback, smq). It ain't bad. The SSD really picks up the slack of the spinning rust, especially when you are pushing random writes to the device, and smq is pretty smart at read caching for hot files.
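    For anyone curious what that kind of setup looks like, a rough lvmcache sketch; the volume group and LV names (nas, data) and the SSD device are hypothetical:
    vgextend nas /dev/sdf                                        # /dev/sdf = the SSD
    lvcreate --type cache-pool -L 200G -n cache0 nas /dev/sdf    # cache pool on the SSD
    lvconvert --type cache --cachepool nas/cache0 --cachemode writeback nas/data
    # dm-cache defaults to the smq policy on recent kernels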
  14. Like
    clostro reacted to slymanjojo in Helios64 - freeze whatever the kernel is.   
    @gprovost
    Been running stable since 21.02.3 Buster.
    cat /etc/default/cpufrequtils
    ENABLE=true
    MIN_SPEED=408000
    MAX_SPEED=1800000
    GOVERNOR=ondemand
     
    No VDD tweaks.
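    To apply and verify a change like this (assuming the stock cpufrequtils service):
    systemctl restart cpufrequtils
    cpufreq-info | grep -A1 "current policy"    # confirm the frequency limits and governor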
     

     
  15. Like
    clostro reacted to Seneca in Helios64 - freeze whatever the kernel is.   
    Just an update from yesterday: no freezes or crashes yet, even under quite heavy IO and CPU load.
  16. Like
    clostro got a reaction from tionebrr in Mystery red light on helios64   
    Just wanted to report that the CPU frequency mod has been running stable under normal use for 15 days now (on a 1GbE connection). Haven't tried the voltage mod.
     
    I'll switch to the February 4th Buster 5.10 build soon.
     
     
    Edit: 23 days, and I shut it down for an SD card backup and system update. The CPU freq mod is rock solid.
  17. Like
    clostro got a reaction from Gareth Halfacree in Feature / Changes requests for future Helios64 board or enclosure revisions   
    @Gareth Halfacree If you would be interested in FreeBSD, @SleepWalker was experimenting with it some time ago. Maybe still is.
     
  18. Like
    clostro reacted to Gareth Halfacree in Encrypted OpenZFS performance   
    Your MicroServer has either an Opteron or Ryzen processor in it, either one of which is considerably more powerful than the Arm-based RK3399.
     
    As a quick test, I ran OpenSSL benchmarks for AES-256-CBC on my Ryzen 2700X desktop, an older N54L MicroServer, and the Helios64, at a block size of 8192 bytes.
     
    Helios64: 68411.39kB/s.
    N54L: 127620.44kB/s.
    Desktop: 211711.31kB/s.
     
    From that, you can see the Helios64 CPU is your bottleneck: 68,411.39kB/s is about 67MB/s, or within shouting distance of your 62MB/s real-world throughput - and that's just encryption, without the LZ4 compression overhead.
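    The benchmark is straightforward to reproduce with openssl speed; whether the figures above came from the plain or the -evp variant isn't stated, so both are shown:
    openssl speed aes-256-cbc         # pure software implementation
    openssl speed -evp aes-256-cbc    # EVP path, uses CPU crypto extensions where available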
     
  19. Like
    clostro reacted to allen--smithee in Very noisy fans   
    @gprovost
    As we might expect, there is a little spreading of the pad at the outer edges of the processor, but also at the edge of the heatsink,
    so you were right: there is no need to leave the inductors bareheaded.
    Now I think 19x19 is enough for the dimensions of the CPU thermal pad at 1.5mm thickness. 
    I redid the separation between the CPU pad and the rest of the board and put pieces of pad in the slots of the inductors. 
    You never know if it will be possible; nevertheless, now we know what we need for our next blends at home ;).
     
    P6 and P7 > PWM = 35
    P6 and P7 > PWM = 50 is my new PWMMAX in fancontrol
    P6 and P7 > PWM = 255
     
     
    @SIGSEGV
    If you are interested in this solution, I recommend the TP-VP04-C 1.5mm for your Helios64 (2 plates of 90x50x1.5mm).
     
     
  20. Like
    clostro got a reaction from gprovost in Mystery red light on helios64   
    Just wanted to report that the CPU frequency mod has been running stable under normal use for 15 days now (on a 1GbE connection). Haven't tried the voltage mod.
     
    I'll switch to the February 4th Buster 5.10 build soon.
     
     
    Edit: 23 days, and I shut it down for an SD card backup and system update. The CPU freq mod is rock solid.
  21. Like
    clostro reacted to barnumbirr in [SOLVED] Helios64 won't boot after latest update   
    SOLVED!!!

    Of course this was due to armbianEnv.txt being corrupt. Restoring the file from backups allows the device to boot cleanly again.
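    A trivial precaution worth taking before upgrades (path as used by Armbian):
    cp /boot/armbianEnv.txt /boot/armbianEnv.txt.bak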
  22. Like
    clostro reacted to ShadowDance in SATA issue, drive resets: ataX.00: failed command: READ FPDMA QUEUED   
    Hey, sorry I haven't updated this thread until now.
     
    The Kobol team sent me, as promised, a new harness and a power-only harness so that I could do some testing:
    - Cutting off the capacitors from my original harness did not make a difference
    - The new (normal) harness had the exact same issue as the original one
    - With the power-only harness and my own SATA cables, I was unable to reproduce the issue (even at 6 Gbps)
    - The final test was to go to town on my original harness and cut the connector in two; this allowed me to use my own SATA cables with the original harness and there was, again, no issue (at 6 Gbps)
    Judging from my initial results, it would seem that there is an issue with the SATA cables in the stock harness. But I should run this for a longer period of time -- the problem was I didn't have SATA cables for all the disks; once I do, I'll try a week-long stress test. I reported my results to the Kobol team but haven't heard back yet.
     
    Even with the 3.0 Gbps limit, I still occasionally run into this issue with the original harness; it has happened twice since I did the experiment.
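    For reference, the 3.0 Gbps limit mentioned above is usually applied via a kernel parameter in armbianEnv.txt; a sketch, assuming the cap should apply to all ports (libata.force also accepts per-port values):
    # /boot/armbianEnv.txt (append to any existing extraargs line)
    extraargs=libata.force=3.0Gbps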
     
    If someone else is willing to repeat this experiment with a good set of SATA cables, please do contact Kobol to see if they'd be willing to ship out another set of test harnesses, or perhaps they have other plans.
     
    Here's some pics of my test setup, including the mutilated connector:
     

  23. Like
    clostro reacted to gprovost in Feature / Changes requests for future Helios64 board or enclosure revisions   
    The RK3399 has a single PCIe 2.1 x4 port; you can't split the lanes across multiple interfaces unless you use a PCIe switch, which is an additional component that would increase the board cost a lot.
     
    That's clearly the intention; we don't want to restart the software and documentation from scratch either :P
     
    Yes, this will be fixed; it was a silly design mistake. We will also give the back panel bracket holder rounder edges so users don't scratch their hands :/
     
    We will soon post how to manage fan speed based on HDD temperature (using the hddtemp tool); that would make more sense than the current approach.
    You can already find an old example: https://unix.stackexchange.com/questions/499409/adjust-fan-speed-via-fancontrol-according-to-hard-disk-temperature-hddtemp
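    Until that guide is posted, a minimal polling sketch of the idea; the PWM sysfs path, device list and thresholds are placeholders, not values recommended by Kobol:
    #!/bin/bash
    # Crude HDD-temperature-based fan control sketch (paths and thresholds are examples only)
    PWM=/sys/class/hwmon/hwmon0/pwm1    # adjust to the fan PWM node on your board
    while true; do
        hottest=$(hddtemp -n /dev/sd[a-e] 2>/dev/null | sort -n | tail -1)
        if   [ "${hottest:-0}" -ge 45 ]; then echo 255 > "$PWM"
        elif [ "${hottest:-0}" -ge 40 ]; then echo 150 > "$PWM"
        else echo 80 > "$PWM"
        fi
        sleep 60
    done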
     
  24. Like
    clostro got a reaction from gprovost in Feature / Changes requests for future Helios64 board or enclosure revisions   
    - I don't see the point of baking an M.2 or any future NVMe PCIe port onto the board, honestly. Instead I would love to see a PCIe slot or two exposed. That way the end user can pick whatever expansion is needed: an NVMe adapter, 10GbE networking, SATA, USB, or even a SAS controller for additional external enclosures. It will probably be more expensive for the end user but will open up a larger market for the device. We already have 2 lanes / 10Gbps worth of PCIe not utilized as is (correct me on this).
    - Absolutely more/user-upgradeable RAM, and ECC RAM support.
    - Better trays; you guys, they are a pain right now lol. (Loving the black and purple though.)
    - Some sort of active/quiet cooling on the SoC separate from the disks, as any disk activity over the network or even a simple CPU-bound task ramps up all the fans immediately.
    - Most important of all, please do not stray too far from the current architecture, both hardware- and firmware-wise. Aside from everything else, we would really love to see a rock-solid launch next time around. Changing a lot of things might cause most of your hard work to date to mean nothing.
  25. Like
    clostro reacted to Salamandar in Feature / Changes requests for future Helios64 board or enclosure revisions   
    Weird, that's not something I decided when setting up the post.
     
     
     
    Well, I'll just add "support announced features". Also, the 2.5GbE connector is already fixed in batch 2.
     
    I have a raidz pool (4 HDDs) and have never had an issue. You may need to contact Kobol about this.
     
    True, I didn't think of that. Thanks.