SIGSEGV

  • Content Count: 60
  • Joined
  • Last visited

Reputation Activity

  1. Like
    SIGSEGV got a reaction from gprovost in Kobol Team is taking a short Break !   
    Enjoy your time off - looking forward to new products from you guys.
  2. Like
    SIGSEGV got a reaction from iav in Feature / Changes requests for future Helios64 board or enclosure revisions   
    My comment might be late to the party - if there was a possibility to add an optional display and a few user-configurable buttons to the Front Panel, that would be great.
    I know it would mess a bit with the airflow, but it could be used for system monitoring and a few other specific use cases.
  3. Like
    SIGSEGV got a reaction from clostro in Feature / Changes requests for future Helios64 board or enclosure revisions   
    My comment might be late to the party - if there was a possibility to add an optional display and a few user-configurable buttons to the Front Panel, that would be great.
    I know it would mess a bit with the airflow, but it could be used for system monitoring and a few other specific use cases.
  4. Like
    SIGSEGV got a reaction from allen--smithee in Feature / Changes requests for future Helios64 board or enclosure revisions   
    My comment might be late to the party - if there was a possibility to add an optional display and a few user-configurable buttons to the Front Panel, that would be great.
    I know it would mess a bit with the airflow, but it could be used for system monitoring and a few other specific use cases.
  5. Like
    SIGSEGV reacted to aprayoga in SATA issue, drive resets: ataX.00: failed command: READ FPDMA QUEUED   
    Hi all, you can download the SATA firmware update at https://cdn-kobol-io.s3-ap-southeast-1.amazonaws.com/Helios64/SATA_FW/Helios64_SATA_FW_update_00021_200624.img.xz
     
    Instruction:
    1. Download the SD card image
    2. Flash the image onto a microSD card
    3. Insert the microSD card into Helios64 and power on. Helios64 should automatically boot from the microSD. If Helios64 still boots from eMMC, disable the eMMC 
    4. Wait a while; the system will reboot and then power off if the firmware flashing succeeds.
       If it failed, both the System Status and System Fault LEDs will blink
    5. Remove the microSD card and boot Helios64 normally. See if there is any improvement.
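    Steps 1-2 boil down to decompressing the .xz image and writing it raw to the card. A sketch of that pipeline, using a dummy payload and a scratch file as the target so it can be tried anywhere; on real hardware, replace card.img with the microSD device (double-check with lsblk first - writing to the wrong device destroys data):

```shell
# Dummy payload standing in for the downloaded firmware image.
printf 'firmware' | xz > payload.img.xz
# Decompress and write raw; on real hardware: of=/dev/mmcblkX (or /dev/sdX).
xz -dc payload.img.xz | dd of=card.img bs=4M conv=fsync status=none
cat card.img   # contains the decompressed payload
```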
     
    Our officially supported stock firmware can be downloaded from https://cdn-kobol-io.s3-ap-southeast-1.amazonaws.com/Helios64/SATA_FW/Helios64_SATA_FW_factory_00020_190702.img.xz. If there is no improvement with the newer firmware, please revert to this stock firmware.
     
    SHA256SUM:
    e5dfbe84f4709a3e2138ffb620f0ee62ecbcc79a8f83692c1c1d7a4361f0d30f *Helios64_SATA_FW_factory_00020_190702.img.xz
    0d78fec569dd699fd667acf59ba7b07c420a2865e1bcb8b85b26b61d404998c5 *Helios64_SATA_FW_update_00021_200624.img.xz
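    To verify a download against those sums, sha256sum -c is the usual route. A self-contained demo with a stand-in file (in practice you would put the published hash lines into the sums file and check the real .img.xz):

```shell
# Stand-in for the downloaded image; use the real .img.xz in practice.
printf 'demo' > sample.img.xz
# Build a sums file in the same "<hash> <file>" format as the lines above.
sha256sum sample.img.xz > SHA256SUMS
# Prints "sample.img.xz: OK" when the file matches, non-zero exit otherwise.
sha256sum -c SHA256SUMS
```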
  6. Like
    SIGSEGV reacted to wurmfood in UPS service and timer   
    Sigh. Except that doesn't solve the problem. Now it's just cron filling up the log.
     
    New solution, using the sleep option. Modified helios64-ups.service:
    [Unit]
    Description=Helios64 UPS Action

    [Install]
    WantedBy=multi-user.target

    [Service]
    #Type=oneshot
    #ExecStart=/usr/bin/helios64-ups.sh
    Type=simple
    ExecStart=/usr/local/sbin/powermon.sh
    Modified powermon.sh:
    #!/bin/bash
    # 7.0V ~ 916: recommended threshold to force shutdown of the system
    TH=916
    # Values can be info, warning, emerg
    warnlevel="emerg"
    while [ : ]
    do
        main_power=$(cat '/sys/class/power_supply/gpio-charger/online')
        # Only use for testing:
        # main_power=0
        if [ "$main_power" == 0 ]; then
            val=$(cat '/sys/bus/iio/devices/iio:device0/in_voltage2_raw')
            sca=$(cat '/sys/bus/iio/devices/iio:device0/in_voltage_scale')
            # The division is required to make the test later work.
            adc=$(echo "$val * $sca / 1" | bc)
            echo "Main power lost. Current charge: $adc" | systemd-cat -p $warnlevel
            echo "Shutdown at $TH" | systemd-cat -p $warnlevel
            # Uncomment for testing
            # echo "Current values:"
            # echo -e "\tMain Power = $main_power"
            # echo -e "\tRaw Voltage = $val"
            # echo -e "\tVoltage Scale = $sca"
            # echo -e "\tVoltage = $adc"
            if [ "$adc" -le $TH ]; then
                echo "Critical power level reached. Powering off." | systemd-cat -p $warnlevel
                /usr/sbin/poweroff
            fi
        fi
        sleep 20
    done
  7. Like
    SIGSEGV reacted to wurmfood in Crazy instability :(   
    Might help to add to the ZFS page to do something like this when done:
    for disk in /sys/block/sd[a-e]/queue/scheduler; do echo none > $disk; done  
    (Assuming all 5 disks are using ZFS.)
  8. Like
    SIGSEGV reacted to gprovost in Kernel panic in 5.10.12-rockchip64 21.02.1   
    We are coming up soon with a change that will finally improve DVFS stability.
  9. Like
    SIGSEGV got a reaction from allen--smithee in Very noisy fans   
    @allen--smithee Thanks for the feedback.
  10. Like
    SIGSEGV reacted to allen--smithee in Very noisy fans   
    The problem from this post is resolved!
    With an efficient thermal pad and SSDs, the fans peak at 22 W and 10 dB.
    I very rarely exceed 12 W over a day, and the unit is completely inaudible -
    except when running these tests, which do not at all reflect normal use. 
    Please, Kobol, make a 2.5-inch version of the case for Helios. I think it would be a perfect solution for my family at home. (I think of my children in particular, and their father.) 
     
    My SSDs and HDDs all stay below 40 °C during writes.
  11. Like
    SIGSEGV reacted to allen--smithee in Very noisy fans   
    @gprovost
    As we might expect, there is a little spreading of the pad at the outer edges of the processor, but also at the edge of the heatsink -
    so you were right, there is no need to leave the inductors bareheaded.
    Now I think 19x19 mm is enough for the dimensions of the CPU thermal pad, at 1.5 mm thickness. 
    I redid the separation between the CPU pad and the rest of the board and put pieces of pad in the slots of the inductors. 
    I don't know if that will be possible for production; nevertheless, we now know what we need for our next mixes at home ;).
     
    P6 and P7 > PWM = 35
    P6 and P7 > PWM = 50 is my new PWMMAX in fancontrol
    P6 and P7 > PWM = 255
     
     
    @SIGSEGV
    If you are interested in this solution,
    I recommend the TP-VP04-C 1.5mm for your Helios64 (2 plates of 90x50x1.5mm).
     
     
  12. Like
    SIGSEGV reacted to gprovost in Very noisy fans   
    I shared the design of the thermal pad, showing that the whole area below the heatsink is covered with thermal pad by default.
     

     
    @allen--smithee That's very valuable feedback; I think we will need to review our approach to the thermal pads.
    One factor we also had to take into consideration is ease of installation on the factory line.
  13. Like
    SIGSEGV reacted to allen--smithee in Very noisy fans   
    After one hour of disassembly, cleaning, cutting, installation and reassembly,
    my Helios64 now has Gelid GP-Ultimate thermal pads, which the manufacturer rates at a tidy 15 W/mK.
     
    I think I have just solved my ventilation problem for this generation and the following ones.
    I will run the tests within a week, but for the moment the Helios64 stays very cool, even under full load.
  14. Like
    SIGSEGV got a reaction from allen--smithee in Feature / Changes requests for future Helios64 board or enclosure revisions   
    Yes, either a smaller, SSD-focused version - or a solution to fit more SSDs or 2.5" drives with a SATA port multiplier in the same space as the 5 drives, to increase storage density.
    If later revisions of the main CPU board, or newer boards, can be retrofitted into these cases, it would make for a smooth and straightforward upgrade path.
  15. Like
    SIGSEGV got a reaction from gprovost in NOHZ: local_softirq_pending 08   
    @slymanjojo
    If you don't need transcoding, the minidlna package could be a good alternative: low resource usage, and performance will mostly depend on your network and the playback device.
    Once HW transcoding makes it into the latest kernels, the other packages should behave much better - you're not the only one waiting for it.
  16. Like
    SIGSEGV got a reaction from wurmfood in OpenZFS-2.0.0 Release   
    @roswitina
    Support for OpenZFS is coming; probably v0.8.5 or v0.8.6 will arrive before v2.0.0
     
    In the meantime, if you want to give OpenZFS v2.0.0 a try on the current kernel, you can use these scripts to build the needed debs.
    Keep in mind that you would need to recompile every time you upgrade your Armbian installation, to match the kernel.
    Building can take around 15-30 minutes - the process uses the RAM-based tmpfs, so your storage is untouched.
    Credit goes to @wurmfood and @grek for putting together the original versions.
     
    Type exit after compilation finishes to go back to the default environment.
    Install all the 'deb' files under /tmp/chroot-zfs/zfs with 'dpkg -i'.
     
    add_chroot.sh - run manually as root, or via sudo if you like:
     
    build_zfs.sh - place under the /build directory of the chroot environment.
     
  17. Like
    SIGSEGV reacted to michabbs in [TUTORIAL] First steps with Helios64 (ZFS install & config)   
    Focal / root on eMMC / ZFS on hdd / LXD / Docker
     
    I received my Helios64 yesterday, installed the system, and decided to write down my steps before I forget them. Maybe someone will be interested. :-)
     
    Preparation:
    Assemble your box as described here. Download the Armbian Focal image from here and flash it to an SD card - you may use Etcher. Insert the SD card into your Helios64 and boot. After 15-20 s the box should be accessible via ssh. (Of course, you have to find out its IP address somehow - for example, check your router logs or use this.)  
    First login:
    ssh root@IP
    Password: 1234
    After the prompt, change the password and create your daily user. You should never log in as root again - just use sudo in the future. :-)
    Note: the auto-generated user is a member of the "disk" group. I do not like that; you can remove it like so: "gpasswd -d user disk".
     
    Now move your system to eMMC:
    apt update
    apt upgrade
    armbian-config
    # Go to: System -> Install -> Install to/update boot loader -> Install/Update the bootloader on SD/eMMC -> Boot from eMMC / system on eMMC
    You can choose the root filesystem. I chose ext4. Possibly f2fs might be a better idea, but I have not tested it.
    When finished - power off, eject the SD card, power on. 
    Your system should now boot from eMMC.
     
    If you want to change network configuration (for example set static IP) use this: "sudo nmtui".
    You should also change the hostname:
    sudo armbian-config
    # Go to: Personal -> Hostname
     
    ZFS on hard disk:
    sudo armbian-config
    # Go to Software and install headers.
    sudo apt install zfs-dkms zfsutils-linux
    # Optional:
    sudo apt install zfs-auto-snapshot
    # reboot
    Prepare necessary partitions - for example using fdisk or gdisk.
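    A minimal sketch of that step, using sfdisk (a scriptable sibling of fdisk) on a file-backed image so it can be tried safely; the same script applies to a real device such as /dev/sda, and the size here is purely illustrative:

```shell
# Create a 64 MiB scratch image standing in for a real disk.
truncate -s 64M disk.img
# Write a GPT label with a single Linux partition spanning the whole disk.
printf 'label: gpt\n,,L\n' | sfdisk disk.img
# Inspect the result; on real hardware, note the PARTUUIDs for zpool create.
sfdisk -l disk.img
```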
    Create your zfs pool. More or less this way:
    sudo zpool create -o ashift=12 -m /mypool mypool mirror /dev/disk/by-partuuid/abc123 /dev/disk/by-partuuid/xyz789  
    Reboot and make sure the pool is imported automatically. (For example by typing "zpool status".)
    You should now have working system with root on eMMC and ZFS pool on HDD.
     
    Docker with ZFS:
    Prepare the filesystem:
    sudo zfs create -o mountpoint=/var/lib/docker mypool/docker-root
    sudo zfs create -o mountpoint=/var/lib/docker/volumes mypool/docker-volumes
    sudo chmod 700 /var/lib/docker/volumes
    # Option: If you use zfs-auto-snapshot, you might want to consider this:
    sudo zfs set com.sun:auto-snapshot=false mypool/docker-root
    sudo zfs set com.sun:auto-snapshot=true mypool/docker-volumes
    Create /etc/docker/daemon.json with the following content:
    { "storage-driver": "zfs" }  
    Add /etc/apt/sources.list.d/docker.list with the following content:
    deb [arch=arm64] https://download.docker.com/linux/ubuntu focal stable
    # deb-src [arch=arm64] https://download.docker.com/linux/ubuntu focal stable
    Install Docker:
    sudo apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    sudo apt update
    sudo apt install docker-ce docker-ce-cli containerd.io
    # You might want this:
    sudo usermod -aG docker your-user
     
    Voila! Your Docker should be ready! Test it: "docker run hello-world".
     
    Option: Install Portainer:
    sudo zfs create mypool/docker-volumes/portainer_data
    # You might omit the above line if you do not want a separate dataset for the docker volume (bad idea).
    docker volume create portainer_data
    docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce
    Go to http://yourip:9000 and configure.
     
     
    LXD with ZFS:
     
    sudo zfs create -o mountpoint=none mypool/lxd-pool
    sudo apt install lxd
    sudo lxd init
    # Configure ZFS this way:
    Do you want to configure a new storage pool (yes/no) [default=yes]? yes
    Name of the new storage pool [default=default]:
    Name of the storage backend to use (dir, btrfs, ceph, lvm, zfs) [default=zfs]: zfs
    Create a new ZFS pool (yes/no) [default=yes]? no
    Name of the existing ZFS pool or dataset: mypool/lxd-pool
    [...]
    # You might want this:
    sudo usermod -aG lxd your-user
    # Option: If you use zfs-auto-snapshot, you might want to consider this:
    sudo zfs set com.sun:auto-snapshot=false mypool/lxd-pool
    sudo zfs set com.sun:auto-snapshot=true mypool/lxd-pool/containers
    sudo zfs set com.sun:auto-snapshot=true mypool/lxd-pool/custom
    sudo zfs set com.sun:auto-snapshot=true mypool/lxd-pool/virtual-machines
    That's it. LXD should now work on ZFS. :-)
     
  18. Like
    SIGSEGV reacted to youe in PSU voltage pinouts for Helios4   
    I've been using a Helios4 for nearly 2 years now and it's been fantastic.  A couple of days ago, though, it stopped working.  The serial console revealed a boot loop at "U-Boot SPL", only ever showing the disks and never attempting to boot the OS.  I did some forum searching and discovered the PSU could be the issue.  Using a multimeter, I tested the PSU pins and I get the following (see diagram):

    PIN 1: +2.66V (fluctuates between 2.64-2.66)
    PIN 2: +2.66V (fluctuates between 2.64-2.66)
    PIN 3: GND
    PIN 4: GND
     
    What are the voltages that Helios4 expects for these pins?  Should both 1 & 2 be 12V or should one be 12V and the other 5V?

    I already ordered the following PSU as per another thread (however, this won't arrive for another week)
    https://www.amazon.com/gp/product/B07NCG1P8X/
     
     

  19. Like
    SIGSEGV reacted to youe in PSU voltage pinouts for Helios4   
    Since I had collected as much information from the broken AC adapter brick as possible, I decided to cut off the connector.  This revealed only 2 wires which meant there is only one possible configuration.  Both pins 1 and 2 must be 12V.
     
    I grabbed one of my old computer power supplies, and fashioned a terminal connector to tie the 2 wires to the 12V rail and ground.  I verified the 4 pins on the connector, plugged it in and my Helios4 booted up without any issues.  I'm still waiting about a week and I'll swap out my 500W temporary PSU when it arrives.
     
    Hopefully this thread will be helpful to anyone with similar power issues.
     
     
    Here are the appropriate pinouts of the 4-pin AC adapter required for the Helios4 (see the previous diagram for pin locations):
    PIN 1: +12V
    PIN 2: +12V
    PIN 3: GND
    PIN 4: GND
  20. Like
    SIGSEGV got a reaction from iav in Why I prefer ZFS over btrfs   
    This discussion is interesting because I can relate to both sides.
    As an end user - having multiple filesystems to choose from is great, choosing the right one is where I need to spend my time choosing the right tool for the task. Having a ZFS DKMS is great, but if after an update my compiler is missing a kernel header, I won't be able to reach my data. Shipping a KMOD (with the OS release) might give me access to my data after each reboot/upgrade. The ECC/8GB RAM myth is getting old but it has garnered enough attention that most newbies won't read or search beyond the posts with most views.
     
    Armbian as a whole is always improving - a few weeks ago iSCSI was introduced for most boards (and helped me replace two x86_64 servers with one Helios64) - and an implementation of ZFS that works out of the box will be added soon enough. That being said, this is a community project - if a user wants 24x7 incident support, then maybe the ARM based SBCs + Armbian are not the right choice for them. Having a polite discussion with good arguments (like this thread) is what gets things moving forward - We don't have to agree with all the points, but we all want to have stable and reliable systems as our end goal.
  21. Like
    SIGSEGV got a reaction from TRS-80 in The board does not even try to boot on power-on   
    Did you create a bootable microSD card?
    The U-Boot image is not on SPI yet - so the board will look for it on the SD card and eMMC; if you haven't written the image to either of these, you won't get any output on the console.
  22. Like
    SIGSEGV got a reaction from crosser in The board does not even try to boot on power-on   
    Did you create a bootable microSD card?
    The U-Boot image is not on SPI yet - so the board will look for it on the SD card and eMMC; if you haven't written the image to either of these, you won't get any output on the console.
  23. Like
    SIGSEGV reacted to Ammonia in Only HDD 1 & 2 power up   
    I ordered a replacement MOSFET (same model), including a drop-in replacement from Vishay with better power ratings (just in case it should happen again). Unfortunately shipping is taking longer than it should. I temporarily bypassed the MOSFET to get all the hard drives to boot up so I could transfer the more important data. I was hoping fixing it would be faster than receiving a replacement board - and for me, dealing with UPS is playing Russian roulette. Fixing it by replacing a 13-14 cent component is the better option for us both. Hopefully I won't ruin the board in the process!
  24. Like
    SIGSEGV got a reaction from Borromini in Doubt question about 18Tb Hard Drive   
    Angel,
    Most SATA drives should work out of the box - it's possible that, at the time the specs were written, the Kobol team only had access to drives up to 16TB in size.
    18TB drives should be no problem for modern filesystems (ext4, XFS, ZFS, etc.) - the only real issue might be the power drawn from the power supply at startup.
    Give it a try - connect the drive to the NAS and it should be listed as a device by the command 'lsblk'.
  25. Like
    SIGSEGV got a reaction from umiddelb in Why I prefer ZFS over btrfs   
    This discussion is interesting because I can relate to both sides.
    As an end user - having multiple filesystems to choose from is great, choosing the right one is where I need to spend my time choosing the right tool for the task. Having a ZFS DKMS is great, but if after an update my compiler is missing a kernel header, I won't be able to reach my data. Shipping a KMOD (with the OS release) might give me access to my data after each reboot/upgrade. The ECC/8GB RAM myth is getting old but it has garnered enough attention that most newbies won't read or search beyond the posts with most views.
     
    Armbian as a whole is always improving - a few weeks ago iSCSI was introduced for most boards (and helped me replace two x86_64 servers with one Helios64) - and an implementation of ZFS that works out of the box will be added soon enough. That being said, this is a community project - if a user wants 24x7 incident support, then maybe the ARM based SBCs + Armbian are not the right choice for them. Having a polite discussion with good arguments (like this thread) is what gets things moving forward - We don't have to agree with all the points, but we all want to have stable and reliable systems as our end goal.