yay

Members
  • Posts: 10
  • Joined
  • Last visited
Reputation Activity

  1. Like
    yay reacted to alchemist in vanilla kernel 5.15 and fancontrol   
    Finally OK with vanilla 5.16 kernel.
     
    I had to edit the udev rule to alias fan1 and fan2 to fan-p6 and fan-p7; see the "fan1" and "fan2" lines below:
     
    # Helios64 persistent hwmon
    ACTION=="remove", GOTO="helios64_hwmon_end"
    #
    KERNELS=="p6-fan", SUBSYSTEMS=="platform", ENV{_HELIOS64_FAN_}="p6", ENV{_IS_HELIOS64_FAN_}="1", ENV{IS_HELIOS64_HWMON}="1"
    KERNELS=="p7-fan", SUBSYSTEMS=="platform", ENV{_HELIOS64_FAN_}="p7", ENV{_IS_HELIOS64_FAN_}="1", ENV{IS_HELIOS64_HWMON}="1"
    KERNELS=="2-004c", SUBSYSTEMS=="i2c", DRIVERS=="lm75", ENV{IS_HELIOS64_HWMON}="1"
    KERNELS=="thermal_zone0", SUBSYSTEMS=="thermal", ENV{IS_HELIOS64_HWMON}="1"
    KERNELS=="fan1", SUBSYSTEMS=="platform", ENV{_HELIOS64_FAN_}="p6", ENV{_IS_HELIOS64_FAN_}="1", ENV{IS_HELIOS64_HWMON}="1"
    KERNELS=="fan2", SUBSYSTEMS=="platform", ENV{_HELIOS64_FAN_}="p7", ENV{_IS_HELIOS64_FAN_}="1", ENV{IS_HELIOS64_HWMON}="1"
    SUBSYSTEM!="hwmon|thermal", GOTO="helios64_hwmon_end"
    ENV{HWMON_PATH}="/sys%p"
    #
    ATTR{type}=="soc-thermal", ENV{HWMON_PATH}="/sys%p/temp", ENV{HELIOS64_SYMLINK}="/dev/thermal-cpu/temp1_input", RUN+="/usr/bin/mkdir /dev/thermal-cpu/"
    ATTR{name}=="cpu|cpu_thermal", ENV{IS_HELIOS64_HWMON}="1", ENV{HELIOS64_SYMLINK}="/dev/thermal-cpu"
    #
    ENV{IS_HELIOS64_HWMON}=="1", ATTR{name}=="lm75", ENV{HELIOS64_SYMLINK}="/dev/thermal-board"
    ENV{_IS_HELIOS64_FAN_}=="1", ENV{HELIOS64_SYMLINK}="/dev/fan-$env{_HELIOS64_FAN_}"
    #
    ENV{IS_HELIOS64_HWMON}=="1", RUN+="/bin/ln -sf $env{HWMON_PATH} $env{HELIOS64_SYMLINK}"
    LABEL="helios64_hwmon_end"
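
    After editing the rule, a quick way to apply and verify it without rebooting (a minimal sketch; the rule path /etc/udev/rules.d/90-helios64-hwmon.rules is only an example, use whatever name your image ships):

    # reload udev rules and re-trigger the hwmon devices
    sudo udevadm control --reload-rules
    sudo udevadm trigger --subsystem-match=hwmon

    # the fan aliases should now exist as symlinks for fancontrol to use
    ls -l /dev/fan-p6 /dev/fan-p7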
  2. Like
    yay reacted to aprayoga in Kobol Team is pulling the plug ;(   
    Hi everyone, thank you for all of your support.
    I'll still be around, but not full-time as before.

    I saw that Helios64 has several issues in this new release. I will start looking into them.
  3. Like
    yay got a reaction from Demodude123 in eth1 (2.5) vanished: Upgrade to 20.11.3 / 5.9.14-rockchip64   
    I didn't notice at first, but this issue was fixed in commit 8eece74e on the master branch (thanks, Piotr!). So everything should just work on beta again. I built my own kernel from master not too long ago and everything is working fine.
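
    In case anyone wants to do the same, a minimal sketch of such a self-built kernel using the Armbian build framework (the compile.sh options reflect my understanding of the framework at the time; treat them as assumptions and check the build docs):

    git clone https://github.com/armbian/build.git
    cd build
    # build just the kernel packages for the Helios64 from the current branch
    ./compile.sh BOARD=helios64 BRANCH=current KERNEL_ONLY=yes KERNEL_CONFIGURE=no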
  4. Like
    yay got a reaction from Zageron in eth1 (2.5) vanished: Upgrade to 20.11.3 / 5.9.14-rockchip64   
    According to your latest blog post:
    That's awesome to hear, thank you very much for addressing this issue! Is there any way to test this change upfront? I'd be eager to give this a spin with a self-built kernel/image if necessary, but couldn't find anything concrete in the public git repositories so far.
  5. Like
    yay got a reaction from Zageron in eth1 (2.5) vanished: Upgrade to 20.11.3 / 5.9.14-rockchip64   
    Nvm, I found PR 2567. With those changes applied, I can cheerfully report 2.35Gbps net TCP throughput via iperf3 in either direction with an MTU of 1500 and tx offloading enabled. Again, thanks so much!
     
    One way remains to reproducibly kill the H64's eth1: set the MTU to 9124 on both ends and run iperf in both directions at once - each end running its own server while acting as a client to the other (see the sketch below). But since the throughput difference is negligible (2.35Gbps vs. 2.39Gbps, both really close to the 2.5Gbps maximum), I can safely exclude that use case for myself. I can't recall whether that even worked on kernel 4.4, or whether I even tested it back then.
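
    For reference, a sketch of that bidirectional setup (addresses and ports are placeholders; interface names are from my setup):

    # on both ends: raise the MTU
    sudo ip link set eth1 mtu 9124        # on the Helios64
    sudo ip link set enp6s0 mtu 9124      # on the desktop

    # one server per direction
    iperf3 -s -p 5201                     # on the desktop
    iperf3 -s -p 5202                     # on the Helios64

    # then start both clients at the same time
    iperf3 -c <desktop-ip> -p 5201 -t 600     # on the Helios64
    iperf3 -c <helios64-ip> -p 5202 -t 600    # on the desktop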
     
    For whatever it's worth, here's the 5.9 log output from when said crash happens:
    Jan 21 22:51:55 helios64 kernel: r8152 4-1.4:1.0 eth1: Tx status -2
    Jan 21 22:51:55 helios64 kernel: r8152 4-1.4:1.0 eth1: Tx status -2
    Jan 21 22:51:55 helios64 kernel: r8152 4-1.4:1.0 eth1: Tx status -2
    Jan 21 22:51:55 helios64 kernel: r8152 4-1.4:1.0 eth1: Tx status -2
  6. Like
    yay got a reaction from aprayoga in eth1 (2.5) vanished: Upgrade to 20.11.3 / 5.9.14-rockchip64   
    Nvm, I found PR 2567. With those changes applied, I can cheerfully report 2.35Gbps net TCP throughput via iperf3 in either direction with an MTU of 1500 and tx offloading enabled. Again, thanks so much!
     
    One way remains to reproducibly kill the H64's eth1: set the MTU to 9124 on both ends and run iperf in both directions at once - each end running its own server while acting as a client to the other. But since the throughput difference is negligible (2.35Gbps vs. 2.39Gbps, both really close to the 2.5Gbps maximum), I can safely exclude that use case for myself. I can't recall whether that even worked on kernel 4.4, or whether I even tested it back then.
     
    For whatever it's worth, here's the 5.9 log output from when said crash happens:
    Jan 21 22:51:55 helios64 kernel: r8152 4-1.4:1.0 eth1: Tx status -2
    Jan 21 22:51:55 helios64 kernel: r8152 4-1.4:1.0 eth1: Tx status -2
    Jan 21 22:51:55 helios64 kernel: r8152 4-1.4:1.0 eth1: Tx status -2
    Jan 21 22:51:55 helios64 kernel: r8152 4-1.4:1.0 eth1: Tx status -2
  7. Like
    yay got a reaction from clostro in eth1 (2.5) vanished: Upgrade to 20.11.3 / 5.9.14-rockchip64   
    According to your latest blog post:
    That's awesome to hear, thank you very much for addressing this issue! Is there any way to test this change upfront? I'd be eager to give this a spin with a self-built kernel/image if necessary, but couldn't find anything concrete in the public git repositories so far.
  8. Like
    yay reacted to aprayoga in eth1 (2.5) vanished: Upgrade to 20.11.3 / 5.9.14-rockchip64   
    We are working on this issue.
    The same Realtek driver works fine on an Ubuntu laptop (amd64), on a Helios4 (armhf), and on a Helios64 with LK 4.4.
    We are looking into the USB host controller driver; it seems some quirks are not implemented in the mainline kernel.
  9. Like
    yay got a reaction from clostro in eth1 (2.5) vanished: Upgrade to 20.11.3 / 5.9.14-rockchip64   
    Yes, it's off. "ethtool -K eth1 tx off" produced no output, and eth1 still gave up just a few seconds after the transfer started. Forcing tx offloading on and off again (should we really trust the driver to report the current setting?) didn't improve things. Yesterday I fiddled around with disabling autosuspend for the USB ports and devices in /sys, in case it was some weird issue like that - no improvement.
     
    However, I've found that changing the advertised speeds for autonegotiation on my desktop's side (removing the bit for 2.5Gbps, for example) causes the link to go down for a few seconds and then come back up - the Helios64's eth1 picks that up and returns to a stable 1Gbps connection. So eth1 can at least be resurrected within a few seconds, without a full NAS reboot.
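
    For the record, a sketch of that link bounce from the desktop side (enp6s0 and the bitmask values are from my setup; the 2.5Gbps bit position is an assumption, verify the modes your NIC reports with "ethtool enp6s0"):

    # advertise only the 10/100/1000 modes (bits 0-5), dropping 2500baseT/Full;
    # the link renegotiates and the Helios64's eth1 comes back at a stable 1Gbps
    sudo ethtool -s enp6s0 advertise 0x3f

    # later, re-add 2500baseT/Full (assumed to be bit 47, i.e. 0x800000000000)
    sudo ethtool -s enp6s0 advertise 0x80000000003f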
  10. Like
    yay got a reaction from tionebrr in zfs read vs. write performance   
    sync; echo 3 > /proc/sys/vm/drop_caches

    Check Documentation/sysctl/vm.txt for details on drop_caches.
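
    For context, the individual values (as documented there):

    sync                                # write dirty pages back to disk first
    echo 1 > /proc/sys/vm/drop_caches   # free the page cache only
    echo 2 > /proc/sys/vm/drop_caches   # free reclaimable slab objects (dentries, inodes)
    echo 3 > /proc/sys/vm/drop_caches   # free both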
  11. Like
    yay got a reaction from clostro in eth1 (2.5) vanished: Upgrade to 20.11.3 / 5.9.14-rockchip64   
    I've been having the same issue of eth1 just not working right on a 2.5Gbps link on kernel 5.9.x. The same applies to a self-built 5.10.1 kernel, and also when going back to r8152 2.13.0 instead of the default 2.14.0 (by adapting the revision in build/lib/compilation-prepare.sh). With a 1Gbps connection, everything is perfectly fine. That connection speed was "forced" by running "ethtool -s enp6s0 advertise 0x3f" on the desktop side, taking 2.5Gbps out of the advertised speeds for autonegotiation. I have a direct 5m Cat6 connection from my Helios64 to my desktop machine. The latter has a 2.5Gbps RTL8125 on the mainboard, using the vanilla kernel's r8169 module. MTUs were kept at 1500 for all tests.
     
    Running the Helios64 on kernel 4.4.213 (via Armbian_20.11.4_Helios64_buster_legacy_4.4.213.img.xz), I was able to transfer 4TiB back and forth in parallel without any issue: Helios64 -> desktop ran at ~1.7Gbps; desktop -> Helios64 ran at ~2.1Gbps. 1.7Gbps isn't great, but it's acceptable with tx offloading disabled - certainly better than "just" 1Gbps (~933Mbps, realistically).
     
    One surefire way to kill the Helios64's eth1 is to start a simple iperf3 run with it sending data to my desktop: "iperf -c <desktop-ip> -t 600". After a few seconds, the speed drops to 0, the kernel watchdog resets the USB device (as per r8152.c's rtl8152_tx_timeout), it recovers for a few more seconds, and then eth1 is completely dead until I reboot the entire NAS. No ping, no connection, nothing.
     
    Here's what I can see via journalctl -f from a parallel connection when such a crash happens:
     
    If there are any more logs you'd need or software changes to try and apply, I'd be eager to dig deeper.