Showing results for tags 'helios64'.

  1. Getting a "free(): invalid pointer" error when installing / using python3. It could be a dpkg issue as well. It happened when I ran sudo apt update && sudo apt upgrade, and hence I tried to reinstall. The same issue occurs when trying to manage the box with Ansible, so I think something might be wrong with the latest Python. Board: Helios64 (I know, I know, CSC). Chipset: RK3399.

     apt install log:

     ```
     Reading package lists... Done
     Building dependency tree... Done
     Reading state information... Done
     0 upgraded, 0 newly installed, 2 reinstalled, 0 to remove and 0 not upgraded.
     Need to get 0 B/472 kB of archives.
     After this operation, 0 B of additional disk space will be used.
     (Reading database ... 44574 files and directories currently installed.)
     Preparing to unpack .../python3-pkg-resources_59.6.0-1.2ubuntu0.22.04.1_all.deb ...
     double free or corruption (out)
     Aborted
     dpkg: warning: old python3-pkg-resources package pre-removal script subprocess returned error exit status 134
     dpkg: trying script from the new package instead ...
     dpkg: ... it looks like that went OK
     Unpacking python3-pkg-resources (59.6.0-1.2ubuntu0.22.04.1) over (59.6.0-1.2ubuntu0.22.04.1) ...
     Preparing to unpack .../python3-setuptools_59.6.0-1.2ubuntu0.22.04.1_all.deb ...
     Unpacking python3-setuptools (59.6.0-1.2ubuntu0.22.04.1) over (59.6.0-1.2ubuntu0.22.04.1) ...
     Setting up python3-pkg-resources (59.6.0-1.2ubuntu0.22.04.1) ...
     Setting up python3-setuptools (59.6.0-1.2ubuntu0.22.04.1) ...
     free(): invalid pointer
     Aborted
     dpkg: error processing package python3-setuptools (--configure):
      installed python3-setuptools package post-installation script subprocess returned error exit status 134
     Errors were encountered while processing:
      python3-setuptools
     E: Sub-process /usr/bin/dpkg returned an error code (1)
     ```

     Ansible log:

     ```
     ansible_facts: {}
     failed_modules:
       ansible.legacy.setup:
         ansible_facts:
           discovered_interpreter_python: /usr/bin/python3
         failed: true
         module_stderr: |-
           free(): invalid pointer
           Aborted
         module_stdout: ''
         msg: |-
           MODULE FAILURE
           See stdout/stderr for the exact error
         rc: 134
     msg: |-
       The following modules failed to execute: ansible.legacy.setup
     ```
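A minimal first step to narrow this down (a sketch, not a known fix): check whether the crash comes from the Python interpreter itself or only from the dpkg maintainer scripts that invoke it. Exit status 134 is SIGABRT, matching the "Aborted" lines above; if the bare interpreter aborts too, the problem is below Python (allocator/libc or a corrupted interpreter install), not in the packages being configured.

```shell
# run the interpreter in isolation; a clean exit means the maintainer
# scripts (which import pkg_resources/setuptools) are the likely trigger
if python3 -c 'import sys; print(sys.version.split()[0])'; then
    echo "interpreter OK"
else
    echo "interpreter aborts (rc=$?)"
fi
```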
  2. Last night we lost power to the whole house. I had to manually start the NAS, and when I tried to access my data there was nothing but empty shares. After checking the OpenMediaVault web UI and poking around, the file system status shows as "missing", there is nothing under RAID Management, and all the disks show up properly under Storage > Disks. The NAS is set up as a RAID 6 on ext4 with five 12 TB drives. For starters, I wonder why the internal battery did not gracefully shut down the Helios64 to avoid this, given that power outages are a rather common issue here. Then, how do I get my file system back? After browsing some forums I gathered the following info, which may be important:

     ```
     lsblk
     NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
     sda            8:0    0 10.9T  0 disk
     sdb            8:16   0 10.9T  0 disk
     sdc            8:32   0 10.9T  0 disk
     sdd            8:48   0 10.9T  0 disk
     sde            8:64   0 10.9T  0 disk
     mmcblk1      179:0    0 14.6G  0 disk
     └─mmcblk1p1  179:1    0 14.4G  0 part /
     mmcblk1boot0 179:32   0    4M  1 disk
     mmcblk1boot1 179:64   0    4M  1 disk
     ```

     ```
     cat /etc/fstab
     UUID=dabb3dbf-8631-4051-a032-c0b97eb285bd / ext4 defaults,noatime,nodiratime,commit=600,errors=remount-ro 0 1
     tmpfs /tmp tmpfs defaults,nosuid 0 0
     # >>> [openmediavault]
     /dev/disk/by-label/data /srv/dev-disk-by-label-data ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
     # <<< [openmediavault]
     ```

     ```
     cat /etc/mdadm/mdadm.conf
     # This file is auto-generated by openmediavault (https://www.openmediavault.org)
     # WARNING: Do not edit this file, your changes will get lost.
     # mdadm.conf
     #
     # Please refer to mdadm.conf(5) for information about this file.
     #
     # by default, scan all partitions (/proc/partitions) for MD superblocks.
     # alternatively, specify devices to scan, using wildcards if desired.
     # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
     # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
     # used if no RAID devices are configured.
     DEVICE partitions
     # auto-create devices with Debian standard permissions
     CREATE owner=root group=disk mode=0660 auto=yes
     # automatically tag new arrays as belonging to the local system
     HOMEHOST <system>
     # instruct the monitoring daemon where to send mail alerts
     # definitions of existing MD arrays
     ARRAY /dev/md/helios64:0 metadata=1.2 name=helios64:0 UUID=f2188013:e3cdc6dd:c8a55f0d:d0e9c602
     ```

     ```
     cat /proc/mdstat
     Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
     md0 : inactive sde[4](S) sdc[2](S) sda[0](S) sdd[3](S) sdb[1](S)
           58593766400 blocks super 1.2
     unused devices: <none>
     ```

     ```
     fsck /dev/disk/by-label/
     fsck from util-linux 2.33.1
     e2fsck 1.44.5 (15-Dec-2018)
     fsck.ext2: No such file or directory while trying to open /dev/disk/by-label/
     Possibly non-existent device?
     ```

     I don't want to lose data even though I have it backed up, so I don't want to try to fix this blindly. And given that this could happen again, to others and to myself, it would be very helpful to find a recovery method that others could use too.
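For what it's worth, the /proc/mdstat output above shows md0 in the inactive state with every member flagged (S), which after an unclean shutdown often just means auto-assembly stopped; this is frequently recoverable. A hedged, inspection-first sketch (the helper function is my own, and the assemble commands change state, so examine first):

```shell
# report the state field for a given md device from a /proc/mdstat-style file
md_state() {  # usage: md_state <mdname> <mdstat-file>
    awk -v dev="$1" '$1==dev && $2==":" {print $3}' "$2"
}

# read-only inspection first:
#   md_state md0 /proc/mdstat            # prints "inactive" in this case
#   sudo mdadm --examine /dev/sd[abcde]  # compare event counters across members
# if the examine output is consistent, try a clean re-assembly:
#   sudo mdadm --stop /dev/md0
#   sudo mdadm --assemble --scan --verbose
```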
  3. Dear all, I'm no expert. How can I add the ZFS kernel modules/headers to Linux kernel 5.15.80-rockchip64? I ran "apt install dkms zfs-dkms" and got the following output:

     ```
     Reading package lists... Done
     Building dependency tree... Done
     Reading state information... Done
     dkms is already the newest version (2.8.4-3).
     Suggested packages: debhelper
     Recommended packages: zfs-zed zfsutils-linux linux-libc-dev
     The following NEW packages will be installed: zfs-dkms
     0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
     Need to get 2297 kB of archives.
     After this operation, 18.0 MB of additional disk space will be used.
     Do you want to continue? [Y/n] Y
     Get:1 http://httpredir.debian.org/debian bullseye-backports/contrib arm64 zfs-dkms all 2.1.6-3~bpo11+1 [2297 kB]
     Fetched 2297 kB in 0s (5427 kB/s)
     Preconfiguring packages ...
     Selecting previously unselected package zfs-dkms.
     (Reading database ... 78864 files and directories currently installed.)
     Preparing to unpack .../zfs-dkms_2.1.6-3~bpo11+1_all.deb ...
     Unpacking zfs-dkms (2.1.6-3~bpo11+1) ...
     Setting up zfs-dkms (2.1.6-3~bpo11+1) ...
     Loading new zfs-2.1.6 DKMS files...
     Building for 5.15.80-rockchip64
     Building initial module for 5.15.80-rockchip64
     Done.
     zavl.ko.xz: Running module version sanity check.
      - Original module
        - No original module exists within this kernel
      - Installation
        - Installing to /lib/modules/5.15.80-rockchip64/updates/dkms/
     [identical sanity-check/install messages for znvpair, zunicode, zcommon, zfs, icp, zlua, spl, zzstd]
     depmod......
     DKMS: install completed.
     Processing triggers for initramfs-tools (0.140) ...
     update-initramfs: Generating /boot/initrd.img-5.15.80-rockchip64
     update-initramfs: Converting to u-boot format
     ```

     If I then run "apt install zfsutils-linux" I get:

     ```
     Reading package lists... Done
     Building dependency tree... Done
     Reading state information... Done
     Some packages could not be installed. This may mean that you have
     requested an impossible situation or if you are using the unstable
     distribution that some required packages have not yet been created
     or been moved out of Incoming.
     The following information may help to resolve the situation:

     The following packages have unmet dependencies:
      zfsutils-linux : Depends: libnvpair3linux (= 2.1.6-3~bpo11+1) but 2.1.2-1~20.04.york0 is to be installed
                       Depends: libuutil3linux (= 2.1.6-3~bpo11+1) but 2.1.2-1~20.04.york0 is to be installed
                       Depends: libzfs4linux (= 2.1.6-3~bpo11+1) but 2.1.2-1~20.04.york0 is to be installed
                       Depends: libzpool5linux (= 2.1.6-3~bpo11+1) but 2.1.2-1~20.04.york0 is to be installed
     E: Unable to correct problems, you have held broken packages.
     ```

     Moreover, at https://forum.openmediavault.org/index.php?thread/45376-zfs-version/ ryecoaaron told me: "you really need to make sure the kernel headers are installed first. Then install zfs-dkms. Then install the plugin." How can I install the prerequisites so that afterwards I can also install the OMV6 ZFS plugin?

     P.S. 1: Should omv6 zfs-armhf be installed only on the 32-bit ARM instruction set?
     P.S. 2: To the mods: if this is the wrong section, please move the thread to the right one.
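One hedged reading of the dependency error: the lib* packages are being resolved from a 2.1.2-1~20.04.york0 source (an Ubuntu PPA build) instead of bullseye-backports, so the versions no longer match the backports zfsutils-linux. A sketch of how that is usually untangled, assuming the backports repo is configured as the zfs-dkms install above suggests:

```shell
# first see which repo each candidate comes from
apt policy zfsutils-linux libnvpair3linux

# then force the whole userland stack from the same suite as zfs-dkms
sudo apt install -t bullseye-backports \
    zfsutils-linux libnvpair3linux libuutil3linux libzfs4linux libzpool5linux
```

If `apt policy` shows a leftover york/PPA entry in the sources, removing or de-prioritizing it is the more permanent fix.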
  4. Hi there ... not sure if anyone has an idea here, but I have the following issue: I have a Helios64, equipped with three 2.5" disk drives, running Debian 10 with kernel 5.10.43. Usually the system runs stable, even for days (24/7). But every now and then the disks "fail": during normal operation the disks suddenly turn off, Linux reports "SATA link down" ... and the disks are gone, as if they had been removed. If I reboot, I usually hear only some "clicks" ... I assume the disks try to spin up but fail due to insufficient power (?). To recover from that situation I usually disconnect power (incl. the internal UPS), try a few times, and sooner or later the disks power up successfully again. Then the system runs again for a while without any issues. I already tried re-cabling everything to ensure the cables are properly connected, but that does not seem to help. I assume there is a hardware issue with the SATA power rails (both show the same effect). For that reason I'm looking for the schematics, to trace this a little deeper ... does anyone know if they are available somewhere? Thanks!
  5. Hello, when I installed the Helios64 initially I had the problem of the fans spinning at 100% (or really loud), and I solved it somehow. It was working fine, with the fans spinning for one second when starting and shutting down the device. Now I finally updated the NAS with

     ```
     sudo apt update
     sudo apt upgrade
     sudo omv-release-upgrade
     sudo reboot
     ```

     and after the reboot the fans are spinning really loud again, all the time. Is anyone aware of an easy solution or a quick fix? Someone else had the same problem at https://forum.openmediavault.org/index.php?thread%2F42550-fans-go-on-full-and-stay-there-after-doing-an-update-in-omv-web-control-panel-he%2F= but it seems he either did not solve it or just forgot to report back. I'm not sure if it is related to my problem; that thread talks about changing the fans but also mentions the PWM fan control again. On Reddit someone mentions these settings as well. I checked, and my fancontrol settings seem to be identical or pretty close. In another Reddit thread it was suggested: "I went to armbian-config => System => CPU, set minimum and maximum CPU speed at 1200000 and set 'governor' to 'performance'." But that is about "fan too loud", not "fan spinning at 100%". So I wanted to ask you guys if you know a fix for this problem 😕
  6. Dear all, I'm looking for some help with my Helios64 NAS: since I updated the kernel to 5.15, zfs-dkms won't work. Which is expected, since the most recent zfs-dkms version available for the Helios64 is 2.0.3-1~bpo10+1 (tested on 1st April 2022), and this version is only compatible with kernels 3.10 to 5.10. So the first solution I thought of was to downgrade to 5.10 through the armbian-config tool. However, when I try to install the linux-headers (through armbian-config) I get the headers for 5.15 (same issue with apt install linux-headers-current-rockchip64), while zfs-dkms is looking for 5.10:

     ```
     Preparing to unpack .../zfs-dkms_2.0.3-1~bpo10+1_all.deb ...
     Unpacking zfs-dkms (2.0.3-1~bpo10+1) ...
     Setting up zfs-dkms (2.0.3-1~bpo10+1) ...
     Loading new zfs-2.0.3 DKMS files...
     It is likely that 5.10.63-rockchip64 belongs to a chroot's host
     Building for 5.15.25-rockchip64
     Building initial module for 5.15.25-rockchip64
     configure: error:
         *** None of the expected "capability" interfaces were detected.
         *** This may be because your kernel version is newer than what is
         *** supported, or you are using a patched custom kernel with
         *** incompatible modifications.
         ***
         *** ZFS Version: zfs-2.0.3-1~bpo10+1
         *** Compatible Kernels: 3.10 - 5.10
     Error! Bad return status for module build on kernel: 5.15.25-rockchip64 (aarch64)
     Consult /var/lib/dkms/zfs/2.0.3/build/make.log for more information.
     ```

     /var/lib/dkms/zfs/2.0.3/build/make.log:

     ```
     DKMS make.log for zfs-2.0.3 for kernel 5.15.25-rockchip64 (aarch64)
     Fri 01 Apr 2022 05:38:22 PM UTC
     make: *** No targets specified and no makefile found.  Stop.
     ```

     I then tried a fresh installation of the system by downloading Armbian_21.08.2_Helios64_buster_current_5.10.63, but things got worse: I can't download the headers through armbian-config (nothing happens), and apt install linux-headers-current-rockchip64 keeps installing the sources for 5.15 ('/usr/src/linux-headers-5.15.25-rockchip64').

     So the problem, as I see it, is either to:
     * get the linux-headers of the previous kernel, or
     * obtain a zfs-dkms version compatible with 5.15.

     Thank you in advance for your help. PS: I am aware of the Docker alternative, but I prefer to use zfs-dkms.
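A hedged sketch for the first option (getting the matching 5.10 headers), assuming the older versioned packages are still present in the Armbian repository; the version string below is illustrative, not taken from the post. The metapackage always pulls the newest build, so pin the exact image/headers pair instead and hold it:

```shell
# list every available version of the headers package
apt list -a linux-headers-current-rockchip64

# install a matching versioned image/headers pair, then hold them
# (21.08.2 is a placeholder; use a version shown by the command above)
sudo apt install linux-image-current-rockchip64=21.08.2 \
                 linux-headers-current-rockchip64=21.08.2
sudo apt-mark hold linux-image-current-rockchip64 linux-headers-current-rockchip64

# after rebooting into 5.10, rebuild the ZFS module for the running kernel
sudo dkms autoinstall -k "$(uname -r)"
```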
  7. Dear forum, I am using a Helios64 with Armbian Debian 10 (Buster), Linux kernel 5.15.52 and OMV 5.6.26-1. If I want further updates from OMV I need to upgrade to Debian 11, because OMV6 isn't compatible with Buster. Any recommendations on how I should upgrade? The H64 system is on an SSD. Should I try a CLI system upgrade, or install everything fresh from a Bullseye image, if one is available (where?)? Best wishes, Greg
  8. Dear all, I'm new to the Linux environment, so please be patient with me. I have a Helios64 unit with OpenMediaVault 6 installed. When I reboot or power off the Helios64 (both from the web GUI and from the power button), the hard disks (2x WD60EFRX) make a faint sound from the heads. How can I check that the heads are parked correctly? I don't want to damage my hard disks. Best regards
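One hedged way to check this (assumes smartmontools is installed): drives count head-parking events in the SMART attributes Load_Cycle_Count and Power-Off_Retract_Count. If those counters increase by roughly one per shutdown and no error attributes grow, the heads are parking normally. A small helper to pull a single attribute out of smartctl output; the function name is mine:

```shell
# extract the raw value of a named attribute from `smartctl -A` output
smart_attr() {  # usage: smartctl -A /dev/sdX | smart_attr <Attribute_Name>
    awk -v name="$1" '$2==name {print $NF}'
}

# typical use:
#   sudo smartctl -A /dev/sda | smart_attr Load_Cycle_Count
#   sudo smartctl -A /dev/sda | smart_attr Power-Off_Retract_Count
```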
  9. Upgrading a Helios64 from Armbian Buster to Bullseye (see below) works as expected on my system. However, I am using systemd-networkd and just a few services (Nextcloud, Netatalk, etc., and no ZFS). EDIT: Upgrading Buster installations to Bullseye also works fine if you use network-manager, even if you have a bridge configured (using bridge-slave; binutils-bridge).

     ```
     # cat /etc/os-release
     PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
     NAME="Debian GNU/Linux"
     VERSION_ID="11"
     VERSION="11 (bullseye)"
     VERSION_CODENAME=bullseye
     ID=debian
     HOME_URL="https://www.debian.org/"
     SUPPORT_URL="https://www.debian.org/support"
     BUG_REPORT_URL="https://bugs.debian.org/"
     ```

     ```
     (Helios64 ASCII-art banner)
     Welcome to Armbian 21.08.1 Bullseye with Linux 5.10.43-rockchip64

     System load:   2%             Up time:     12:29
     Memory usage:  19% of 3.77G   IP:          xx.xx.xx.xx
     CPU temp:      42°C           Usage of /:  41% of 15G
     storage/:      57% of 3.6T
     ```

     Edit: Attention: if you upgrade your Buster or Bullseye installation on eMMC to Armbian 21.08.1, it will not be writable anymore. You will then have to downgrade Linux on eMMC from 5.10.60 to 5.10.43 as described in this thread. Edit: There is a temporary fix for the problem. See this message from @piter75.

     To upgrade Armbian Buster to Bullseye, first disable Armbian updates in /etc/apt/sources.list.d/Armbian.list for the time being. Then fully upgrade Buster (sudo apt update && sudo apt upgrade -y), then change the apt sources (see below), followed by 'sudo apt update && sudo apt full-upgrade'. I kept all the configuration files by answering 'N' in the following dialogue.

     ```
     # cat /etc/apt/sources.list
     deb http://deb.debian.org/debian/ bullseye main
     deb-src http://deb.debian.org/debian/ bullseye-updates main
     deb http://security.debian.org/debian-security bullseye-security main
     deb-src http://security.debian.org/debian-security bullseye-security main
     deb http://ftp.debian.org/debian bullseye-backports main
     ```
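The suite rename in the sources file can be done mechanically. A minimal sketch (the function name is mine, not from the post; back the file up first, as the post's procedure implies):

```shell
# rewrite every whole-word occurrence of one suite name with another
# in an apt sources file
switch_suite() {  # usage: switch_suite <file> <old-suite> <new-suite>
    sed -i "s/\b$2\b/$3/g" "$1"
}

# typical use:
#   sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
#   switch_suite /etc/apt/sources.list buster bullseye
```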
  10. As stated in my comments here (but it seems no one is reading that blog post's comments anymore ...): https://blog.kobol.io/2020/10/27/helios64-software-issue/

      - With 5.8.17 I get kernel panics.
      - With 5.8.14 I get freezes.

      So what should I do to get a stable setup?! It seems to happen "under load" (one freeze while the RAID array was being built, and several more (with the .14 or .17 kernels) while files were being copied over the 1 Gb/s network interface).

      - System newly installed and running on a fresh SD card.
      - 5x 3.5" WD HDDs plugged in, RAID5 mdadm array (so no M.2, ...)
      - Nothing else done on the NAS (nothing installed on the OS, no processes in memory besides RAID + rsync or scp file copying through SSH).
  11. In case someone is interested in an almost new Helios64 "full bundle" with a lot of spare parts that only ran for about 3 months: https://www.ebay.de/itm/363567434154 Thanks, Paul
  12. I accidentally ripped off a capacitor from the back of the Helios64. This missing capacitor is right below the SATA connector. I searched for the schematic of the board without any success. Does anyone happen to know the value and size of this capacitor?
  13. I'm sick of having to play reset-it-until-it-boots-then-it-reboots-later game, anyone know a similar motherboard I can use in place of the helios64 with minimal extra messing around? I totally understand that the front and back panel will be a problem to set up, I just don't want to have to hack the main case part, and have support for the five SATA ports.
  14. wurmfood

    Zram questions

    I'm trying to understand the role of zram and I'm having some difficulties with it. I've looked in several places and my understanding is this:

    - It's a block device in RAM that uses compression.
    - It functions similarly to swap.

    I notice that mine is set to about 2 GB:

    ```
    wurmfood@helios64:~$ cat /proc/swaps
    Filename      Type       Size     Used  Priority
    /dev/zram0    partition  1945084  0     5
    ```

    My questions:
    1. Does that mean 2 GB of the device's RAM is used for the zram swap device?
    2. I can understand this on devices with slower storage, but would it be reasonable to disable this and instead use swap on an SSD?
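On the first question: the size in /proc/swaps is the uncompressed capacity of the zram device, not RAM actually consumed; pages are stored compressed, and `zramctl` shows the real footprint (DATA vs COMPR columns). A small inspection sketch; the helper name is mine:

```shell
# print a swap device's advertised size in MiB from a /proc/swaps-style file
swap_mib() {  # usage: swap_mib <device> <file>
    awk -v dev="$1" '$1==dev {printf "%d\n", $3/1024}' "$2"
}

# typical use:
#   swap_mib /dev/zram0 /proc/swaps   # ~1899 MiB here (the "2 GB")
#   zramctl                           # compressed size = actual RAM cost
```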
  15. Hello, I recently bought a Helios64. I assembled it and followed all the steps on the wiki here https://wiki.kobol.io/helios64/kit/ and here https://wiki.kobol.io/helios64/install/sdcard/, but when I try to follow the first-start steps at https://wiki.kobol.io/helios64/install/first-start/ the USB-to-serial bridge is not detected by my notebooks (I tried notebook 1, Windows 10 x64 with the drivers downloaded and installed via the exe setup, and notebook 2, an x86 CPU with the latest Ubuntu kernel and Windows XP). I removed the backplane cover; jumpers P10, P11 and P13 (see the jumper description at https://wiki.kobol.io/helios64/jumper/) are all open. So my question is: what is the problem?

      P.S. 1: When I followed the steps at https://wiki.kobol.io/helios64/install/sdcard/, the system booted automatically as soon as I plugged in the power connector, without my pressing the power button (at step 4 the page says "Don't power-up the Power Adapter yet"). Could something have happened there that causes the non-detection on my notebooks?
      P.S. 2: Can the USB-to-serial bridge be detected by a notebook while the system is powered off, with only the power connector plugged in (only LED1 lit, https://wiki.kobol.io/helios64/led/)? I ask because in this video https://www.youtube.com/watch?v=58coL23Bzzw the presenter first powers on the Helios at 2:05, then when it is connected over USB and he runs the picocom command the Helios is off at 2:08, and finally he powers the device on again at 2:10.
      P.S. 3: When I realized that the USB-to-serial bridge wasn't being detected by my notebook, I turned the device off with the power button. Sometimes pressing the button for 1 second is enough to turn it off; other times I have to press it for 5 seconds, but in the latter case I hear a whistle ... could something be damaged in that second scenario?
      P.S. 4: If I need to buy jumper shorts, is the 2.54 mm spec right?

      Hoping somebody can help me. Best regards
  16. Hello, we have been using our Kobol Helios64 with OMV for 4 months, with no problems at all. Just yesterday, wanting to change the Ethernet cable, we correctly shut down the Helios64 NAS, and now it is impossible to start it again. We just get the right system light; the left LED is not blinking, and there is no information on the USB-C terminal. For recovery we pressed the recovery button and then the power button, and the System LED does not light up. What can we do, format the system and reinstall OMV with all its parameters... Not really great, is it? Thank you for any help. Regards.
  17. Feels like this should be on here so everyone knows https://blog.kobol.io/2021/08/25/we-are-pulling-the-plug/
  18. I've been having a recurring issue over the last couple of months where my armbianEnv.txt file is being overwritten, causing the Helios to hang in the boot process. Using some samples I found on these forums, I was able to re-create the file and get the Helios back up and running, and I have since stored a copy so that when it happens I can just copy it back and be running again in no time. To date, this has happened to me 4 times. I'm not worried about having to go through this at the moment, but I was curious whether anyone else has seen this issue. I've saved a couple of samples of what was written into the armbianEnv.txt file:

      ```
      /var/log/armbian-hardware-monitor.log { rotate 12 weekly compress missi
      ```

      ```
      /var/log/alternatives.log { monthly rotate 12 compress delaycompress missingok notifempty create 644 root root }
      ```

      I'm running Buster 5.10.21. I have 5x 14 TB EXOS drives in RAID 6, and OMV is the only thing installed.
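Until the root cause is found (the clobbered content looks like logrotate configuration, which hints at a stray write during log rotation), a hedged workaround is to keep a known-good copy and restore it automatically at boot. The helper and the .good path below are my inventions, not an established Armbian mechanism:

```shell
# restore a file from a known-good copy if it no longer matches
restore_if_changed() {  # usage: restore_if_changed <file> <good-copy>
    if ! cmp -s "$1" "$2"; then
        cp "$2" "$1"
        echo "restored $1"
    fi
}

# typical use (e.g. from a @reboot cron job or a systemd unit):
#   sudo cp /boot/armbianEnv.txt /boot/armbianEnv.txt.good
#   restore_if_changed /boot/armbianEnv.txt /boot/armbianEnv.txt.good
```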
  19. After a recent update to non-kernel stuff, my Helios64 will no longer boot. When attempting to, I get this error:

      ```
      [   32.425508] xhci-hcd xhci-hcd.3.auto: Host halt failed, -110
      [   35.019496] xhci-hcd xhci-hcd.3.auto: Host halt failed, -110
      [   35.020028] xhci-hcd xhci-hcd.3.auto: Host controller not halted, aborting reset.
      ```

      I had been using the on-board eMMC to boot, with the SD card as a backup, but now neither one is working. Does anyone know if there's a way to fix this? Here's more, using verbosity=4 in armbianEnv.txt.
  20. Hello, I have been holding onto kernel 5.10 (5.10.21-rockchip64 to be exact) for a while as it has been pretty stable for me and also because I heard bad feedback from newer kernels on the Helios 64. But with the recent discovery of the DirtyPipe exploit (which has been introduced in Linux 5.8), I might be looking to upgrade to the latest Armbian kernel (which contains a fix to the vulnerability). So, my question to fellow Helios64 users is: have you encountered any issue with kernel 5.15, especially when using OpenMediaVault 5, Docker, MergerFS and NFS?
  21. Freshly built edge Bullseye image with kernel:

      ```
      Linux helios 5.18.0-rockchip64 #trunk SMP PREEMPT Sun May 29 20:19:27 EEST 2022 aarch64 GNU/Linux
      ```

      fancontrol does not work, because there are no /dev/fan devices. How can I get fancontrol working again?
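On newer kernels the fan nodes and hwmon indices tend to move around, so a hedged first step is to locate which hwmon devices still expose a PWM control and point fancontrol's config at those paths. The helper name is mine:

```shell
# list hwmon devices under a sysfs root that expose a pwm1 control node
find_pwm() {  # usage: find_pwm <sysfs-hwmon-root>
    for d in "$1"/hwmon*; do
        [ -e "$d/pwm1" ] && printf '%s %s\n' "$d" "$(cat "$d/name" 2>/dev/null)"
    done
}

# typical use:
#   find_pwm /sys/class/hwmon
# then update the DEVPATH/DEVNAME/FCTEMPS entries in /etc/fancontrol to match.
```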
  22. Hello, I recently received a Helios64 from a friend. However, when I got home, plugged it in and turned it on (I already have to press the power button for 4 seconds to get any sign of life), I get the following LEDs lit up: LED1, LED2, LED3, LED5, LED6. I get nothing on the serial line (tested on macOS and Windows), and no reaction to quick or long presses on Recovery or Reset. Is there an electronic schematic that could help me debug the hardware, or a procedure to get JTAG access to the RK3399? Regards,
  23. Hey guys. I have a Helios64 with 5x WD 14 TB Red Pro drives and a ZFS filesystem, but I'm currently out of free space and want to extend my storage pool somehow. I wonder if it's possible to buy an additional storage enclosure, such as the QNAP TL-D800C (JBOD, 1x USB 3.2 Gen 2 Type-C), put in an additional 8x HDDs, and create an additional raidz1 ZFS pool on them. Do you think it will work?
  24. Hello, my NAS becomes unresponsive every time I run a somewhat CPU-intensive task: indexing photos in PhotoPrism, importing followings in Mastodon... or just browsing Mastodon. Is this expected of the Rockchip RK3399, or can I improve the situation with the right cpufreq config? For context, here are my cpufreq config and kernel (I have yet to update):

      ```
      axel@helios64:~$ cat /etc/default/cpufrequtils
      ENABLE="true"
      GOVERNOR=performance
      MAX_SPEED=1200000
      MIN_SPEED=1200000
      axel@helios64:~$ uname -a
      Linux helios64 5.10.21-rockchip64 #21.02.3 SMP PREEMPT Mon Mar 8 01:05:08 UTC 2021 aarch64 GNU/Linux
      ```

      Thanks!
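One observation on the config above: MAX_SPEED=1200000 caps every core at 1.2 GHz, which also limits the RK3399's two Cortex-A72 cores (rated well above that on the Helios64), so CPU-heavy tasks run slower than the SoC allows. A hedged alternative to try; the exact values are suggestions, not from the post:

```shell
# /etc/default/cpufrequtils -- let the cores scale up under load
# (ondemand drops to MIN_SPEED when idle, raises toward MAX_SPEED when busy)
ENABLE="true"
GOVERNOR=ondemand
MIN_SPEED=408000
MAX_SPEED=1800000
```

If responsiveness rather than throughput is the problem, it may also be worth checking with `iostat`/`top` whether the stalls are actually I/O wait rather than CPU saturation.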