Showing results for tags 'helios64'.

  1. Hey guys, I figured it was time to ask again... Does doing an `apt full-upgrade` from Buster to Bullseye still break anything, like it did a year ago? I kind of want to do it, but not if the eMMC issues are still around; I'd really rather not use the slower SD-card method. I also want to ask about stability for people still running a Kobol with Armbian on Bullseye. Any pertinent issues with anything? A few people on here have claimed that they got the vanilla Debian kernel to run on the Helios64, but I'm not sure how... It sure would be great if the vanilla Debian distro worked on this board, but I doubt that'll ever happen... I love my Kobol, and I am going to have a hard time finding a good replacement for it. I just really like having full CLI access, complete understanding of all the hardware, little to no proprietary bits, and the openness of the hardware. It's all great. But man, finding alternative NASes is going to be hard for me now. Hahaha. (Please ignore the fact that I posted this on April Fools' Day. That's just a coincidence.)
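For reference, the usual in-place Debian upgrade path looks roughly like this. This is only a sketch, assuming a stock Buster sources.list; it does not answer the eMMC question above, and a full backup of /boot and data should come first. The preview step below rewrites a sample copy so you can inspect the result before touching the real file.

```shell
# Sample buster sources line; on the real box start from /etc/apt/sources.list instead.
printf 'deb http://deb.debian.org/debian buster main contrib non-free\n' > /tmp/sources.list.new
sed -i 's/buster/bullseye/g' /tmp/sources.list.new
cat /tmp/sources.list.new
# Then, on the Helios64 itself:
#   sudo sed -i 's/buster/bullseye/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
#   sudo apt update && sudo apt full-upgrade
```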
  2. My Helios64 NAS is suddenly, randomly, just stopping. It appears to be totally random. It doesn't appear to be some form of actual shutdown, as the front panel lights all stay on, but I lose all access including the serial console. The only way to reboot is a long press on the power button to physically reset the box. I've cranked the verbosity up to 7 in /boot/armbianEnv.txt in case something is output on the serial console, but I've not seen any useful output yet; for obvious reasons it's hard to catch it happening. Because /var/log is cleared every boot, I can't see if anything was logged just prior to the halt event. Not sure where to start trying to diagnose. One option is to remove the folder2ram mount for /var/log to persist logs? Output of uname -a in case it helps: Linux helios64 5.10.43-rockchip64 #21.05.4 SMP PREEMPT Wed Jun 16 08:02:12 UTC 2021 aarch64 GNU/Linux. Kobol folks: I'm in Singapore if that helps any.
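One lighter-weight alternative to removing the folder2ram mount is making the systemd journal persistent, so the last messages before a hang survive the forced power-cycle (this assumes the image uses systemd-journald, which Armbian does). The sed below is shown on a sample copy; on the box, the same edit goes in /etc/systemd/journald.conf.

```shell
# Flip Storage= to persistent (demonstrated on a sample file).
printf '[Journal]\n#Storage=auto\n' > /tmp/journald.conf
sed -i 's/^#\?Storage=.*/Storage=persistent/' /tmp/journald.conf
cat /tmp/journald.conf
# On the Helios64:
#   sudo mkdir -p /var/log/journal
#   sudo systemctl restart systemd-journald
#   journalctl -b -1 -e     # after the next hang: log of the previous boot
```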
  3. Dear all, I'm new to the Linux environment, so please be patient with me. I have a Helios64 unit with Openmediavault 6 installed. When I reboot or power off the Helios64 (either from the web GUI or from the power button), the hard disks (2× WD60EFRX) make a clicking sound from the heads. How can I check that the heads are parked correctly? I don't want to damage my hard disks. Best regards
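SMART actually keeps separate counters for commanded parking versus emergency retracts, which gives a way to answer this without guessing from the noise (assuming smartmontools is installed; the attribute names below are the usual WD ones, other vendors differ):

```shell
# On the box:
#   sudo smartctl -A /dev/sda | grep -Ei 'Load_Cycle|Power-Off_Retract'
# Load_Cycle_Count rising on a commanded park is normal.
# Power-Off_Retract_Count rising on every *clean* shutdown suggests power is
# being cut before the heads park, which is the case worth worrying about.
```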
  4. So I did the upgrade to OMV6 (and the associated kernel update, Linux 5.15.93-rockchip64). Now, in the mornings or after long periods of no interaction, the Helios enters an unresponsive state where none of the Docker containers respond and I can no longer SSH in. There is still some sporadic noise coming from the hard drives doing things, and the lights on the front show it's not completely frozen. I suspect it's attempting to enter some kind of suspend/hibernate state that is not appropriate for this system. Which would be the appropriate log file to go hunting in for answers on sleep/hibernate issues on Armbian? And what would be the recommended way to configure the available sleep options? Thanks.
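On a systemd-based image like Armbian the place to look is the journal rather than a single log file, and if something is initiating sleep, the targets can be masked system-wide (a hedged sketch, not a confirmed diagnosis of this hang):

```shell
# Search the current boot's journal for power-management events:
journalctl -b 2>/dev/null | grep -iE 'suspend|hibernate|sleep' || true
# If sleep is being triggered, masking the targets prevents it entirely:
#   sudo systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target
```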
  5. Dear Forum, I am using a Helios64 with Armbian Debian 10 (Buster), Linux kernel 5.15.52 and OMV 5.6.26-1. If I want further updates from OMV I need to upgrade to Debian 11, because OMV6 isn't compatible with Buster. Any recommendations on how I should upgrade? The H64 system is on an SSD. Should I try a CLI system upgrade, or install everything fresh from a Bullseye image, if one is available (where?)? Best wishes, Greg
  6. Hi, I had to change the rack 1 HDD because I got an email telling me that the disk was failing, so I ordered new disks to do the replacement. When I put the new disk in, my problem started. For information, my old disk was perfectly visible before I changed it and the LED was functional too. I have a strange problem and I'm afraid it's hardware. Currently rack 1 (HDD 1) no longer detects my disks: no LED lights up, you can clearly hear the HDD spinning up, but it is not visible in the CLI nor via the web UI. I'm using OMV 6.x with kernel 5.15.89 (current) stable, RAID 6 with 5 × 8 TB. I have tried 2 new HDDs in rack 1: KO. I have tried reinstalling Armbian & OMV: KO. Have you ever had this problem? Could it be a software problem? Screens: (only 4 HDDs, /dev/sde is missing) armbianmonitor: http://ix.io/4oQB dmesg: https://paste.yunohost.org/sefecotame.vbs lsblk ("/dev/sde" is missing):

sda           8:0    0  7.3T  0 disk
└─md127       9:127  0 21.8T  0 raid6
sdb           8:16   0  7.3T  0 disk
└─md127       9:127  0 21.8T  0 raid6
sdc           8:32   0  7.3T  0 disk
└─md127       9:127  0 21.8T  0 raid6
sdd           8:48   0  7.3T  0 disk
└─md127       9:127  0 21.8T  0 raid6

Best Regards
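A way to narrow down hardware versus software here is to watch the kernel log live while reseating the disk in rack 1; a healthy hotplug produces a "SATA link up" line and a new sdX node, while a dead power rail or backplane slot produces nothing at all (a diagnostic sketch, not a fix):

```shell
# Watch for link events while (re)inserting the disk:
#   sudo dmesg -w | grep -iE 'ata[0-9]+|sata link|sd[a-f]'
# Map each surviving disk to its ata port, to identify which port rack 1 is:
#   ls -l /sys/block/sd* | grep -o 'ata[0-9]*'
```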
  7. Hello, I am quite new to this topic, and I have found it to be quite complex. I primarily work with tiny MCUs and RTOSes, but I am enjoying the Linux space so far. In essence I would like to use the Mali GPU embedded in the RK3399 of the Helios64 for hardware transcoding. This isn't a novel idea, as shown by these sources: https://forum.armbian.com/topic/9272-development-rk3399-media-script/ https://emby.media/community/index.php?/topic/66675-36078-transcoding-rockpro64/ I am using Jellyfin, and thus FFmpeg, to decode/encode the data streams. It seems that V4L2 is supported for hardware encoding/decoding in the FFmpeg package, but in my experience it doesn't work properly with the Mesa Panfrost drivers (https://wiki.debian.org/PanfrostLima), and the ARM drivers fail to compile with the kernel headers provided by the armbian-config script. I like the idea of having hardware-accelerated transcoding, and I'm not even interested in 4K content, but my Helios64 fails to transcode h265 (HEVC) to h264 at a playable rate. Secondly, I like to have watch-togethers with my friends, and I currently have to use my power-hungry PC for this. Of course I can introduce new hardware to do this, like an arm64 laptop, but I like the all-in-one solution, and I simply can't be the only one who feels this way. Has anyone had success with hardware acceleration? Any ongoing efforts? Thanks, Victor
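For what it's worth, recent FFmpeg builds expose the kernel's V4L2 memory-to-memory encoder as h264_v4l2m2m, which sidesteps the out-of-tree Mali driver problem entirely. In this sketch decode still happens in software and only the H.264 encode is offloaded; whether an encoder node exists at all depends on the kernel config, so check first:

```shell
# Check whether the kernel exposes any V4L2 M2M video devices:
#   ls /dev/video*
#   v4l2-ctl --list-devices        # from the v4l-utils package
# If an encoder node exists, offload only the encode side:
#   ffmpeg -i input_hevc.mkv -c:v h264_v4l2m2m -b:v 4M -pix_fmt nv12 -c:a copy out.mp4
```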
  8. Hello, I have been holding onto kernel 5.10 (5.10.21-rockchip64 to be exact) for a while, as it has been pretty stable for me and also because I heard bad feedback about newer kernels on the Helios64. But with the recent discovery of the DirtyPipe exploit (the bug was introduced in Linux 5.8), I might be looking to upgrade to the latest Armbian kernel (which contains a fix for the vulnerability). So, my question to fellow Helios64 users is: have you encountered any issues with kernel 5.15, especially when using OpenMediaVault 5, Docker, MergerFS and NFS?
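A quick way to check whether a given kernel string predates the DirtyPipe fix: CVE-2022-0847 was fixed upstream in 5.10.102, 5.15.25 and 5.16.11 (treat the exact Armbian build cut-offs as something to verify separately). A small version comparison using GNU sort -V:

```shell
# Returns success if $1 is at least $2, using sort -V for version ordering.
ver_ge() { [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]; }

ver=$(uname -r | cut -d- -f1)          # e.g. 5.10.21
if ver_ge "$ver" 5.10.102; then
  echo "kernel $ver is at or above the 5.10.102 fix"
else
  echo "kernel $ver predates the 5.10.102 fix"
fi
```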
  9. Getting a free(): invalid pointer issue when installing / using python3. Could be a dpkg issue as well! It happened when I did sudo apt update && sudo apt upgrade, and hence I tried to reinstall. The issue also occurs when trying to manage the machine with Ansible. I think something might be wrong with the latest python. Board: Helios64 (I know, I know, CSC). Chipset: RK3399. Apt install logs:

Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
0 upgraded, 0 newly installed, 2 reinstalled, 0 to remove and 0 not upgraded.
Need to get 0 B/472 kB of archives.
After this operation, 0 B of additional disk space will be used.
(Reading database ... 44574 files and directories currently installed.)
Preparing to unpack .../python3-pkg-resources_59.6.0-1.2ubuntu0.22.04.1_all.deb ...
double free or corruption (out)
Aborted
dpkg: warning: old python3-pkg-resources package pre-removal script subprocess returned error exit status 134
dpkg: trying script from the new package instead ...
dpkg: ... it looks like that went OK
Unpacking python3-pkg-resources (59.6.0-1.2ubuntu0.22.04.1) over (59.6.0-1.2ubuntu0.22.04.1) ...
Preparing to unpack .../python3-setuptools_59.6.0-1.2ubuntu0.22.04.1_all.deb ...
Unpacking python3-setuptools (59.6.0-1.2ubuntu0.22.04.1) over (59.6.0-1.2ubuntu0.22.04.1) ...
Setting up python3-pkg-resources (59.6.0-1.2ubuntu0.22.04.1) ...
Setting up python3-setuptools (59.6.0-1.2ubuntu0.22.04.1) ...
free(): invalid pointer
Aborted
dpkg: error processing package python3-setuptools (--configure):
 installed python3-setuptools package post-installation script subprocess returned error exit status 134
Errors were encountered while processing:
 python3-setuptools
E: Sub-process /usr/bin/dpkg returned an error code (1)

Ansible logs:

ansible_facts: {}
failed_modules:
  ansible.legacy.setup:
    ansible_facts:
      discovered_interpreter_python: /usr/bin/python3
    failed: true
    module_stderr: |-
      free(): invalid pointer
      Aborted
    module_stdout: ''
    msg: |-
      MODULE FAILURE
      See stdout/stderr for the exact error
    rc: 134
msg: |-
  The following modules failed to execute: ansible.legacy.setup
  10. Hello, when I installed the helios64 initially I had the problem of the fans spinning at 100% (or really loud), and I solved it somehow. It was then working fine, with the fans spinning for one second when starting and shutting down the device. Now I finally updated the NAS with sudo apt update, sudo apt upgrade, sudo omv-release-upgrade, sudo reboot, and after the reboot the fans are spinning really loud again, all the time. Is anyone aware of an easy solution or a quick fix? Someone else had the same problem at https://forum.openmediavault.org/index.php?thread%2F42550-fans-go-on-full-and-stay-there-after-doing-an-update-in-omv-web-control-panel-he%2F= but it seems he either did not solve it or forgot to report back, and I'm not sure if it is related to my problem. There is also a thread talking about changing the fans, but it mentions the PWM fan control again, and on Reddit someone mentions those settings as well. I checked, and my fancontrol settings seem to be identical or pretty close. In another Reddit thread it was suggested to go to armbian-config => System => CPU, set minimum and maximum CPU speed to 1200000 and set the governor to "performance". But my issue is just "fan too loud", not "fan spinning at 100%". So, I wanted to ask you guys if you know a fix for that problem 😕
  11. Fresh build of the edge Bullseye image, with kernel: Linux helios 5.18.0-rockchip64 #trunk SMP PREEMPT Sun May 29 20:19:27 EEST 2022 aarch64 GNU/Linux. fancontrol does not work, because there are no /dev/fan devices. How can I get fancontrol back to work?
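Worth noting that fancontrol itself only needs hwmon pwm nodes; /dev/fan* entries are just udev-created symlinks. So the first question is what the 5.18 kernel actually exposes (a diagnostic sketch; the exact node names on this board are an assumption):

```shell
# Enumerate hwmon devices and their driver names:
for h in /sys/class/hwmon/hwmon*/; do
  printf '%s -> %s\n' "$h" "$(cat "${h}name" 2>/dev/null)"
done
ls /dev/fan* 2>/dev/null || echo "no /dev/fan symlinks (pwm-fan driver or udev rule missing?)"
```

If the pwm-fan hwmon entries are missing entirely, the device-tree overlay for the fans likely did not make it into the edge kernel; if they exist under different names, /etc/fancontrol just needs re-pointing (e.g. via pwmconfig).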
  12. While installing the Helios64 with the latest image, I experienced several times that the system did not come up. Now that it is fully installed with OMV enabled, I sometimes do a reboot and the system does not come up. Unfortunately I cannot see anything at the console level (on macOS). After pressing the reboot button (one or two times) the system comes up as if nothing ever happened. What can I do to have the system reboot in a stable manner? Since it will soon sit in the basement, I don't want to run down every time I see that it does not reboot properly.
  13. Last night we lost power to the whole house. I had to manually start the NAS, and when I tried to access my data there was nothing but empty shares. After checking the OpenMediaVault web UI and looking around: the file system status shows as "missing", there is nothing under RAID Management, and all the disks show properly under Storage > Disks. The NAS is set up as RAID 6 on ext4 with five 12 TB drives. For starters, I wonder why the internal battery did not gracefully shut down the Helios64 to avoid this, given that power outages are a rather common issue. Then, how do I get my file system back? After browsing some forums I noticed this info may be important:

lsblk
NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda            8:0    0 10.9T  0 disk
sdb            8:16   0 10.9T  0 disk
sdc            8:32   0 10.9T  0 disk
sdd            8:48   0 10.9T  0 disk
sde            8:64   0 10.9T  0 disk
mmcblk1      179:0    0 14.6G  0 disk
└─mmcblk1p1  179:1    0 14.4G  0 part /
mmcblk1boot0 179:32   0    4M  1 disk
mmcblk1boot1 179:64   0    4M  1 disk

cat /etc/fstab
UUID=dabb3dbf-8631-4051-a032-c0b97eb285bd / ext4 defaults,noatime,nodiratime,commit=600,errors=remount-ro 0 1
tmpfs /tmp tmpfs defaults,nosuid 0 0
# >>> [openmediavault]
/dev/disk/by-label/data /srv/dev-disk-by-label-data ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
# <<< [openmediavault]

cat /etc/mdadm/mdadm.conf
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
# definitions of existing MD arrays
ARRAY /dev/md/helios64:0 metadata=1.2 name=helios64:0 UUID=f2188013:e3cdc6dd:c8a55f0d:d0e9c602

cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sde[4](S) sdc[2](S) sda[0](S) sdd[3](S) sdb[1](S)
      58593766400 blocks super 1.2
unused devices: <none>

fsck /dev/disk/by-label/
fsck from util-linux 2.33.1
e2fsck 1.44.5 (15-Dec-2018)
fsck.ext2: No such file or directory while trying to open /dev/disk/by-label/
Possibly non-existent device?

I don't want to lose data, even though I have it backed up, so I don't want to try to fix it blindly. And given that this could happen again to others and myself, it would be very helpful to find a recovery method that others could use.
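An inactive md0 with every member flagged (S) for spare after a power loss is a classic mdadm symptom. The usual hedged first steps are strictly non-destructive: record the metadata, then let mdadm retry a normal assemble, and escalate to --force only if event counts differ slightly. Never run --create on an existing array.

```shell
# 1. Record per-disk metadata and event counters before touching anything:
#   sudo mdadm --examine /dev/sd[a-e] | tee /root/md-examine.txt
# 2. Stop the half-assembled array and retry a normal assemble:
#   sudo mdadm --stop /dev/md0
#   sudo mdadm --assemble --scan --verbose
# 3. Only if event counts differ slightly across members:
#   sudo mdadm --assemble --force /dev/md0 /dev/sd[a-e]
#   cat /proc/mdstat
```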
  14. Dear all, I'm not an expert. How can I add the kernel headers for ZFS to Linux kernel 5.15.80-rockchip64? I have run the command "apt install dkms zfs-dkms" and got the following output:

Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
dkms is already the newest version (2.8.4-3).
Suggested packages: debhelper
Recommended packages: zfs-zed zfsutils-linux linux-libc-dev
The following NEW packages will be installed: zfs-dkms
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 2297 kB of archives.
After this operation, 18.0 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://httpredir.debian.org/debian bullseye-backports/contrib arm64 zfs-dkms all 2.1.6-3~bpo11+1 [2297 kB]
Fetched 2297 kB in 0s (5427 kB/s)
Preconfiguring packages ...
Selecting previously unselected package zfs-dkms.
(Reading database ... 78864 files and directories currently installed.)
Preparing to unpack .../zfs-dkms_2.1.6-3~bpo11+1_all.deb ...
Unpacking zfs-dkms (2.1.6-3~bpo11+1) ...
Setting up zfs-dkms (2.1.6-3~bpo11+1) ...
Loading new zfs-2.1.6 DKMS files...
Building for 5.15.80-rockchip64
Building initial module for 5.15.80-rockchip64
Done.
zavl.ko.xz, znvpair.ko.xz, zunicode.ko.xz, zcommon.ko.xz, zfs.ko.xz, icp.ko.xz, zlua.ko.xz, spl.ko.xz, zzstd.ko.xz: Running module version sanity check.
 - Original module: no original module exists within this kernel
 - Installation: installing to /lib/modules/5.15.80-rockchip64/updates/dkms/
depmod......
DKMS: install completed.
Processing triggers for initramfs-tools (0.140) ...
update-initramfs: Generating /boot/initrd.img-5.15.80-rockchip64
update-initramfs: Converting to u-boot format

If I try to execute "apt install zfsutils-linux" I get:

Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
 zfsutils-linux : Depends: libnvpair3linux (= 2.1.6-3~bpo11+1) but 2.1.2-1~20.04.york0 is to be installed
                  Depends: libuutil3linux (= 2.1.6-3~bpo11+1) but 2.1.2-1~20.04.york0 is to be installed
                  Depends: libzfs4linux (= 2.1.6-3~bpo11+1) but 2.1.2-1~20.04.york0 is to be installed
                  Depends: libzpool5linux (= 2.1.6-3~bpo11+1) but 2.1.2-1~20.04.york0 is to be installed
E: Unable to correct problems, you have held broken packages.

Moreover, here https://forum.openmediavault.org/index.php?thread/45376-zfs-version/ ryecoaaron told me "you really need to make sure the kernel headers are installed first. Then install zfs-dkms. Then install the plugin." How can I install the prerequisites so that afterwards I can also install the OMV6 ZFS plugin? P.S. 1: should the OMV6 zfs-armhf plugin be installed only on the 32-bit ARM instruction set? P.S. 2: to the mods: if the section or thread is wrong, please move this to the right section.
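The unmet-dependency output above looks like a mixed-repository problem: zfs-dkms came from bullseye-backports, while apt wants to pull the libnvpair/libuutil/libzfs/libzpool libraries from an older (york) repository. A hedged fix is to install the whole ZFS userspace explicitly from backports so all versions match, after confirming where each candidate comes from:

```shell
# See which repo provides each candidate version:
#   apt-cache policy zfsutils-linux libnvpair3linux libzfs4linux
# Pull the whole stack from backports so versions agree:
#   sudo apt -t bullseye-backports install zfsutils-linux zfs-zed
```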
  15. Hello all, I get the message "qemu kvm_arm_vcpu_init failed: Invalid argument", which I traced to the source code at https://git.qemu.org/?p=qemu.git;a=blob;f=cpus.c;hb=refs/heads/stable-2.11#l1122 and https://github.com/qemu/qemu/blob/master/accel/kvm/kvm-all.c row 444. I don't know which vCPU feature is expected ...

# cat /proc/cpuinfo
processor       : 0
BogoMIPS        : 48.00
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant     : 0x0
CPU part        : 0xd03
CPU revision    : 4
....

Is it possible, now or later, to use virtualization? Regards
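kvm_arm_vcpu_init failing with EINVAL on arm64 hosts is commonly a CPU-model mismatch: with KVM, the guest vCPU must match the host core, so -cpu host (with the virt machine) is effectively required. On a big.LITTLE SoC like the RK3399 (A72 + A53 clusters), pinning QEMU to one cluster can also matter, since the vCPU is initialised against whichever core the thread lands on. A hedged sketch, not a confirmed fix:

```shell
# Verify the kernel exposes KVM at all:
#   ls -l /dev/kvm
# Launch pinned to one cluster, with the host CPU model:
#   taskset -c 0-3 qemu-system-aarch64 -M virt -enable-kvm -cpu host \
#       -smp 2 -m 1024 -nographic -kernel <Image> -append 'console=ttyAMA0'
```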
  16. Hi there ... Not sure if anyone has an idea here, but I have the following topic: I have a helios64, equipped with 3 × 2.5" disk drives, running Debian 10 with kernel 5.10.43. Usually the system runs stable, even for days (24/7). But every now and then the disks "fail": during normal operation the disks suddenly turn off, Linux reports "SATA link down", and the disks are off, as if they were removed. If I reboot, I usually hear only some clicks; I assume the disks try to spin up but fail due to not enough power (?). To recover from that situation, I usually disconnect power (incl. the internal UPS), try a few times, and sooner or later the disks power up successfully again. Then the system runs again for a while without any issues. I already tried re-cabling everything, making sure the cables are properly connected, but this does not seem to help. I suspect there is a hardware issue with the SATA power rails (both show the same effect). For that reason I'm looking for the schematics, to trace this a little deeper ... anyone have an idea if they are available somewhere? Thanks!
  17. Dear all, I'm looking for some help with my Helios64 NAS: since I updated the kernel to 5.15, zfs-dkms won't work. Which is expected, since the most recent version of zfs-dkms available for the Helios64 is 2.0.3-1~bpo10+1 (tested on 1st April 2022), and this version is only compatible with kernels from 3.10 to 5.10. The first solution I thought of was to downgrade to 5.10 through the armbian-config tool; however, when I try to install the linux-headers (through armbian-config) I get the headers for 5.15 (same issue with apt install linux-headers-current-rockchip64), while zfs-dkms looks for 5.10:

Preparing to unpack .../zfs-dkms_2.0.3-1~bpo10+1_all.deb ...
Unpacking zfs-dkms (2.0.3-1~bpo10+1) ...
Setting up zfs-dkms (2.0.3-1~bpo10+1) ...
Loading new zfs-2.0.3 DKMS files...
It is likely that 5.10.63-rockchip64 belongs to a chroot's host
Building for 5.15.25-rockchip64
Building initial module for 5.15.25-rockchip64
configure: error:
    *** None of the expected "capability" interfaces were detected.
    *** This may be because your kernel version is newer than what is
    *** supported, or you are using a patched custom kernel with
    *** incompatible modifications.
    ***
    *** ZFS Version: zfs-2.0.3-1~bpo10+1
    *** Compatible Kernels: 3.10 - 5.10
Error! Bad return status for module build on kernel: 5.15.25-rockchip64 (aarch64)
Consult /var/lib/dkms/zfs/2.0.3/build/make.log for more information.

/var/lib/dkms/zfs/2.0.3/build/make.log:
DKMS make.log for zfs-2.0.3 for kernel 5.15.25-rockchip64 (aarch64)
Fri 01 Apr 2022 05:38:22 PM UTC
make: *** No targets specified and no makefile found. Stop.

I then tried to perform a fresh installation of the system by downloading Armbian_21.08.2_Helios64_buster_current_5.10.63, and then it got worse, since I can't download the headers through armbian-config (nothing happens), and apt install linux-headers-current-rockchip64 keeps installing the sources for 5.15 ('/usr/src/linux-headers-5.15.25-rockchip64').
So the problem as I see it is either to:
* get the linux-headers for the previous kernel, or
* obtain a zfs-dkms version compatible with 5.15.
Thank you in advance for your help. PS: I am aware of the docker alternative, but I prefer to use zfs-dkms.
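To stay on a 5.10 kernel with matching headers, the usual Armbian approach is to install the image, headers, and dtb packages at an explicitly pinned version and then apt-mark hold them. The 21.08.2 version string below is taken from the image name above and is only an example; check what your mirror still carries first:

```shell
# See which versions the mirror still offers:
#   apt-cache madison linux-image-current-rockchip64 linux-headers-current-rockchip64
# Pin and hold (version string is an example from the post above):
#   sudo apt install linux-image-current-rockchip64=21.08.2 \
#                    linux-headers-current-rockchip64=21.08.2 \
#                    linux-dtb-current-rockchip64=21.08.2
#   sudo apt-mark hold linux-image-current-rockchip64 \
#                      linux-headers-current-rockchip64 linux-dtb-current-rockchip64
```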
  18. As stated in my comments here (but it seems no one is reading that blog post's comments anymore ...): https://blog.kobol.io/2020/10/27/helios64-software-issue/ - With 5.8.17 I get kernel panics. - With 5.8.14 I get freezes. So what should I do to get a stable situation?! It seems to happen "under load" (one freeze while the RAID array was being built, and several (with the .14 or .17 kernels) while files were being copied over the 1 Gb/s network interface). - System newly installed and running on a fresh SD card. - 5 × 3.5" WD HDDs plugged in, RAID5 mdadm array (so no M.2, ...) - Nothing else done on the NAS (nothing installed on the OS, no processes in memory besides RAID + rsync or scp file copying through SSH).
  19. I accidentally ripped off a capacitor from the back of the Helios64. This missing capacitor is right below the SATA connector. I searched for the schematic of the board without any success. Does anyone happen to know the value and size of this capacitor?
  20. I'm sick of having to play the reset-it-until-it-boots-then-it-reboots-later game. Does anyone know of a similar motherboard I can use in place of the helios64 with minimal extra messing around? I totally understand that the front and back panels will be a problem to set up; I just don't want to have to hack the main case part, and I need support for the five SATA ports.
  21. I'm trying to understand the role of zram and having some difficulties with it. I've looked in several places and my understanding is this: - It's a block device in RAM that uses compression. - It functions similarly to swap. I notice that mine is set to about 2 GB:

wurmfood@helios64:~$ cat /proc/swaps
Filename    Type       Size     Used  Priority
/dev/zram0  partition  1945084  0     5

My questions: 1. Does that mean 2 GB of the device's RAM is used for the zram swap device? 2. I can understand this on devices with slower storage, but would it be reasonable to disable this and instead use swap on an SSD?
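For inspecting what zram actually costs, zramctl shows compressed versus uncompressed sizes: the ~2 GB figure is the uncompressed cap, not RAM reserved up front; memory is only consumed as pages are actually swapped out and compressed. On Armbian the device is set up by the armbian-zram-config service, so that is also the knob for experiments (a sketch; back up before changing swap layout):

```shell
#   zramctl              # DISKSIZE = uncompressed cap, COMPR = real RAM in use
#   swapon --show
# To experiment with disabling it in favour of SSD swap (takes effect on reboot;
# settings also live in /etc/default/armbian-zram-config):
#   sudo systemctl disable armbian-zram-config && sudo reboot
```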
  22. Hello, I have recently bought a Helios64. I have assembled it and done all the steps written on the wiki here https://wiki.kobol.io/helios64/kit/ and here https://wiki.kobol.io/helios64/install/sdcard/ but when I try to follow these steps https://wiki.kobol.io/helios64/install/first-start/ the USB-to-serial bridge is not detected by my notebooks (I have tried with notebook 1 (Windows 10 x64, with drivers downloaded and installed with the exe setup) and notebook 2 (x86 CPU, with latest-kernel Ubuntu and Windows XP)). I have removed the backplane cover; jumpers P10, P11 and P13 (see the jumper description here https://wiki.kobol.io/helios64/jumper/ ) are all open. So my question is: what is the problem? P.S. 1: When I followed this step https://wiki.kobol.io/helios64/install/sdcard/ the system booted automatically as soon as I plugged in the power connector, without my pressing the power button (in the link, at step 4, it says "Don't power-up the Power Adapter yet"). Could something have happened that causes the non-detection on my notebooks? P.S. 2: Can the USB-to-serial bridge be detected by a notebook while the system is turned off, with only the power connector plugged in (with only LED1 lit, https://wiki.kobol.io/helios64/led/)? I ask because here https://www.youtube.com/watch?v=58coL23Bzzw the guy first turns on the Helios at 2:05, then when it is connected over USB and he executes the picocom command the Helios is off at 2:08, and finally he turns the device on at 2:10. P.S. 3: When I realized that the USB-to-serial bridge wasn't detected by my notebook, I turned the device off from the power button. Sometimes it is enough to press the button for 1 second and the device turns off; other times I press the button for 5 seconds, but in the latter case I hear a whistle... can something be damaged in the second scenario? P.S. 4: If I must buy jumper shorting blocks, is the 2.54 mm spec right? Hoping that somebody can help me. Best regards
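For reference, the Helios64 serial console runs at 1,500,000 baud, and on a Linux notebook the bridge should enumerate as a ttyUSB device as soon as the USB cable is plugged in (treat "even with the board powered off" as an assumption to test; the bridge is on the board's side of the cable). That gives a quick way to separate a driver problem from a board problem:

```shell
# With the USB-C cable connected to the (Linux) notebook:
#   lsusb                # the serial bridge should enumerate here
#   dmesg | tail         # look for a new ttyUSB registration
#   ls /dev/ttyUSB*
# Then attach to the console:
#   sudo picocom -b 1500000 /dev/ttyUSB0
```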
  23. Hello. We have been using a Kobol Helios64 with OMV for 4 months, with no problems at all. Just yesterday, wanting to change an Ethernet cable, we correctly shut down the Helios64 NAS, and now it is impossible to start it again. We just have the right-hand system light; the left LED is not blinking, and there is no information on the USB-C terminal. For the "recovery" procedure, we press the recovery button and then press the power button, and the system LED does not light up. What can we do? Format the system and reinstall OMV with all the parameters... not really great, is it? Thank you for any help. Regards.
  24. Feels like this should be on here so everyone knows https://blog.kobol.io/2021/08/25/we-are-pulling-the-plug/
  25. I've been having an issue over the last couple of months where my armbianEnv.txt file is being overwritten, causing the Helios to hang in the boot process. Using some samples I've found on these forums, I was able to re-create my file and get the Helios back up and running, and I have since stored a copy so that when it happens I can just copy, paste, and be back up and running in no time. To date, this has happened to me 4 times. I'm not worried about having to go through this at this point, but I was curious whether anyone else has seen this issue. I've saved a couple of samples of what was written into the armbianEnv.txt file (it looks like logrotate configuration):

/var/log/armbian-hardware-monitor.log {
  rotate 12
  weekly
  compress
  missi

/var/log/alternatives.log {
  monthly
  rotate 12
  compress
  delaycompress
  missingok
  notifempty
  create 644 root root
}

I'm running Buster 5.10.21. I have 5 × 14 TB EXOS drives in RAID 6, and OMV is the only thing installed.
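Until the root cause is found, keeping a checksummed known-good copy makes the corruption trivial to detect and revert. The sketch below demonstrates on temp files; on the box the real path is /boot/armbianEnv.txt, and the contents shown are placeholders, not a recommended config:

```shell
# Stand-in for the real /boot/armbianEnv.txt (placeholder contents):
printf 'verbosity=1\nrootdev=UUID=xxxx\nrootfstype=ext4\n' > /tmp/armbianEnv.txt
cp /tmp/armbianEnv.txt /tmp/armbianEnv.txt.good

# Later (e.g. from cron): detect silent corruption and restore the good copy.
if ! cmp -s /tmp/armbianEnv.txt /tmp/armbianEnv.txt.good; then
  cp /tmp/armbianEnv.txt.good /tmp/armbianEnv.txt
fi
cmp -s /tmp/armbianEnv.txt /tmp/armbianEnv.txt.good && echo "armbianEnv.txt matches known-good copy"
```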