Showing results for tags 'helios64'.

  1. That newsletter indicated that the promised USB-C to HDMI cables were still delayed. Is there an update on that? Thanks!
  2. Hello everyone, I am experiencing network problems with the Helios64 and the latest Debian Buster image. I cannot set up a static IP address after installing OMV5; moreover, it crashes during the installation. I double-checked this on a fresh image without any changes, installing OMV immediately via softy, and the result is the same. After the crash OMV is in fact reachable, but the network configuration is broken and a static setup does not work through the OMV UI or through armbian-config. The boot also takes about 2-5 minutes for whatever reason. armbianmonitor: http://ix.io/2PPJ
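     Not a fix for the OMV crash itself, but in case it helps to get the box reachable again, here is a minimal sketch for setting a static IP from the console with NetworkManager (Armbian's default). The connection name "Wired connection 1" and the addresses are placeholders, and if OMV has taken over the interface with netplan/systemd-networkd this will not apply.

        # List connections first and substitute your own name and addresses.
        nmcli con show
        nmcli con mod "Wired connection 1" ipv4.method manual \
            ipv4.addresses 192.168.1.10/24 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1
        nmcli con up "Wired connection 1"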
  3. I have installed Armbian 21.02 Ubuntu 20.04 Focal with Linux kernel 5.10.16 and OpenZFS 2.0.1-1 on my Helios64 and created a three-disk encrypted RAIDZ pool with the following command:

        sudo zpool create -f -o ashift=12 -O compression=lz4 -O atime=off -O acltype=posixacl \
            -O xattr=sa -O encryption=aes-256-gcm -O keylocation=prompt -O keyformat=passphrase \
            tank raidz1 /dev/disk/by-id/foobar1 /dev/disk/by-id/foobar2 /dev/disk/by-id/foobar3

     Then I ran a single sequential read benchmark with fio on /tank:

        cd /tank
        sudo fio --name=single-sequential-read --ioengine=posixaio --rw=read --bs=1m --size=16g \
            --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1

     This results in a speed of 62 MB/s, much lower than I would expect. I created exactly the same pool with three disks of the exact same model (same batch) on Ubuntu 20.04 with Linux kernel 5.4.0 and OpenZFS 0.8.3 on an HPE ProLiant MicroServer Gen10, and the same benchmark reaches 232 MB/s. Has anyone else seen this low encrypted ZFS performance on the Helios64? Is it because of the encryption? Looking at the output of top during the benchmark, the CPU seems heavily loaded, while on the HPE machine CPU usage is much lower.
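     One way to narrow this down (a sketch, not something verified on a Helios64): check which AES/GCM implementation the ZFS crypto module is using, since on arm64 it may only have the generic C code paths, and rerun the same fio job against an unencrypted dataset for comparison. The icp sysfs paths and the "tank/plain" dataset name are assumptions and may differ between OpenZFS versions.

        cat /sys/module/icp/parameters/icp_aes_impl   # active AES implementation shown in [brackets]
        cat /sys/module/icp/parameters/icp_gcm_impl   # active GCM implementation shown in [brackets]

        # Repeat the benchmark on an unencrypted dataset in the same pool; if
        # your OpenZFS build refuses an unencrypted child under an encrypted
        # parent, use a separate test pool instead.
        sudo zfs create -o encryption=off tank/plain
        cd /tank/plain
        sudo fio --name=plain-seq-read --ioengine=posixaio --rw=read --bs=1m --size=16g \
            --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1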
  4. Hello, my NAS is too noisy. Do you know whether it is possible to replace the fans with quieter ones? Regards.
  5. Last night we lost power to the whole house. I had to start the NAS manually, and when I tried to access my data there was nothing but empty shares. After checking the OpenMediaVault web UI, the file system status shows as "missing", there is nothing under RAID Management, and all the disks show up properly under Storage > Disks. The NAS is set up as RAID 6 on ext4 with five 12 TB drives. For starters, I wonder why the internal battery did not gracefully shut down the Helios64 to avoid this, given that power outages are a rather common issue. Then, how do I get my file system back? After browsing some forums I gathered what seems to be the relevant information:

        lsblk
        NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
        sda            8:0    0 10.9T  0 disk
        sdb            8:16   0 10.9T  0 disk
        sdc            8:32   0 10.9T  0 disk
        sdd            8:48   0 10.9T  0 disk
        sde            8:64   0 10.9T  0 disk
        mmcblk1      179:0    0 14.6G  0 disk
        └─mmcblk1p1  179:1    0 14.4G  0 part /
        mmcblk1boot0 179:32   0    4M  1 disk
        mmcblk1boot1 179:64   0    4M  1 disk

        cat /etc/fstab
        UUID=dabb3dbf-8631-4051-a032-c0b97eb285bd / ext4 defaults,noatime,nodiratime,commit=600,errors=remount-ro 0 1
        tmpfs /tmp tmpfs defaults,nosuid 0 0
        # >>> [openmediavault]
        /dev/disk/by-label/data /srv/dev-disk-by-label-data ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
        # <<< [openmediavault]

        cat /etc/mdadm/mdadm.conf
        # This file is auto-generated by openmediavault (https://www.openmediavault.org)
        # WARNING: Do not edit this file, your changes will get lost.
        #
        # mdadm.conf
        #
        # Please refer to mdadm.conf(5) for information about this file.
        #
        # by default, scan all partitions (/proc/partitions) for MD superblocks.
        # alternatively, specify devices to scan, using wildcards if desired.
        # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
        # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
        # used if no RAID devices are configured.
        DEVICE partitions
        # auto-create devices with Debian standard permissions
        CREATE owner=root group=disk mode=0660 auto=yes
        # automatically tag new arrays as belonging to the local system
        HOMEHOST <system>
        # instruct the monitoring daemon where to send mail alerts
        # definitions of existing MD arrays
        ARRAY /dev/md/helios64:0 metadata=1.2 name=helios64:0 UUID=f2188013:e3cdc6dd:c8a55f0d:d0e9c602

        cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : inactive sde[4](S) sdc[2](S) sda[0](S) sdd[3](S) sdb[1](S)
              58593766400 blocks super 1.2
        unused devices: <none>

        fsck /dev/disk/by-label/
        fsck from util-linux 2.33.1
        e2fsck 1.44.5 (15-Dec-2018)
        fsck.ext2: No such file or directory while trying to open /dev/disk/by-label/
        Possibly non-existent device?

     I don't want to lose data, even though I have it backed up, so I don't want to try to fix this blindly. And given that this could happen again, to others and to myself, it would be very helpful to find a recovery method that others could use.
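     For what it's worth, with all five members present but md0 inactive and every disk flagged as a spare (S), the usual first steps are read-only. A minimal sketch, assuming the array is the one named in mdadm.conf; nothing here writes to the disks except the assemble itself, and --force is deliberately left commented out.

        # Read-only: dump each member's md superblock and compare the event counts.
        sudo mdadm --examine /dev/sd[abcde]

        # Stop the half-assembled inactive array, then try a normal reassembly.
        sudo mdadm --stop /dev/md0
        sudo mdadm --assemble --scan --verbose

        # Only if a plain assemble refuses and the event counts differ slightly:
        # sudo mdadm --assemble --force /dev/md0 /dev/sd[abcde]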
  6. Hi, everyone. The Kobol site is kind of difficult to navigate if you're looking for relevant information, but I came to the conclusion that this is the right place for support; forgive me if I'm mistaken. I was wondering if someone could help me interpret these boot logs from the serial terminal when trying to boot my Helios64 from the eMMC. Following the instructions on Kobol's site, I flashed their u-boot image to an SD card, booted the Helios64 off that SD card, fetched the latest Armbian image, unpacked it, and flashed it from my laptop to the Helios64 via USB-C to what I presume is the correct interface. Could it be that I messed up? Here's a quick example of the weird behaviour when I take the SD card out and turn on the machine: basically, it keeps repeating this dialogue but with different files. So, for the one above, it says "Retrieving file: pxelinux.cfg/01-64-62-66-d0-03-42", but then it goes on to say "Retrieving file: pxelinux.cfg/C0A8018", and so on. It keeps doing that for a bunch of files, after which there's no boot screen or login prompt or anything. I haven't managed to SSH into it either, but it may be my own mistake that I can't figure out how to do so. If I unplug the Ethernet, it looks like this: I'm really dreading the idea that I have to take apart the chassis and jump the board so that I can boot from the SD card. I don't understand why the SD card wouldn't come before the eMMC in the Helios64's boot order; that seems absurd to me. Anyway, here's at least part of the log. I didn't feel like sitting around to let it go through all the files:
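     As an alternative to flashing over USB-C (a sketch, not the procedure from Kobol's guide): boot Armbian from the SD card and write the system to the eMMC from there, either with armbian-config's install/transfer option under the System menu or by writing the unpacked image directly. The /dev/mmcblk1 target and the image filename pattern are assumptions; the eMMC is the mmcblk device that also exposes mmcblkXboot0/mmcblkXboot1.

        # Double-check the target device with lsblk first - dd will overwrite it.
        lsblk
        sudo dd if=Armbian_*_Helios64_*.img of=/dev/mmcblk1 bs=1M status=progress conv=fsync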
  7. grek

    ZFS on Helios64

    Hey, I think a dedicated topic for ZFS on the Helios64 is needed, since many people want to use it. As we know, many of us are playing with dev builds for the Helios64, and it is not easy to work with ZFS there. I wrote a few scripts that may help someone. Thanks @jbergler and @ShadowDance, I used your ideas to complete this. One problem is playing with OMV and its ZFS plugin: its dependencies replace our freshly built version of ZFS. I tested it; everything survived, but ZFS is downgraded to the latest version from the repo, for example:

        root@helios64:~# zfs --version
        zfs-0.8.4-2~bpo10+1
        zfs-kmod-2.0.0-rc6
        root@helios64:~# uname -a
        Linux helios64 5.9.11-rockchip64 #trunk.1 SMP PREEMPT Thu Nov 26 01:32:45 CET 2020 aarch64 GNU/Linux

    I tested this with kernel 5.9.10 and with today's clean install on 5.9.11.

    First we need Docker installed (armbian-config -> software -> softy -> docker).

    Create a dedicated directory with a Dockerfile (we customize the ubuntu:bionic image with the required libraries and gcc 10; in later builds we can skip this step and just customize and run the build-zfs.sh script):

        mkdir zfs-builder
        cd zfs-builder
        vi Dockerfile

        FROM ubuntu:bionic
        RUN apt update; \
            apt install build-essential autoconf automake bison flex libtool gawk alien fakeroot dkms libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev libattr1-dev libelf-dev python3 python3-dev python3-setuptools python3-cffi libffi-dev -y; \
            apt install software-properties-common -y; \
            add-apt-repository ppa:ubuntu-toolchain-r/test; \
            apt install gcc-10 g++-10 -y; \
            update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 10; \
            update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-10 10

    Build the Docker image for building purposes:

        docker build --tag zfs-build-ubuntu-bionic:0.1 .

    Create the script for the ZFS build:

        vi build-zfs.sh

        #!/bin/bash
        # define the zfs version
        zfsver="zfs-2.0.0-rc6"
        # create the build directory
        mkdir /tmp/zfs-builds && cd "$_"
        rm -rf /tmp/zfs-builds/inside_zfs.sh
        apt-get download linux-headers-current-rockchip64
        git clone -b $zfsver https://github.com/openzfs/zfs.git $zfsver-$(uname -r)
        # create the file to execute inside the container
        echo "creating file to execute it inside container"
        cat > /tmp/zfs-builds/inside_zfs.sh <<EOF
        #!/bin/bash
        cd scratch/
        dpkg -i linux-headers-current-*.deb
        zfsver="zfs-2.0.0-rc6"
        cd "/scratch/$zfsver-$(uname -r)"
        sh autogen.sh
        ./configure
        make -s -j$(nproc)
        make deb
        mkdir "/scratch/deb-$zfsver-$(uname -r)"
        cp *.deb "/scratch/deb-$zfsver-$(uname -r)"
        rm -rf "/scratch/$zfsver-$(uname -r)"
        exit
        EOF
        chmod +x /tmp/zfs-builds/inside_zfs.sh
        echo ""
        echo "####################"
        echo "starting container.."
        echo "####################"
        echo ""
        docker run --rm -it -v /tmp/zfs-builds:/scratch zfs-build-ubuntu-bionic:0.1 /bin/bash /scratch/inside_zfs.sh
        # Cleanup packages (if installed).
        modprobe -r zfs zunicode zzstd zlua zcommon znvpair zavl icp spl
        apt remove --yes zfsutils-linux zfs-zed zfs-initramfs
        apt autoremove --yes
        dpkg -i "/tmp/zfs-builds/deb-$zfsver-$(uname -r)"/kmod-zfs-$(uname -r)*.deb
        dpkg -i "/tmp/zfs-builds/deb-$zfsver-$(uname -r)"/{libnvpair1,libuutil1,libzfs2,libzpool2,python3-pyzfs,zfs}_*.deb
        echo ""
        echo "###################"
        echo "building complete"
        echo "###################"
        echo ""

    Then make the script executable and run it inside screen so the build log is captured:

        chmod +x build-zfs.sh
        screen -L -Logfile buildlog.txt ./build-zfs.sh
  8. I have run into this issue multiple times. The process flow is this:
     1. Flash the image "Armbian_20.11.6_Helios64_buster_current_5.9.14.img" to a micro SD card.
     2. Run through the config (installing OMV with SAMBA).
     3. Kick over to OMV to configure the share/media so it is available on my network.
     4. It runs for some time (this last install was running for maybe a few weeks before it triggered a kernel panic).
     5. Power off. Power on.
     6. The board boots but hangs on "Starting kernel ...".
     Is there some trick I am missing to get this thing to actually boot/reboot correctly? Troubleshooting info: no jumpers are shorted, and I had a similar issue with a previous image as well.
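     When it hangs at "Starting kernel ...", the rest of the story usually only shows up on the serial console. A minimal sketch for capturing it from another machine over the Helios64's USB-C serial port; the /dev/ttyUSB0 name depends on your host, and 1500000 baud is the usual Rockchip/Helios64 console rate.

        # Capture the full boot log to a file so the point of the hang is visible.
        sudo apt install picocom
        sudo picocom --baud 1500000 --logfile boot.log /dev/ttyUSB0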
  9. Hello everybody, I'm experiencing a similar problem. Sometimes my WD Red 12 TB HDD is not recognized in the slot it's in; when I move it to any other slot it works again. Nevertheless, the problem doesn't seem to apply to one specific slot only, but changes from time to time. EDIT: The LED of that slot doesn't turn on, yet the drive spins up and runs. Does anybody experience the same problem, or know of any fix? Thank you in advance. Best regards.
  10. Has anyone tested DAS mode performance with Capture One? Or could someone explain to me how it works? Also, would C1 show the DAS mode and NAS mode as separate volumes, or is it smart enough to show them as the same one? Background: Capture One doesn't work well with network drives. Editing images, moving raw files, even saving catalog backups is dramatically slower on a NAS than on a DAS or internal drive, no matter how fast the drive or network is. It's a long-standing bug that they're always quick to say they aren't prioritizing. I've looked at other NASes that do a DAS mode by running a 10GbE connection over Thunderbolt, and that's a great solution in every situation except working with C1. Has anyone tried this?
  11. Hello, I've bought 3 new identical drives, put 2 of them in the Helios64 and 1 in my desktop. I ran smartctl long and short self-tests on both Helios64 drives simultaneously, but all of them were aborted/interrupted by host. `# /usr/sbin/smartctl -a /dev/sda` output: http://ix.io/2OLt On my desktop the smartctl long test succeeds without error, and all 3 drives received the same care (I just bought and installed them), so it is unlikely the drives are physically damaged. The complete diagnostic log: http://ix.io/2OBr Does anyone have an idea why my SMART extended tests are being cancelled? Note: I've even tried the trick of running a background task to keep the drives out of some vendor sleep mode: `while true; do dd if=/dev/sda of=/dev/null count=1; sleep 60; done`
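      "Aborted/interrupted by host" often means the drive was reset or spun down mid-test rather than that it is failing. A sketch of things to try (not from the thread), assuming the drive supports APM/standby control; behaviour varies by firmware.

        sudo hdparm -B /dev/sda          # current APM level, if supported
        sudo hdparm -B 254 /dev/sda      # effectively disable aggressive power management
        sudo hdparm -S 0 /dev/sda        # disable the standby (spin-down) timer
        sudo smartctl -t long /dev/sda   # then restart the extended self-test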
  12. Hi everyone, I have installed openmediavault and ZFS. Since yesterday I get this error:

        Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; zfs list -H -t snapshot -o name,used,refer 2>&1' with exit code '1': The ZFS modules are not loaded. Try running '/sbin/modprobe zfs' as root to load them.

      Clicking "show details" gives:

        Error #0: OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; zfs list -H -t snapshot -o name,used,refer 2>&1' with exit code '1': The ZFS modules are not loaded. Try running '/sbin/modprobe zfs' as root to load them. in /usr/share/php/openmediavault/system/process.inc:182
        Stack trace:
        #0 /usr/share/omvzfs/Utils.php(438): OMV\System\Process->execute(Array, 1)
        #1 /usr/share/omvzfs/Utils.php(394): OMVModuleZFSUtil::exec('zfs list -H -t ...', Array, 1)
        #2 /usr/share/openmediavault/engined/rpc/zfs.inc(194): OMVModuleZFSUtil::getAllSnapshots()
        #3 [internal function]: OMVRpcServiceZFS->getAllSnapshots(Array, Array)
        #4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
        #5 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('getAllSnapshots', Array, Array)
        #6 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('ZFS', 'getAllSnapshots', Array, Array, 1)
        #7 {main}

      I can also no longer access the folders on the disks. Can someone help me?
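      "The ZFS modules are not loaded" after an update usually means the kernel module was not rebuilt for the running kernel. A sketch, assuming ZFS was installed as zfs-dkms from the repo (adjust if you built your own packages as in the ZFS topic above):

        uname -r
        dkms status                          # is a zfs module built for this kernel?
        sudo apt install --reinstall zfs-dkms linux-headers-current-rockchip64
        sudo modprobe zfs
        sudo zpool import -a                 # re-import pools once the module loads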
  13. Hello, I installed OMV on my Helios64 and created a share. The device stays online and works well for hours, and then at some point eth0 goes offline and no longer communicates. The only way to restore connectivity is a reboot. I looked at the logs and don't see anything jumping out at me as to why this happens, but I could very easily be missing something, or the log didn't capture the event. I have not yet tried to console in through the USB-C port when this occurs, but will try next time. Has anyone else seen this issue or fixed it?
  14. CURRENT branch (Linux kernel 5.10)

      Feature                    | Support Status    | Remarks
      Shutdown                   | OK                |
      Reboot                     | OK                |
      Suspend to RAM             | Not Supported     | USB host controller refuses to enter suspend mode
      2.5G Ethernet (2.5G speed) | OK                | TX offload disabled by default, can be enabled for better performance
      2.5G Ethernet (1G speed)   | Performance issue | Requires hardware fix.
      Main Power/UPS status      | OK                | Status can be read from sysfs
      Battery Charging           | OK                | Status can be read from sysfs
      UPS configuration          | OK                | Auto shutdown after 10 min of power loss.
      USB Type-C - Host          | OK                | Use the dwc3-0-host overlay. Enable the P13 jumper to be able to use a USB 2.0 device.
      USB Type-C - Gadget        | OK                | Use the overlay to enable USB device mode; refer to this link.
      USB Type-C - DisplayPort   | OK                |
      eMMC Boot                  | OK                |
      SPI Boot                   | Not Supported     |
      Recovery Button            | OK                | To use maskrom, jumper P13 must be enabled.
      USB LED                    | Not Supported     | The usbport LED trigger does not support activity and only works with USB 2.0 devices.
      LAN LED                    | OK                | Defaults to eth0 activity.
      Wake on LAN                | Not Supported     | Suspend still has issues.

      Armbian 21.02
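      For the Type-C overlays referenced above, this is roughly how device-tree overlays are enabled on Armbian; the overlay name comes from the table, the rest is the standard /boot/armbianEnv.txt mechanism.

        # Add the overlay to /boot/armbianEnv.txt and reboot. If an overlays=
        # line already exists, append " dwc3-0-host" to that line instead of
        # adding a second one.
        echo 'overlays=dwc3-0-host' | sudo tee -a /boot/armbianEnv.txt
        sudo reboot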
  15. Hi guys, my Helios64 keeps going offline; a red light comes on on the front panel, but I am not sure what it means exactly. This happens once every few weeks, and the system is fine after a reboot. What does this red light mean exactly? Is there anything I can do to prevent these breakdowns? Cheers,
  16. I've had my Helios64 up and running for a few months now off the SD card, but there was a message (can't remember where) not to make that install your 'final' setup, as things were still in development. I've been keeping track of the 'what works and what doesn't' list, but as there have been no updates in 6 weeks, is the general consensus that we've reached stability and it's time to transfer to the eMMC?
  17. Hi. My Helios64 always crashes with this method:

      [environment]
      1 TB M.2 SATA SSD for the system
      16 TB Seagate Exos as a data drive
      18 TB WD Ultrastar DC550 as a data drive
      Linux helios64 5.9.14-rockchip64 #20.11.4 SMP PREEMPT Tue Dec 15 08:52:20 CET 2020 aarch64 aarch64 aarch64 GNU/Linux
      about 620,000 files on the data drive

      [method 1]
      # sync
      # sysctl -w vm.drop_caches=3
      # ls -CFR [datadir] > [tmp dir]
      # ls -1lFR [datadir] > [tmp dir]
      # zip -j [tmpdir] filelist.zip
      # sync
      >>>>>>> crash (blinking red LED)

      [method 2]
      First step: via Samba, on Windows, view the properties of all 620,000 files.
      Second step: over SSH, "sysctl -w vm.drop_caches=3"
      >>>>>> crash (blinking red LED)
  18. Has anybody got DAS mode working using USB-C? Using: Armbian 20.08.21 Buster with Linux 5.8.17-rockchip64. I was not certain, but I thought I read that it was supported. I was hoping to use Windows 10 via USB 3.1 -> Helios64 USB-C to copy a large video library. Note: using UART via COM3 works as expected. Note: in Windows I do remember seeing some USB error about loading a USB driver; would it be related to some missing driver? If there is any successful implementation of the USB DAS interface, I would be happy to hear from you.
  19. Hello everybody, is it possible to use the enclosure with another board (an Odroid N2, for example)? Thanks a lot
  20. Hello all, I get the message "qemu kvm_arm_vcpu_init failed: Invalid argument", which I traced to the source code at https://git.qemu.org/?p=qemu.git;a=blob;f=cpus.c;hb=refs/heads/stable-2.11#l1122 and https://github.com/qemu/qemu/blob/master/accel/kvm/kvm-all.c line 444. I don't know which vCPU feature is expected...

        # cat /proc/cpuinfo
        processor       : 0
        BogoMIPS        : 48.00
        Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
        CPU implementer : 0x41
        CPU architecture: 8
        CPU variant     : 0x0
        CPU part        : 0xd03
        CPU revision    : 4
        ....

      Is it possible, now or later, to use virtualization? Regards
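      One common cause of this error on RK3399 boards (not confirmed to be the case here) is that the SoC mixes Cortex-A53 and Cortex-A72 cores, and KVM rejects vCPUs whose threads land on a different core type than the one the VM was initialized on. A sketch that pins QEMU to a single cluster; the core numbers 4-5 for the A72 cluster, and disk.img, are assumptions, so check lscpu first.

        lscpu                                  # which core numbers belong to which cluster
        taskset -c 4,5 qemu-system-aarch64 \
            -enable-kvm -M virt -cpu host -smp 2 -m 2048 \
            -nographic -drive file=disk.img,format=raw,if=virtio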
  21. Trying to understand this issue: if I ONLY need the 1 Gb/s (LAN1) Ethernet connection and DO NOT need the 2.5 Gb/s (LAN2) connection, do I need to perform the "fix" for this issue? I'll be using Armbian Buster on this device.
  22. I literally registered to post this complaint here. How STUPID it is to have the M.2 SATA SHARED WITH HDD1! IT DEFEATS THE WHOLE PURPOSE OF THIS ENCLOSURE. PERIOD!!!! My mistake was that I did not know what it meant that the M.2 was SHARED before I purchased this enclosure, and now I have a need to expand beyond the measly 16 GB of your eMMC and found out THE PAINFUL WAY that IT IS NOT POSSIBLE because of the shared SATA port. IT'S BEYOND FRUSTRATION. YOU CANNOT EVEN IMAGINE HOW STUPID THIS DECISION OF YOURS IS. IT JUST LITERALLY DOWNGRADED YOUR NAS FROM AN EXCELLENT ENCLOSURE TO OVERPRICED JUNK THAT I HAD TO WAIT MORE THAN 6 MONTHS FOR. AND THERE IS NO WAY TO RETURN IT TO YOU AND GET MY MONEY BACK. What should I do with my RAID now, once I want to install an SSD drive? Should I use the RAID array as slow storage and set up symbolic links to make sure I don't run out of space on the eMMC? IF THERE WERE SOME WAY TO RATE YOUR ENCLOSURE SO NOBODY ELSE MAKES THE SAME MISTAKE I DID, I WOULD CERTAINLY DO THAT.
  23. Anyone know a solution for this?

      Linux vlib2 5.9.14-rockchip64 #20.11.4 SMP PREEMPT Tue Dec 15 08:52:20 CET 2020 aarch64 aarch64 aarch64 GNU/Linux

        Jan 08 14:27:05 vlib2 systemd[2481]: Starting Timed resync...
        Jan 08 14:27:05 vlib2 profile-sync-daemon[11324]: WARNING: Do not call /usr/bin/profile-sync-daemon as root.
        Jan 08 14:27:05 vlib2 systemd[2481]: psd-resync.service: Main process exited, code=exited, status=1/FAILURE
        Jan 08 14:27:05 vlib2 systemd[2481]: psd-resync.service: Failed with result 'exit-code'.
        Jan 08 14:27:05 vlib2 systemd[2481]: Failed to start Timed resync.
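      The warning suggests profile-sync-daemon's resync timer is running inside root's systemd user session, which psd itself refuses. A sketch of moving it to a regular user, assuming the upstream psd.service/psd-resync.timer user units shipped with the Debian package:

        # In root's session: stop and disable the user units that trigger the warning.
        systemctl --user disable --now psd.service psd-resync.timer

        # As the regular (browser/desktop) user: enable them there instead.
        systemctl --user enable --now psd.service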
  24. sdelhaye

    Selling Helios64

    Hello everyone, if anyone would like to have a Helios64 without having to wait for the next batch, mine is for sale. It is the complete bundle and in very good condition. I will post pictures very soon. I live in France, so it would be better if you are in Europe so that the shipping costs remain reasonable...
  25. Hello guys, I have a question that may be unwise, but it's hard to find information about it. I would like to somehow protect my Helios64 and its power supply to minimize the risk of fire. Do you have any suggestions for minimizing that risk? Thank you for your help in advance.