Showing results for tags 'helios64'.

  1. I'm having a very weird issue. I've encrypted a 5TB USB drive on my computer with LUKS:

     sudo cryptsetup luksFormat /dev/sda1

     It works great on that computer, but when I plug it into my Helios64 and try to use it:

     sudo cryptsetup luksOpen /dev/sda1 secret
     Enter passphrase for /dev/sda1:
     No key available with this passphrase.

     It just refuses. And if I try to encrypt the drive from the Helios64 itself, cryptsetup never finishes; the command runs forever. Has anyone had similar issues? Any ideas?
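     Editor's note: both symptoms would be consistent with the desktop having created a LUKS2 header whose Argon2 key derivation needs more memory/time than the board can comfortably provide. This is only a guess; a hedged diagnostic sketch (device names as in the post, destructive steps marked):

     # Inspect the header created on the desktop; look at "Version" and the PBKDF section
     sudo cryptsetup luksDump /dev/sda1

     # If it is LUKS2 with argon2id and a large memory cost, reformatting with a cheaper
     # KDF may make it usable on the board (THIS DESTROYS the existing data):
     sudo cryptsetup luksFormat --type luks2 --pbkdf pbkdf2 /dev/sda1
     # ...or keep argon2id but cap its memory cost (value is in KiB):
     sudo cryptsetup luksFormat --type luks2 --pbkdf argon2id --pbkdf-memory 262144 /dev/sda1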
  2. Hello, I've had my Helios64 for a few weeks now and I'm enjoying it a lot. I've installed k3s on it with various applications. The next step is to install Home Assistant to control my home. Previously I had a setup with a Raspberry Pi and an Arduino Mega connected to it, running the rflink firmware; that way I can control cheap 433 MHz power switches. The problem is that when I connect the Arduino (acting as a serial input device) to my Helios64, the system no longer starts up. The following error message is printed on the serial console:

     WARN halted endpoint, queueing URB anyway.
     Unexpected XHCI event TRB, skipping... (f5f5deb0 00000000 13000000 04008400)
     "Synchronous Abort" handler, esr 0x96000010
     elr: 000000000025ee28 lr : 000000000025ee28 (reloc)
     elr: 00000000f7f6fe28 lr : 00000000f7f6fe28
     x0 : 0000000000000000 x1 : 00000000000003e8
     x2 : 0000000000000040 x3 : 000000000000003f
     x4 : 00000000f5f5e500 x5 : 0000000000001800
     x6 : 0000000000000030 x7 : 000000000000000f
     x8 : 00000000f5ef09c8 x9 : 0000000000000008
     x10: 00000000ffffffe8 x11: 0000000000000010
     x12: 00000000000000fc x13: 0000000000000002
     x14: 00000000f5ef11e0 x15: 0000000000000021
     x16: 00000000f7f6db08 x17: 0000000000000001
     x18: 00000000f5f08dc0 x19: 00000000f5f5b2c0
     x20: 00000000f5ef10c0 x21: 0000000000000000
     x22: 00000000f5f6a1d0 x23: 00000000000000ff
     x24: 0000000080000580 x25: 00000000f5ef0d40
     x26: 0000000000000004 x27: 0000000000000001
     x28: 00000000f5f6a1d0 x29: 00000000f5f6a1d0

     Code: 97ffff40 52800401 aa1303e0 97ffffa2 (b9400c00)

     After this message is printed, the CPU resets and the system tries to start up again. Does anyone have an idea what the problem could be? Does the system try to boot from the serial device? I recently upgraded to kernel 5.10.43, which did not solve the problem. Many thanks in advance for your help. Julius
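     Editor's note: the abort happens in U-Boot while it probes USB during the boot sequence, not in Linux. Assuming this U-Boot build uses the standard distro-boot boot_targets variable and has persistent environment storage (both assumptions), one hedged workaround is to take USB out of the scan order from the serial console:

     # At the U-Boot prompt (interrupt autoboot over the serial console):
     printenv boot_targets          # note the current order, e.g. "mmc1 mmc0 usb0 ..."
     setenv boot_targets mmc1 mmc0  # keep only the devices you actually boot from
     saveenv                        # persist the change to the U-Boot environment
     boot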
  3. I've been hopelessly trying to get Armbian (Buster) and OMV to work together properly, but alas, no luck. I decided to start over with a fresh, latest image on the SD card. I believe I may have unwittingly messed up the eMMC boot process, because I added an M.2 SATA drive yesterday and have since removed it. Hence my quest to simply get the SBC up and running from the SD card again. I have tried everything this morning and I cannot get an image to load at all. One of my questions: do the hard drives need to be installed just to get the SBC working? I have tried with them in and out, to no avail. Are there lingering config files residing on the HDs that are necessary for the SBC to boot properly? I hope not! I want to do all of the HD configuration and setup from within OMV once I get that reinstalled, but I can't even get the SBC to come alive. Is there a switch on the Helios64 mobo that will reset it to factory defaults? I'm at a loss at the moment, as I can't even PuTTY into the board, and nothing comes up on my screen (USB-C). Any help and advice would be appreciated. Rick
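     Editor's note: on RK3399 boards the boot ROM normally tries the eMMC before the SD card, so a damaged bootloader left on the eMMC can keep a perfectly good SD image from ever loading. If the Kobol wiki documents a jumper to disable the eMMC on your board revision, that is the safer route. Otherwise, if you can reach the system at all (for example from a rescue SD that still boots, or with the eMMC in an adapter), a common but destructive way to force SD boot is to blank the start of the eMMC; a sketch, with the device name as a placeholder:

     lsblk                                             # identify the eMMC first; /dev/mmcblkX below is a placeholder
     # WARNING: this destroys the bootloader AND partition table on the eMMC
     sudo dd if=/dev/zero of=/dev/mmcblkX bs=1M count=16 conv=fsync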
  4. After an upgrade to Armbian 20.11.3 Buster with Linux 5.9.14-rockchip64, eth1 has vanished.

     ❯ sudo ip link show
     1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
         link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
     2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
         link/ether 64:62:66:d0:06:18 brd ff:ff:ff:ff:ff:ff

     Any ideas? http://ix.io/2I2U
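     Editor's note: as far as I know, eth1 on the Helios64 is the 2.5 GbE Realtek attached over USB 3.0 and driven by the r8152 module; treat that driver name as an assumption and check whether the device and driver still show up after the upgrade:

     lsusb                              # the 2.5 GbE Realtek should appear as a USB device
     dmesg | grep -iE 'r8152|eth1|usb'
     sudo modprobe r8152                # try loading the driver by hand if it is missing
     ls /sys/class/net                  # interfaces the kernel currently knows about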
  5. Hey, posting this in hopes that someone might have an idea as to why this is happening. I've been dealing with an issue with my ZFS pool for a while now where the pool gets suspended but there are no other error indicators:

     WARNING: Pool 'rpool' has encountered an uncorrectable I/O failure and has been suspended.

     I'm using ZFS on top of LUKS and used to have problems with my drives resetting due to SATA errors, but I haven't really seen that issue since I started using my own SATA cables and limiting SATA speed to 3 Gbps. My working theory for the last week has been that it's a problem with the CPU, perhaps some handover between big.LITTLE. So I've tried changing ZFS module options to pin workers to CPU cores, and I've also tried dm-crypt options that do this, but nothing has helped. So either the theory was wrong, or the tweaks did not change the faulty behavior. I also tried disabling the little cores, but the machine refused to boot as a result. With anywhere from two pool stalls per day to one per week, I'm pretty much at my wit's end and ready to throw in the towel with my Helios64. In addition, I still have random kernel stalls/panics originating from RCU or null pointer dereferences (on boot, usually). I'm not really interested in learning to debug the Linux kernel, so I might just throw money at the problem and retire the Helios unless someone has a solution for this. I do love the idea of open source hardware and wish the best of success for Kobol and the Helios, but I wasn't quite ready to commit to this many problems. I've also tried setting the pool failure mode to panic (zpool set failmode=panic rpool), but it provides no useful output as far as I can tell:

     [22978.488772] Kernel panic - not syncing: Pool 'rpool' has encountered an uncorrectable I/O failure and the failure mode property for this pool is set to panic.
     [22978.490035] CPU: 1 PID: 1429 Comm: z_null_int Tainted: P OE 5.9.14-rockchip64 #20.11.4
     [22978.490833] Hardware name: Helios64 (DT)
     [22978.491182] Call trace:
     [22978.491416]  dump_backtrace+0x0/0x200
     [22978.491743]  show_stack+0x18/0x28
     [22978.492041]  dump_stack+0xc0/0x11c
     [22978.492346]  panic+0x164/0x364
     [22978.492962]  zio_suspend+0x148/0x150 [zfs]
     [22978.493678]  zio_done+0xbd0/0xec0 [zfs]
     [22978.494387]  zio_execute+0xac/0x150 [zfs]
     [22978.494783]  taskq_thread+0x278/0x460 [spl]
     [22978.495161]  kthread+0x140/0x150
     [22978.495453]  ret_from_fork+0x10/0x34
     [22978.495778] SMP: stopping secondary CPUs
     [22978.496134] Kernel Offset: disabled
     [22978.496443] CPU features: 0x0240022,2000200c
     [22978.496817] Memory Limit: none
     [22978.497098] ---[ end Kernel panic - not syncing: Pool 'rpool' has encountered an uncorrectable I/O failure and the failure mode property for this pool is set to panic. ]---

     It's not necessarily the Helios's fault either; this could very well be a bug in ZFS on ARM for all I know.
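     Editor's note: the panic only reports that the pool was suspended; the underlying I/O errors are usually easier to find in the ZFS event log and in dmesg around the same timestamps. A small diagnostic sketch:

     sudo zpool events -v rpool | less     # per-vdev error events leading up to the suspension
     sudo zpool status -v rpool
     dmesg -T | grep -iE 'ata|sata|i/o error|dm-' | tail -n 100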
  6. If anyone is interested, I've created a shell script that can be run with cron to check A/C and battery status, and do a graceful shutdown when a battery threshold is met. It also logs power outages, battery drain when off A/C, and when it posts a shutdown (if any). This is a simple script, nothing fancy, but it works for me. It has a granularity of one minute (cron is limited to one-minute intervals), but I've been able to keep the system up under heavy load for 4-5 minutes before the script triggers a shutdown on low battery. The script is generous: it waits for 916mV or less before shutting down. This value was provided by the Kobol team on their wiki as the recommended shutdown threshold for the battery, but it can be changed in the script. You can comment out any file-logging lines, or change the file location, also from within the script. I've heavily commented it, so it should be pretty easy to follow. Copy the script below into a file named whatever you choose, place it where you want to run it, and cron it to get it going. It should only create disk writes when the power is out or when shutting down from battery drain with no A/C (a few lines once per minute), and it will create no disk writes if you comment out the logging. Let me know how it works for you, if you use it. I've been testing it for the past couple of days, and it's worked as intended. I'm always open to code tips, and I'm still learning, so any advice is appreciated.
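     Editor's note: the script itself did not survive in this excerpt, so below is only a minimal sketch of the kind of cron-driven check described above. The sysfs paths are placeholders (check the Kobol wiki for the exact A/C and battery nodes on your image), and the 916 threshold is simply taken from the post:

     #!/bin/bash
     # Minimal UPS check, meant to run from cron once per minute.
     # PATHS ARE PLACEHOLDERS -- point them at the nodes documented in the Kobol wiki.
     AC_ONLINE_FILE=/sys/class/power_supply/ac/online            # hypothetical
     BATT_FILE=/sys/bus/iio/devices/iio:device0/in_voltage2_raw  # hypothetical
     THRESHOLD=916                  # shutdown threshold suggested in the post
     LOG=/var/log/ups-check.log

     ac=$(cat "$AC_ONLINE_FILE")
     batt=$(cat "$BATT_FILE")

     if [ "$ac" -eq 0 ]; then
         echo "$(date '+%F %T') on battery, reading=$batt" >> "$LOG"
         if [ "$batt" -le "$THRESHOLD" ]; then
             echo "$(date '+%F %T') battery low, shutting down" >> "$LOG"
             /usr/sbin/shutdown -h now
         fi
     fi

     A matching cron entry (path is up to you): * * * * * root /usr/local/bin/ups-check.sh in /etc/cron.d/ups-check.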
  7. My Helios64 setup has been running stably for months now (except for one MOSFET that failed and had to be replaced). However, after recent updates using the OMV5 package manager I am unable to transfer files to my computer over SMB shares. When transferring to a Windows computer the speed immediately plummets to 0 and stays there forever, or File Explorer crashes/stops responding. When transferring to an Ubuntu system the transfer times out. I can browse the files on the SMB share, and I can play them (movies, music, etc.) or run them directly with no issues. I can also transfer files using scp with no problem (just at much reduced speed/bandwidth).

     Version: 5.6.7-1
     Kernel: 5.10.35-rockchip64

     I was hoping for new updates to fix whatever is broken, but new updates won't install because of "unmet dependencies" and "you have held broken packages". So far this affects armbian-bsp-cli-helios64 21.05.2 and linux-buster-root-current-helios64 21.05.1. Anyone know what's wrong?
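     Editor's note: for the "held broken packages" part, a few standard apt/dpkg checks usually show what is pinning things; a sketch:

     apt-mark showhold                      # list packages explicitly held back
     sudo dpkg --configure -a               # finish any half-configured packages
     sudo apt-get -f install                # let apt try to repair broken dependencies
     apt-cache policy armbian-bsp-cli-helios64 linux-buster-root-current-helios64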
  8. I've been having HDD/SATA issues with my ZFS pool, as have many others. Now my HDDs will not spin up or be recognized when in slot 1 or 2. What's wrong, and how and what should I check? Any help appreciated.
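     Editor's note: since only specific slots are affected, comparing the kernel's view of those SATA ports against a known-good slot helps separate a drive problem from a backplane/harness/power problem; a small sketch (sdX is a placeholder):

     dmesg | grep -iE 'ata[0-9]|sata'       # look for link-up / reset messages per port
     lsblk -o NAME,SIZE,MODEL,SERIAL        # which drives are actually visible
     sudo smartctl -a /dev/sdX              # health of a drive that does show up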
  9. My Helios64 has been running well and, for the most part, stably for the past few weeks. However, last night I noticed when looking at my file systems in OMV that my 14 GiB eMMC drive, which houses my Armbian install, is at 13.13 GiB used and looks to be growing. Normally this sits around 5 GiB. I have checked my Docker install for anything suspicious, but my Plex container is only taking up about 400 MB. Looking at my system logs I am getting this repeated error, which is new and might be related, though I am not really sure what program is causing it:

     Jun 10 17:32:27 localhost rsyslogd: [origin software="rsyslogd" swVersion="8.1901.0" x-pid="1322" x-info="https://www.rsyslog.com"] rsyslogd was HUPed
     Jun 10 17:32:28 localhost smbd[2572]: [2021/06/10 17:32:28.177454, 0] ../source3/param/loadparm.c:3362(process_usershare_file)
     Jun 10 17:32:28 localhost smbd[2572]: process_usershare_file: stat of /var/lib/samba/usershares/downloads failed. No such file or directory
     Jun 10 17:32:30 localhost smbd[2572]: [2021/06/10 17:32:30.195229, 0] ../source3/param/loadparm.c:3362(process_usershare_file)
     Jun 10 17:32:30 localhost smbd[2572]: process_usershare_file: stat of /var/lib/samba/usershares/downloads failed. No such file or directory

     I would really appreciate any help on how I can find out what is filling my drive so I can correct it. Thank you.
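     Editor's note: a quick way to find what is eating the eMMC is to walk the root filesystem by directory size (staying on one filesystem) and to check the usual log/Docker suspects:

     sudo du -xh --max-depth=2 / 2>/dev/null | sort -h | tail -n 25
     journalctl --disk-usage                # systemd journal size
     sudo du -sh /var/log /var/lib/docker
     docker system df                       # images/containers/volumes breakdown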
  10. Hi, I'm very pleased with my Helios64, but since I updated to Armbian 20.11.6 (from 20.08.11 or 21, not really sure) it reboots at random times: it can run fine for over two weeks, then reboot twice in 36 hours, always when idling. It's running OMV as well as multiple Docker containers: Pi-hole, WireGuard, Nginx, Transmission, restic-server and n8n. Is this a common issue? What fixes do you recommend? What should I look for to diagnose the issue? Thank you. Edit: forgot to mention that I'm using SnapRAID and mergerfs, as well as NFS shares.
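      Editor's note: random idle reboots usually leave nothing in the on-disk logs unless the journal is persistent, so making it persistent and grabbing the standard Armbian diagnostics are good first steps; a hard CPU crash will only show up on the serial console (USB-C):

      sudo mkdir -p /var/log/journal && sudo systemctl restart systemd-journald   # make the journal persistent
      journalctl -b -1 -e        # end of the journal from the boot before the reboot
      sudo armbianmonitor -u     # uploads system diagnostics and returns a link to share on the forum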
  11. Hello, today I tried a USB-C to HDMI adapter and I get a picture on the TV, but it's really unreadable, and my log is full of this:

      rockchip-vop ff8f0000.vop: [drm:vop_plane_atomic_update [rockchipdrm]] *ERROR* Maximum dst width (3840) exceeded

      Is it possible to restrict the resolution from the terminal? I think this error occurs because of a too-high resolution or wrong detection. Thank you.
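      Editor's note: one way to pin the output to a lower mode is the kernel's video= parameter, which on Armbian can be appended to extraargs in /boot/armbianEnv.txt. The connector name (HDMI-A-1 below) is an assumption; check /sys/class/drm for the actual one:

      ls /sys/class/drm/                        # find the connector name, e.g. card0-HDMI-A-1
      # append to the extraargs line in /boot/armbianEnv.txt:
      #   extraargs=video=HDMI-A-1:1920x1080@60
      # then reboot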
  12. Hi all. I recently updated my kernel and rebooted my Helios64. When it loaded into OMV I noticed my RAID drive is missing. I can still see the disks, just not the RAID drive. Checking on the array, I can see it failed with error 524. I would love some suggestions on how to repair the array. Below are the details I was able to get; sorry for the large info dump.

      root@helios64:~# mdadm --run /dev/md0
      mdadm: failed to start array /dev/md0: Unknown error 524

      root@helios64:~# mdadm -D /dev/md0
      /dev/md0:
         Version : 1.2
         Creation Time : Mon Dec 14 16:49:07 2020
         Raid Level : raid0
         Raid Devices : 4
         Total Devices : 4
         Persistence : Superblock is persistent
         Update Time : Mon Dec 14 16:49:07 2020
         State : active, FAILED, Not Started
         Active Devices : 4
         Working Devices : 4
         Failed Devices : 0
         Spare Devices : 0
         Chunk Size : 512K
         Consistency Policy : unknown
         Name : helios64:Home (local to host helios64)
         UUID : bc34df85:c451fddc:fdc99dec:5400a7bb
         Events : 0

         Number Major Minor RaidDevice State
         -      0     0     0          removed
         -      0     0     1          removed
         -      0     0     2          removed
         -      0     0     3          removed
         -      8     32    0          sync   /dev/sdc
         -      8     0     2          sync   /dev/sda
         -      8     48    1          sync   /dev/sdd
         -      8     16    3          sync   /dev/sdb

      root@helios64:~# cat /proc/mdstat
      Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
      md0 : inactive sdd[1] sdc[0] sdb[3] sda[2]
            19534553088 blocks super 1.2
      unused devices: <none>

      root@helios64:~# mdadm --examine /dev/sd[abcde]
      /dev/sda:
         Magic : a92b4efc
         Version : 1.2
         Feature Map : 0x0
         Array UUID : bc34df85:c451fddc:fdc99dec:5400a7bb
         Name : helios64:Home (local to host helios64)
         Creation Time : Mon Dec 14 16:49:07 2020
         Raid Level : raid0
         Raid Devices : 4
         Avail Dev Size : 11720780976 (5588.90 GiB 6001.04 GB)
         Data Offset : 264192 sectors
         Super Offset : 8 sectors
         Unused Space : before=264112 sectors, after=0 sectors
         State : clean
         Device UUID : 4efe5ed4:e612db01:c827abcf:30cf9d60
         Update Time : Mon Dec 14 16:49:07 2020
         Bad Block Log : 512 entries available at offset 8 sectors
         Checksum : 403cbc68 - correct
         Events : 0
         Chunk Size : 512K
         Device Role : Active device 2
         Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
      /dev/sdb:
         Magic : a92b4efc
         Version : 1.2
         Feature Map : 0x0
         Array UUID : bc34df85:c451fddc:fdc99dec:5400a7bb
         Name : helios64:Home (local to host helios64)
         Creation Time : Mon Dec 14 16:49:07 2020
         Raid Level : raid0
         Raid Devices : 4
         Avail Dev Size : 11720780976 (5588.90 GiB 6001.04 GB)
         Data Offset : 264192 sectors
         Super Offset : 8 sectors
         Unused Space : before=264112 sectors, after=0 sectors
         State : clean
         Device UUID : 2fa13dfd:7e1c90cc:e086e232:a2f84f2a
         Update Time : Mon Dec 14 16:49:07 2020
         Bad Block Log : 512 entries available at offset 8 sectors
         Checksum : 60b9f16c - correct
         Events : 0
         Chunk Size : 512K
         Device Role : Active device 3
         Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
      /dev/sdc:
         Magic : a92b4efc
         Version : 1.2
         Feature Map : 0x0
         Array UUID : bc34df85:c451fddc:fdc99dec:5400a7bb
         Name : helios64:Home (local to host helios64)
         Creation Time : Mon Dec 14 16:49:07 2020
         Raid Level : raid0
         Raid Devices : 4
         Avail Dev Size : 7813772976 (3725.90 GiB 4000.65 GB)
         Data Offset : 264192 sectors
         Super Offset : 8 sectors
         Unused Space : before=264112 sectors, after=0 sectors
         State : clean
         Device UUID : 4cee06fa:a32a5035:df544d4a:9a37139a
         Update Time : Mon Dec 14 16:49:07 2020
         Bad Block Log : 512 entries available at offset 8 sectors
         Checksum : 649123a1 - correct
         Events : 0
         Chunk Size : 512K
         Device Role : Active device 0
         Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
      /dev/sdd:
         Magic : a92b4efc
         Version : 1.2
         Feature Map : 0x0
         Array UUID : bc34df85:c451fddc:fdc99dec:5400a7bb
         Name : helios64:Home (local to host helios64)
         Creation Time : Mon Dec 14 16:49:07 2020
         Raid Level : raid0
         Raid Devices : 4
         Avail Dev Size : 7813772976 (3725.90 GiB 4000.65 GB)
         Data Offset : 264192 sectors
         Super Offset : 8 sectors
         Unused Space : before=264112 sectors, after=0 sectors
         State : clean
         Device UUID : 52c35a71:983beb7b:eeae2343:83e9fa0f
         Update Time : Mon Dec 14 16:49:07 2020
         Bad Block Log : 512 entries available at offset 8 sectors
         Checksum : 913e1594 - correct
         Events : 0
         Chunk Size : 512K
         Device Role : Active device 1
         Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

      root@helios64:~# mdadm --stop /dev/md0
      mdadm: stopped /dev/md0
      root@helios64:~# mdadm /dev/md0 --assemble /dev/sd[abcde]
      mdadm: failed to RUN_ARRAY /dev/md0: Unknown error 524
      root@helios64:~# mdadm /dev/md0 --assemble --force /dev/sd[abcde]
      mdadm: /dev/sda is busy - skipping
      mdadm: /dev/sdb is busy - skipping
      mdadm: /dev/sdc is busy - skipping
      mdadm: /dev/sdd is busy - skipping
      root@helios64:~#
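      Editor's note: error 524 is ENOTSUPP, and with a raid0 built from mixed-size members (two 6 TB and two 4 TB here) this looks like the raid0 layout change introduced around kernel 5.4: newer kernels refuse to start such arrays until told which layout was used. If that is what's happening, the module parameter below lets the array start; pick the layout value matching the kernel the array was created under (getting it wrong can corrupt data, so read the md/raid0 documentation first). A sketch:

      # temporary, until the next reboot:
      echo 2 | sudo tee /sys/module/raid0/parameters/default_layout
      sudo mdadm --stop /dev/md0
      sudo mdadm --assemble /dev/md0 /dev/sd[abcd]
      # to make it permanent, add to the kernel command line (e.g. extraargs in /boot/armbianEnv.txt):
      #   raid0.default_layout=2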
  13. Hello, I did an update of the system (apt dist-upgrade), and afterwards the system doesn't start anymore. I use the internal eMMC, so I tried the latest Armbian Buster (Armbian_21.02.3_Helios64_buster_current_5.10.21.img) from an SD card, and that works. I mounted the eMMC, ran fsck, finally chrooted into it and did an apt update and dist-upgrade. I attach the console log; the system hangs at "Starting kernel". helios64_20210514.txt
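      Editor's note: a hang at "Starting kernel" after a dist-upgrade often means the kernel/DTB on the eMMC no longer matches what U-Boot loads, so reinstalling the kernel packages from inside the chroot (with /dev, /proc and /sys bound so the package hooks can run) is worth a try. The package names below are the usual Armbian "current" rockchip64 ones and should be checked against dpkg -l | grep linux-; note that reinstalling the U-Boot package only refreshes the files under /usr/lib, writing it to the eMMC is a separate step (nand-sata-install / armbian-config can do it). A sketch, with the eMMC device name as a placeholder:

      sudo mount /dev/mmcblkXp1 /mnt            # eMMC root; check lsblk for the real device
      for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
      sudo chroot /mnt
      apt install --reinstall linux-image-current-rockchip64 linux-dtb-current-rockchip64 linux-u-boot-helios64-current
      exit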
  14. Dear all, I am highly interested in the Helios64 and registered for the next batch. Here are a few questions I hope you guys can answer. The OS is embedded Debian, I assume. The questions are quite simple:

      1) Hard drive numbering on startup. Under Debian, the hard drives in a RAID array are renumbered on each startup at random: /dev/sda, /dev/sdb, etc. This means that if drive /dev/sdb fails, it is not necessarily the second hard drive in the bay. I already lost a RAID server by extracting the wrong disc from an array. Then I migrated to a Synology NAS and noticed that this problem had been fixed, as the discs are clearly numbered. This was never fixed in Debian, so I assume it's the same under Armbian? Can you confirm? Example: assume an array /dev/md0 contains /dev/sda, /dev/sdb, /dev/sdc and /dev/sdd. Eject /dev/sda. Reboot. Now your RAID array has /dev/sda, /dev/sdb and /dev/sdc, and the last disc is absent. In fact, Debian and Linux renumber the discs on each boot, which is highly insecure, as you never know which disc to eject. (See the sketch after this post for identifying drives by serial number.)

      2) Virtual machine isolation. I am not sure how to isolate a virtual machine under ARM. Is KVM supported? What can be used to run several machines and VLANs under the same host? Is it as easy as under x86?

      3) Power management. I don't like my ARM Synology NAS going completely idle; it sometimes takes 10 s to come back from idle. Is there a way to slow down the ARM processor and idle all discs, to make it faster to respond to a query? I would prefer the system to be on NVMe with the processor running on one core waiting for connections, but remaining completely available. In fact, I would like the Helios64 to be able to run in a low-consumption mode, but not completely idle. By the way, what is the Helios64 power consumption in watts?

      4) Team going away. Keep up the good work, you are making the future, which is ARM. In a few years, NAS devices will all have 8 cores, hardware encoding and only NVMe discs. So just keep designing the new NAS and you will have it manufactured one way or another. Don't give up because of manufacturing difficulties and rising costs (I am prepared to pay up to 500 EUR for a good NAS with plenty of RAM and cores).

      Thanks in advance, Kellogs
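      Editor's note on question 1: the kernel's sdX names are indeed not stable, but mdadm tracks array members by UUID, and udev exposes stable names that include each drive's model and serial number, which you can match against the label on the physical disk before pulling it; a sketch:

      ls -l /dev/disk/by-id/ | grep -v part       # stable names like ata-<MODEL>_<SERIAL> -> ../../sda
      sudo mdadm --detail /dev/md0                # members listed with their current sdX names
      sudo smartctl -i /dev/sda | grep -i serial  # confirm which physical drive sda is right now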
  15. I'm working on a new distro, an ARM port of slackware64-15.0 beta. The cooked image is here: http://mirrors.aptalaska.net/slackware/slarm64/images/helios64/ This image is built using the build scripts at https://gitlab.com/sndwvs/images_build_kit/-/tree/arm/ The high-level view is that it downloads the kernel from git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git (linux-5.12.y), then applies the patches found at https://gitlab.com/sndwvs/images_build_kit/-/tree/arm/patch/kernel/rk3399-next and builds the kernel with the config https://gitlab.com/sndwvs/images_build_kit/-/blob/arm/config/kernel/linux-rk3399-next.config This is very similar to Armbian, and the system seems to work just fine at first, but I cannot build ZFS without segfaults or other crashes. I figured it was overheating or something, so I added:

      echo 255 > /sys/class/hwmon/hwmon3/pwm1
      echo 255 > /sys/class/hwmon/hwmon4/pwm1

      which turns the fans on full. The system gets very loud and stays nowhere near the 100C critical threshold, but when one of the fast cores (4-5) runs at 100% for more than a minute, the compiler crashes (no reboots). I started lowering the CPU frequency until I could get through a build; the fastest I can set the CPU and stay stable is 408 MHz:

      cpufreq-set --cpu 0 --freq 408000
      cpufreq-set --cpu 4 --freq 408000

      I also tried changing the voltage regulators per another post I found:

      regulator dev vdd_log
      regulator value 930000
      regulator dev vdd_center
      regulator value 950000

      But that didn't help. Any other ideas on how to make this box stable at more than 408 MHz? It would be great to have Slackware on it and be able to do things with it. Thanks, Matt
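      Editor's note: Armbian's stability work for this board centered, as far as I understand it, on raising the big-cluster (vdd_cpu_b) supply at the upper OPPs in the device tree rather than on vdd_log/vdd_center, so it may be worth checking whether your kernel tree carries that patch. In the meantime you can bisect frequency ceilings instead of dropping straight to 408 MHz; a sketch (paths/flags as I remember them, verify on your system):

      cat /sys/kernel/debug/regulator/regulator_summary | grep -A2 vdd_cpu_b   # needs debugfs; voltage actually requested under load
      cpufreq-set --cpu 4 --governor ondemand
      cpufreq-set --cpu 4 -u 1.4GHz            # upper limit only; step down until builds survive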
  16. Hello, I followed the instructions and installed ZFS, rebooted, installed all updates via apt update and apt upgrade, rebooted again, and the situation is now this:

      root@helios64:~# zpool status
      The ZFS modules are not loaded.
      Try running '/sbin/modprobe zfs' as root to load them.
      root@helios64:~# modprobe zfs
      modprobe: FATAL: Module zfs not found in directory /lib/modules/5.10.21-rockchip64
      root@helios64:~# apt install zfsutils-linux
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      zfsutils-linux is already the newest version (2.0.3-1~bpo10+1).
      0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
      root@helios64:~# uname -a
      Linux helios64 5.10.21-rockchip64 #21.02.3 SMP PREEMPT Mon Mar 8 01:05:08 UTC 2021 aarch64 GNU/Linux

      Any suggestions?
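      Editor's note: zfsutils-linux only provides the userland tools; the kernel module has to come from DKMS (or a prebuilt module package), and DKMS can only build it if the headers for the running kernel are installed. A sketch, assuming the usual Armbian header package name and that buster-backports is enabled for zfs-dkms:

      sudo apt install linux-headers-current-rockchip64
      sudo apt install -t buster-backports zfs-dkms
      dkms status                       # should show zfs built for 5.10.21-rockchip64
      sudo modprobe zfs && zpool status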
  17. I notice the Helios64 hardware supports WOL, but I can't see anything in the wiki about it (only for the Helios4). The Helios64 can draw some power with 5 spinning disks, and I would love to have it sleep until needed. Is there any information on setting this up? I can see the Autoshutdown plugin for OMV, but there are options for: Shutdown, Hibernate, Suspend, Hybrid Sleep, Suspend then Hibernate. Does the Helios64 support all these methods? And once the unit is in some form of shutdown or sleep state, how can I make use of WOL to wake it up? Thanks.
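      Editor's note: whether WOL actually wakes the board depends on the NIC, driver and sleep state, but checking and enabling it from userspace works the same as on any Linux box; a sketch (interface name, MAC and the sender's tooling are placeholders):

      sudo ethtool eth0 | grep -i wake        # "Supports Wake-on: g" means magic-packet wake is supported
      sudo ethtool -s eth0 wol g              # enable magic-packet wake (not persistent across reboots)
      # From another machine on the LAN, send the magic packet to the NIC's MAC address:
      wakeonlan aa:bb:cc:dd:ee:ff             # or: etherwake -i eth0 aa:bb:cc:dd:ee:ff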
  18. I am having an issue where, after the system has been running for some time (like a day), rebooting results in a crash during kernel loading. I then have to hard-reset the system and it boots. If I reboot right away again, there is no issue. I used to have quite a few stability issues, which are now all solved with the voltage fix applied. I'm running stock frequencies with the ondemand governor. Anyone have any ideas? I hope the Kobol team is enjoying some time off, though I still see you around. Keep recharging, you really deserve it.
  19. Hi folks, I'm running a fresh install of the Debian branch with the 5.x kernel branch. I've got 2 new 4TB HDs set up with mdadm in RAID 1 mode to store my photos and documents, with Syncloud also running on the Helios64. I also have a 2.5" SATA drive mounted and shared via Samba to my home network. My Helios64 is plugged via the 2.5GbE port directly into the 2.5GbE port on my router. I didn't apply the hardware fix, but if I understood correctly it's not necessary in my case (2.5GbE port to a 2.5GbE router). My problem: on the same router I have an Apple TV (on an Ethernet port too) running the Infuse app, and there lies the issue. I have big MKV files that mostly play fine, but all MP4 files lag and freeze! Before I got the Helios NAS, this same SATA drive was plugged into the USB 2 port of my router and never froze; there was only a delay while caching the video file, but after that everything played fine. Is this a known problem? Is there a way to check the health of my SATA drive? I've read on the wiki that there is a TX offload setting that can be disabled (ethtool -K eth1 ...)? If I understood correctly, the idea is to keep the CPU from doing that work and hand it to the network hardware. Do you have any recommendations I should follow to sort this out? Thanks in advance. PS: I'm a rookie in the Linux world but I enjoy it a lot, so be kind with the commands ...
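      Editor's note: two quick checks matching the questions asked: drive health via SMART, and the offload workaround for the 2.5GbE port. The exact offload flags to disable are whatever the Kobol wiki lists; the ones below are a common combination and only a sketch:

      sudo smartctl -a /dev/sda                 # replace sda with your data drive; look at reallocated/pending sectors
      sudo ethtool -k eth1 | grep -E 'tx-check|scatter|tcp-segmentation'   # current offload state
      sudo ethtool -K eth1 tx off               # disable TX checksum offload (resets on reboot)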
  20. It's been 3 months since we posted on our blog. While we have been pretty active on the Armbian/Kobol forum for support, and are still working at improving software support and stability, we have in parallel been developing the next iteration of Helios64 around the latest Rockchip SoC, the RK3568. However, things haven't been progressing as fast as we would have wished. Looking back, 2020 was a very challenging year in which to deliver a new product, and it took quite a toll on the small team we are. Our energy level is a bit low and we still haven't really recovered. Now, with the surge in electronic part prices and crazy lead times, it's even harder to have business visibility in an already challenging market. In light of the above, we have decided to go on a full break for the next 2 months, to recharge our batteries away from Kobol and come back with a refocused strategy and renewed energy. Until we are back, we hope you will understand that communication on the different channels (blog, wiki, forum, support email) will be kept to a minimum for the next 2 months. Thanks again to all of you for your support.
  21. I've been trying to connect a USB HDD for a while now, and depending on the SATA disk and the USB<->SATA adapter I've had different behaviours:

      * 700 mA HDD
        * Adapter A works; I can mount and copy.
        * Adapter B is detected, and so is the SATA device in dmesg, but it does not appear in lsblk.
      * 1 A HDD (Seagate Barracuda 2TB 2.5")
        * Adapter A seems to be detected and the disk tries to start (sde shows up in dmesg), but the USB device disappears, reappears, etc. ("USB disk bootloop").
        * Adapter B is not even detected in dmesg or lsusb!

      Obviously, all 4 configurations work perfectly on an x86 host. What could be going on? I'm pretty confident this is a USB current limitation. I'm buying a USB-A + USB-A -> USB-C cable to power the disk from an external source too, and will see if that helps. If that's the case, this current limitation is HIGHLY PROBLEMATIC; please fix it in a future revision of the board.
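      Editor's note: you can at least check the power theory from the host side by looking at what each enclosure claims to draw and whether the port reports over-current, and rule out UAS issues, which produce very similar disappear/reappear loops with some bridge chips; a sketch (VID:PID values are placeholders):

      lsusb -v 2>/dev/null | grep -iE 'idVendor|idProduct|MaxPower'   # what each adapter claims to draw
      dmesg | grep -iE 'over-current|uas|usb-storage|reset.*usb'
      # If the bridge misbehaves with UAS, force plain usb-storage for it:
      #   add usb-storage.quirks=VVVV:PPPP:u to the kernel command line and reboot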
  22. Hello, we have a strange thing here. Since... (I don't remember when), tray number 2 has not been working. We tried 3 different disks, but none of them spin up when inserted in tray 2. Why, and how can we fix it? Thank you. PS: the other trays (1, 3, 4, 5) are OK.
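      Editor's note: watching the kernel log while hot-inserting a known-good disk into tray 2 shows whether the SATA link comes up at all (pointing at the data path) or nothing happens at all (pointing at power/harness); compare against a working tray, since the ata port number that maps to tray 2 can differ per image:

      sudo dmesg --follow        # insert a disk into tray 2 and watch for "ata<N>: SATA link up" or errors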
  23. Very interested in the Helios64, but I can't help noticing that the Rockchip CPU in it is 5+ years old at this point. Any plans to upgrade it to a more modern CPU? At this price range I'd probably lean towards an Odroid H2+ with a Celeron J4115 chip, which has something like 2-3x the multicore performance and a similar TDP.
  24. Hello guys! I recently bought a Helios64 NAS and now I am playing around with it. My intention is to set it up as I need it, but I also want to understand how the system works. Over the last weeks I made a huge effort to read everything about what Armbian exactly is, how it works, and also how the boot process works. I would say I have a good overview of the topics now, but I am at a point where I need your help. My intention right now is the following: I want to make an automatic backup system which runs regularly and saves the backups incrementally to an external HDD. The idea is: when I mess up my system (which I certainly will while playing around), I can copy one of the working backups onto another SD card, boot from that instead, and have a working system again. I have already had a look at the nand-sata-install script, and based on that I already have the commands ready to copy the root filesystem. (I am using parted mklabel and parted mkpart to prepare the partition table, then mkfs.ext4 to create the filesystem, and then the rsync command from the nand-sata-install script (with the -x option to stop at filesystem boundaries) to copy the root filesystem.) This is the first step for now. Later, I want to include another step: I will back up my root filesystem with Borg backup into a Borg repository, which stores incremental backups very well. Then, when I need a backup, I rsync from one of the Borg backups onto my SD card. But even after exhaustive research into how the boot process works in general (loading the MBR -> jumping to the boot sector or boot partition -> loading the boot file -> loading the kernel), and trying to understand how the nand-sata-install script handles the U-Boot install, I don't understand how to make my SD card bootable. With hexdump -C -n 512 /dev/mmcblk0 I can see the MBR with the signature "55 aa" as the last two bytes, which tells the firmware that this card is bootable. Then I see with parted /dev/mmcblk0 unit s print that the first and only partition (primary, ext4) starts at sector 32768, where one sector equals 512 bytes. With hexdump, I see that the sectors before the first partition actually contain some data, but I am not sure if this is garbage or boot code from Das U-Boot. Reading this tutorial (https://linux-sunxi.org/Bootable_SD_card), it actually seems that the first sectors of the SD card hold the U-Boot binary, written to the card with dd. However, I didn't find any *.bin file in the /boot/ directory on Armbian. Based on this tutorial (https://neosmart.net/wiki/mbr-boot-process/) I understood how the boot process works, and that the first sector(s) of a partition can also contain boot code. However, with hexdump on /dev/mmcblk0p1, I saw that the first 1024 bytes of the partition are zero. So my question is: how can I make my cloned SD card bootable, either by backing up the boot code from the original card and copying it onto the new card, or by re-creating the boot code freshly on the cloned SD card (from my PC)? I would also like to understand what I am actually doing and why, not just copy commands blindly. Thanks a lot for your help!
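      Editor's note: on Rockchip boards the data you found between the MBR and sector 32768 is indeed U-Boot (the idbloader image conventionally starts at sector 64, with the main U-Boot image further in), so the simplest way to make the clone bootable is to copy exactly that region from the working card. The Armbian U-Boot binaries should also be available under /usr/lib/linux-u-boot-*helios64*/ if you would rather rewrite them from the installed package. A sketch; /dev/sdX is the clone, so double-check device names before running dd:

      # Copy sectors 64..32767 (idbloader + U-Boot) from the working card to the clone:
      sudo dd if=/dev/mmcblk0 of=/dev/sdX bs=512 skip=64 seek=64 count=32704 conv=fsync
      # The partition table itself was already recreated with parted, so the MBR is deliberately not copied here.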
  25. While I like having the helios64-ups.timer in place in case of a power failure, I don't like that my logs get three lines written to them every 20 seconds:

      Apr 09 07:43:26 helios64 systemd[1]: Starting Helios64 UPS Action...
      Apr 09 07:43:26 helios64 systemd[1]: helios64-ups.service: Succeeded.
      Apr 09 07:43:26 helios64 systemd[1]: Finished Helios64 UPS Action.

      Does anyone know if there is a way to keep the timer on but not fill the logs this way? Everything I've found about silencing systemd messages is about the output of the command, not about systemd's own activity messages.
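      Editor's note: one approach that is commonly reported to quiet exactly these per-unit "Starting/Finished" lines is a drop-in that raises the unit's maximum log level, so journald drops info-level messages associated with it. Treat it as something to test rather than a guarantee, since LogLevelMax is documented primarily for the unit's own output:

      sudo systemctl edit helios64-ups.service
      # add in the editor that opens:
      #   [Service]
      #   LogLevelMax=notice
      sudo systemctl daemon-reload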