
Helios64 Support


gprovost


This main thread has been locked. Thanks to the Armbian team, we now have our own Kobol club (a kind of sub-forum) dedicated to our products (Helios64 & Helios4).

So now you are encouraged to create individual threads to report new issues, discuss features, share setups, etc.

This will help to improve overall readability.

 

Note: Before asking a question, please ensure the information is not already available on the Helios64 Wiki or addressed in a previous post in this forum.

 


 

Latest Build :

 

Check : https://dl.armbian.com/helios64/

 

Archived Build :

 

Check : https://archive.armbian.com/helios64/archive/

 

Known Issues :

 

Work-in-progress images; not all board features are supported yet.

 

Find the software support status on our wiki or here on the forum.

 

 


The kernel panicked when I tried to mount a filesystem.

 

The log from the serial console (from the panic until the reboot):


 


[ 2308.251122] Unable to handle kernel paging request at virtual address ffff8000114ee01a
[ 2308.251834] Mem abort info:
[ 2308.252086]   ESR = 0x96000007
[ 2308.252363]   EC = 0x25: DABT (current EL), IL = 32 bits
[ 2308.252833]   SET = 0, FnV = 0
[ 2308.253108]   EA = 0, S1PTW = 0
[ 2308.253389] Data abort info:
[ 2308.253648]   ISV = 0, ISS = 0x00000007
[ 2308.253990]   CM = 0, WnR = 0
[ 2308.254258] swapper pgtable: 4k pages, 48-bit VAs, pgdp=00000000035e3000
[ 2308.254851] [ffff8000114ee01a] pgd=00000000f7fff003, p4d=00000000f7fff003, pud=00000000f7ffe003, pmd=00000000f7ffb003, pte=0000000000000000
[ 2308.255959] Internal error: Oops: 96000007 [#1] PREEMPT SMP
[ 2308.256454] Modules linked in: softdog xt_conntrack xt_MASQUERADE nf_conntrack_netlink xfrm_user xfrm_algo nft_counter xt_addrtype nft_compat nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables nfnetlink br_netfilter bridge governor_performance r8152 snd_soc_hdmi_codec snd_soc_rockchip_i2s leds_pwm panfrost snd_soc_core gpu_sched hantro_vpu(C) rockchip_vdec(C) snd_pcm_dmaengine rockchipdrm rockchip_rga snd_pcm gpio_charger pwm_fan v4l2_h264 dw_mipi_dsi videobuf2_dma_contig snd_timer videobuf2_dma_sg videobuf2_vmalloc dw_hdmi v4l2_mem2mem snd fusb30x(C) analogix_dp videobuf2_memops videobuf2_v4l2 drm_kms_helper soundcore videobuf2_common sg cec videodev rc_core mc drm drm_panel_orientation_quirks cpufreq_dt gpio_beeper dm_mod nfsd auth_rpcgss nfs_acl lockd grace lm75 sunrpc ip_tables x_tables autofs4 raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx raid1 raid0 multipath linear md_mod realtek dwmac_rk stmmac_platform stmmac mdio_xpcs adc_keys
[ 2308.264198] CPU: 5 PID: 0 Comm: swapper/5 Tainted: G         C        5.8.13-rockchip64 #20.08.7
[ 2308.264973] Hardware name: Helios64 (DT)
[ 2308.265325] pstate: 80000085 (Nzcv daIf -PAN -UAO BTYPE=--)
[ 2308.265829] pc : check_preemption_disabled+0x2c/0x108
[ 2308.266279] lr : debug_smp_processor_id+0x20/0x30
[ 2308.266696] sp : ffff800011c63dc0
[ 2308.266993] x29: ffff800011c63dc0 x28: ffff0000f6ea6580 
[ 2308.267467] x27: ffff800011a02000 x26: ffff0000f6ea6b40 
[ 2308.267940] x25: 0000000000000000 x24: ffff800011809980 
[ 2308.268412] x23: ffff0000f6ea6580 x22: ffff80001180a000 
[ 2308.268885] x21: 0000000000000000 x20: ffff0000f6ea6580 
[ 2308.269357] x19: ffff8000111ee688 x18: 0000000000000000 
[ 2308.269829] x17: 0000000000000000 x16: 0000000000000000 
[ 2308.270301] x15: 0000000000000000 x14: 0000000000000000 
[ 2308.270773] x13: 0000000000000359 x12: 0000000000000360 
[ 2308.271245] x11: 0000000000000001 x10: 0000000000000a20 
[ 2308.271717] x9 : ffff800011c63e70 x8 : 0000000000000001 
[ 2308.272189] x7 : 0000000000000400 x6 : ffff0000f77c8c40 
[ 2308.272661] x5 : 0000000000000000 x4 : 0000000000000009 
[ 2308.273133] x3 : ffff8000114ee018 x2 : 0000000000000002 
[ 2308.273605] x1 : ffff8000112024b8 x0 : ffff8000e62c5000 
[ 2308.274077] Call trace:
[ 2308.274302]  check_preemption_disabled+0x2c/0x108
[ 2308.274723]  debug_smp_processor_id+0x20/0x30
[ 2308.275115]  fpsimd_save+0x14/0x120
[ 2308.275429]  fpsimd_thread_switch+0x28/0xe0
[ 2308.275805]  __switch_to+0x24/0x1a0
[ 2308.276119]  __schedule+0x398/0x808
[ 2308.276434]  schedule_idle+0x28/0x48
[ 2308.276756]  do_idle+0x184/0x288
[ 2308.277047]  cpu_startup_entry+0x24/0x68
[ 2308.277400]  secondary_start_kernel+0x140/0x178
[ 2308.277808] Code: a9025bf5 aa0003f3 b9401282 d538d080 (b8606875) 
[ 2308.278356] ---[ end trace 9ab3582cc73aa238 ]---
[ 2308.278769] Kernel panic - not syncing: Attempted to kill the idle task!
[ 2308.279362] SMP: stopping secondary CPUs
[ 2309.446382] SMP: failed to stop secondary CPUs 3,5
[ 2309.446808] Kernel Offset: disabled
[ 2309.447121] CPU features: 0x240022,2000600c
[ 2309.447493] Memory Limit: none
[ 2309.447778] Rebooting in 90 seconds..
DDR Version 1.24 20191016
In
soft reset
SRX
channel 0
CS = 0
MR0=0x18
MR4=0x1
MR5=0x1
MR8=0x10
MR12=0x72
MR14=0x72
MR18=0x0
MR19=0x0
MR24=0x8
MR25=0x0
channel 1
CS = 0
MR0=0x18
MR4=0x1
MR5=0x1
MR8=0x10
MR12=0x72
MR14=0x72
MR18=0x0
MR19=0x0
MR24=0x8
MR25=0x0
channel 0 training pass!
channel 1 training pass!
change freq to 416MHz 0,1
Channel 0: LPDDR4,416MHz
Bus Width=32 Col=10 Bank=8 Row=16 CS=1 Die Bus-Width=16 Size=2048MB
Channel 1: LPDDR4,416MHz
Bus Width=32 Col=10 Bank=8 Row=16 CS=1 Die Bus-Width=16 Size=2048MB
256B stride
channel 0
CS = 0
MR0=0x18
MR4=0x1
MR5=0x1
MR8=0x10
MR12=0x72
MR14=0x72
MR18=0x0
MR19=0x0
MR24=0x8
MR25=0x0
channel 1
CS = 0
MR0=0x18
MR4=0x1
MR5=0x1
MR8=0x10
MR12=0x72
MR14=0x72
MR18=0x0
MR19=0x0
MR24=0x8
MR25=0x0
channel 0 training pass!
channel 1 training pass!
channel 0, cs 0, advanced training done
channel 1, cs 0, advanced training done
change freq to 856MHz 1,0
ch 0 ddrconfig = 0x101, ddrsize = 0x40
ch 1 ddrconfig = 0x101, ddrsize = 0x40
pmugrf_os_reg[2] = 0x32C1F2C1, stride = 0xD
ddr_set_rate to 328MHZ
ddr_set_rate to 666MHZ
ddr_set_rate to 928MHZ
channel 0, cs 0, advanced training done
channel 1, cs 0, advanced training done
ddr_set_rate to 416MHZ, ctl_index 0
ddr_set_rate to 856MHZ, ctl_index 1
support 416 856 328 666 928 MHz, current 856MHz
OUT
Boot1: 2019-03-14, version: 1.19
CPUId = 0x0
ChipType = 0x10, 324
SdmmcInit=2 0
BootCapSize=100000
UserCapSize=14910MB
FwPartOffset=2000 , 100000
mmc0:cmd5,20
SdmmcInit=0 0
BootCapSize=0
UserCapSize=30528MB
FwPartOffset=2000 , 0
StorageInit ok = 66115
SecureMode = 0
SecureInit read PBA: 0x4
SecureInit read PBA: 0x404
SecureInit read PBA: 0x804
SecureInit read PBA: 0xc04
SecureInit read PBA: 0x1004
SecureInit read PBA: 0x1404
SecureInit read PBA: 0x1804
SecureInit read PBA: 0x1c04
SecureInit ret = 0, SecureMode = 0
atags_set_bootdev: ret:(0)
GPT 0x3380ec0 signature is wrong
recovery gpt...
GPT 0x3380ec0 signature is wrong
recovery gpt fail!
LoadTrust Addr:0x4000
No find bl30.bin
No find bl32.bin
Load uboot, ReadLba = 2000
Load OK, addr=0x200000, size=0xded88
RunBL31 0x40000
NOTICE:  BL31: v1.3(debug):42583b6
NOTICE:  BL31: Built : 07:55:13, Oct 15 2019
NOTICE:  BL31: Rockchip release version: v1.1
INFO:    GICv3 with legacy support detected. ARM GICV3 driver initialized in EL3
INFO:    Using opteed sec cpu_context!
INFO:    boot cpu mask: 0
INFO:    plat_rockchip_pmu_init(1190): pd status 3e
INFO:    BL31: Initializing runtime services
WARNING: No OPTEE provided by BL2 boot loader, Booting device without OPTEE initialization. SMC`s destined for OPTEE will return SMC_UNK
ERROR:   Error initializing runtime service opteed_fast
INFO:    BL31: Preparing for EL3 exit to normal world
INFO:    Entry point address = 0x200000
INFO:    SPSR = 0x3c9


U-Boot 2020.07-armbian (Sep 23 2020 - 17:44:09 +0200)

 


So I'm copying my questions over from the comments area:

 

Topic 1 -> write directly to eMMC

 

Yesterday I tried to write directly to the eMMC and wanted to do this via Recovery Mode.
In the description there is no mention in the "Jumper" section that you have to set the P13 jumper to use this mode.
This was the only way my Windows PC recognized anything at all; after that I had to install the appropriate driver with the help of the "RK_DriverAssitant".
The matching files can be found here on GitHub...
https://github.com/rockchip-linux/tools/tree/master/windows

And that is exactly how far I got; now I am missing a hint as to which image files I have to flash.
With the RKDevTool you cannot simply flash the offered image file as it is.

Can you give a hint on how to flash the eMMC?
I could write a little instruction for you and all the others on how it works.

 

 

Topic 2 -> install/boot from eMMC

 

I managed to transfer my microSD image to the eMMC via "nand-sata-install", and it even boots from it, see here...

Welcome message with "Usage of /" -> eMMC 

Welcome to Armbian 20.08.7 Buster with Linux 5.8.11-rockchip64

No end-user support: work in progress

System load:   2%           	Up time:       9 min
Memory usage:  4% of 3.71G  	IP:            192.168.180.5
CPU temp:      52°C           	Usage of /:    19% of 15G

Last login: Mon Oct  5 13:32:05 2020 from 192.168.180.83

 

My storage: a SanDisk Ultra 32GB microSD (mmcblk0) and the 16GB eMMC (mmcblk1)...

root@helios64:~# fdisk -l
Disk /dev/mmcblk0: 29.7 GiB, 31914983424 bytes, 62333952 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x8043f398

Device         Boot Start      End  Sectors  Size Id Type
/dev/mmcblk0p1      32768 61710591 61677824 29.4G 83 Linux


Disk /dev/mmcblk1: 14.6 GiB, 15634268160 bytes, 30535680 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x104810f2

Device         Boot Start      End  Sectors  Size Id Type
/dev/mmcblk1p1      32768 30230303 30197536 14.4G 83 Linux

 

Here you can see the selected boot partition...

root@helios64:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            1.8G     0  1.8G   0% /dev
tmpfs           381M  5.2M  375M   2% /run
/dev/mmcblk1p1   15G  2.5G   11G  19% /
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
tmpfs           1.9G  4.0K  1.9G   1% /tmp
folder2ram      1.9G  4.3M  1.9G   1% /var/log
folder2ram      1.9G     0  1.9G   0% /var/tmp
folder2ram      1.9G  364K  1.9G   1% /var/lib/openmediavault/rrd
folder2ram      1.9G  728K  1.9G   1% /var/spool
folder2ram      1.9G   13M  1.9G   1% /var/lib/rrdcached
folder2ram      1.9G  4.0K  1.9G   1% /var/lib/monit
folder2ram      1.9G  1.3M  1.9G   1% /var/cache/samba
tmpfs           381M     0  381M   0% /run/user/0

 

For this I set jumper P11, which skips the SPI flash and goes directly to the eMMC.
But to boot from the eMMC I have to leave the microSD card with the image installed, otherwise the board won't start.
Is there a solution to remove the microSD card completely?
Unfortunately, changing the rootdev variable in /boot/armbianEnv.txt did not work or had no effect.
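For reference, the kind of change I attempted looks roughly like this. A sketch only; the device names and UUID are examples (assuming the eMMC root partition is /dev/mmcblk1p1 as in the fdisk output above), and it does not address the bootloader side of the problem:

# 1) Find the UUID of the eMMC root partition (assumed here to be /dev/mmcblk1p1)
sudo blkid /dev/mmcblk1p1

# 2) Set rootdev in /boot/armbianEnv.txt to that UUID, e.g.
#      rootdev=UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   (placeholder, use the value from blkid)
#    Note: the armbianEnv.txt that counts is the one on the /boot the bootloader actually reads
#    (the microSD card, as long as you still boot from it).
sudo nano /boot/armbianEnv.txt

# 3) Check that /etc/fstab on the eMMC rootfs mounts the same UUID on /
cat /etc/fstab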

 

 

Topic 3 -> OMV install via armbian-config ---> FIXED

 

Unfortunately, installing OMV via "armbian-config" never worked for me and always resulted in a crash of the system, so I had to flash the SD card again.
In the Helios4 section I read the hint from the team that it can be buggy and that you should do the following...

sudo apt-get remove chrony

wget -O - https://github.com/OpenMediaVault-Plugin-Developers/installScript/raw/master/install | sudo bash

 

On my last image installation I already had to use the command "apt-get remove chrony", because chrony did not start at boot and so the wrong date/time was reported by "date".
An "apt-get remove chrony" followed by an "apt-get install chrony" fixed this.
But I saw that OMV installed chrony during its installation, so I removed chrony again as recommended, and the OMV installation then ran properly for the first time.
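For anyone who runs into the same wrong date/time symptom, this is roughly how I checked and repaired it (a sketch; chrony is the time daemon shipped on Armbian Buster):

# Is the clock actually being synchronised?
timedatectl
systemctl status chrony

# If chrony is missing or broken, reinstall it and check its time sources
sudo apt-get install --reinstall chrony
chronyc sources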

 

Solution:

Works perfectly from firmware 20.08.8 onwards, directly via armbian-config.

 

 

Topic 4 -> USB-C Connector ---> FIXED

 

It is not possible to connect the included USB-C cable to the USB-C port when the back panel is on.
The panel adds too much distance, so the connector does not fit 100%.
So I have removed the board from the case and am working without the case until the system is ready.
You might want to recommend this before assembling the system; there are just too many mistakes and unclear points once everything is assembled.

 

Solution:

You have to carefully remove about 2 mm of the black plastic around the USB-C plug on the supplied cable, so that you can still see the silver contacts from the side when you plug it into the USB-C connector.


Came here exactly to ask about the issues mentioned in the second post. Glad to know these are already being addressed. Thank you @gprovost!

 

5 hours ago, flower said:

Is it possible to update through apt, or would you recommend flashing the new image?

Also curious about this one. It would be somewhat frustrating to have to reconfigure everything from scratch again.

 


4 hours ago, antsu said:

Also curious about this one. It would be somewhat frustrating to have to reconfigure everything from scratch again.


After 
https://github.com/armbian/build/commit/d18323fcc358ddd5bd0836c4853cbc281b6293c6
images were updated to v20.08.8; propagating the update packages around our servers takes up to 24h.

I don't know if this will solve the problems mentioned above.


And unfortunately the next problem already.
I couldn't get Docker to start successfully so far; all attempts to find a solution have failed.

Here is the error message when I try to start Docker, or rather the message that it was not installed successfully...

 

root@helios64:~# apt-get install docker-ce docker-ce-cli containerd.io
Reading package lists... Done
Building dependency tree
Reading state information... Done
Recommended packages:
  cgroupfs-mount | cgroup-lite pigz libltdl7
The following NEW packages will be installed:
  containerd.io docker-ce docker-ce-cli
0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/66.8 MB of archives.
After this operation, 338 MB of additional disk space will be used.
Selecting previously unselected package containerd.io.
(Reading database ... 56237 files and directories currently installed.)
Preparing to unpack .../containerd.io_1.3.7-1_arm64.deb ...
Unpacking containerd.io (1.3.7-1) ...
Selecting previously unselected package docker-ce-cli.
Preparing to unpack .../docker-ce-cli_5%3a19.03.13~3-0~debian-buster_arm64.deb ...
Unpacking docker-ce-cli (5:19.03.13~3-0~debian-buster) ...
Selecting previously unselected package docker-ce.
Preparing to unpack .../docker-ce_5%3a19.03.13~3-0~debian-buster_arm64.deb ...
Unpacking docker-ce (5:19.03.13~3-0~debian-buster) ...
Setting up containerd.io (1.3.7-1) ...
Setting up docker-ce-cli (5:19.03.13~3-0~debian-buster) ...
Setting up docker-ce (5:19.03.13~3-0~debian-buster) ...
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.
invoke-rc.d: initscript docker, action "start" failed.
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: activating (auto-restart) (Result: exit-code) since Mon 2020-10-05 20:43:13 CEST; 41ms ago
     Docs: https://docs.docker.com
  Process: 7544 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)
 Main PID: 7544 (code=exited, status=1/FAILURE)
dpkg: error processing package docker-ce (--configure):
 installed docker-ce package post-installation script subprocess returned error exit status 1
Processing triggers for man-db (2.8.5-2) ...
Processing triggers for systemd (241-7~deb10u4) ...
Errors were encountered while processing:
 docker-ce
E: Sub-process /usr/bin/dpkg returned an error code (1)

 

 

systemctl status docker.service

 

root@helios64:~# systemctl status docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Mon 2020-10-05 20:43:28 CEST; 41s ago
     Docs: https://docs.docker.com
  Process: 7953 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)
 Main PID: 7953 (code=exited, status=1/FAILURE)

Oct 05 20:43:28 helios64 systemd[1]: docker.service: Service RestartSec=2s expired, scheduling restart.
Oct 05 20:43:28 helios64 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
Oct 05 20:43:28 helios64 systemd[1]: Stopped Docker Application Container Engine.
Oct 05 20:43:28 helios64 systemd[1]: docker.service: Start request repeated too quickly.
Oct 05 20:43:28 helios64 systemd[1]: docker.service: Failed with result 'exit-code'.
Oct 05 20:43:28 helios64 systemd[1]: Failed to start Docker Application Container Engine.

 

 

journalctl -xe

 

root@helios64:~# journalctl -xe
-- Subject: A start job for unit docker.socket has begun execution
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit docker.socket has begun execution.
--
-- The job identifier is 2085.
Oct 05 20:43:28 helios64 systemd[1]: Listening on Docker Socket for the API.
-- Subject: A start job for unit docker.socket has finished successfully
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit docker.socket has finished successfully.
--
-- The job identifier is 2085.
Oct 05 20:43:28 helios64 systemd[1]: docker.service: Start request repeated too quickly.
Oct 05 20:43:28 helios64 systemd[1]: docker.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- The unit docker.service has entered the 'failed' state with result 'exit-code'.
Oct 05 20:43:28 helios64 systemd[1]: Failed to start Docker Application Container Engine.
-- Subject: A start job for unit docker.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit docker.service has finished with a failure.
--
-- The job identifier is 2011 and the job result is failed.
Oct 05 20:43:28 helios64 systemd[1]: docker.socket: Failed with result 'service-start-limit-hit'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- The unit docker.socket has entered the 'failed' state with result 'service-start-limit-hit'.
Oct 05 20:43:49 helios64 avahi-daemon[1245]: Record [helios64\032-\032Web\032control\032panel._http._tcp.local        IN        TXT "path=/index.php" ; ttl=4500] not fitting in le
Oct 05 20:43:49 helios64 avahi-daemon[1245]: Record [helios64\032-\032Web\032control\032panel._http._tcp.local        IN        SRV 0 0 80 helios64.local ; ttl=120] not fitting in
Oct 05 20:43:49 helios64 avahi-daemon[1245]: Record [helios64.local        IN        AAAA 2001:16b8:2874:7500:6662:66ff:fed0:216 ; ttl=120] not fitting in legacy unicast packet, d
Oct 05 20:43:49 helios64 avahi-daemon[1245]: Record [helios64.local        IN        AAAA fd00::6662:66ff:fed0:216 ; ttl=120] not fitting in legacy unicast packet, dropping.
Oct 05 20:43:49 helios64 avahi-daemon[1245]: Record [helios64.local        IN        A 192.168.180.5 ; ttl=120] not fitting in legacy unicast packet, dropping.
Oct 05 20:44:01 helios64 CRON[8051]: pam_unix(cron:session): session opened for user root by (uid=0)
Oct 05 20:44:01 helios64 CRON[8052]: (root) CMD (/usr/sbin/omv-ionice >/dev/null 2>&1)
Oct 05 20:44:01 helios64 CRON[8051]: pam_unix(cron:session): session closed for user root
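For completeness, this is roughly how I would try to get at the underlying dockerd error next (a generic troubleshooting sketch, nothing Helios64-specific):

# Stop the failing unit first, then run dockerd by hand to see the real error
sudo systemctl stop docker.service docker.socket
sudo dockerd --debug
# (Ctrl+C to stop; the last lines before it exits usually name the missing
#  kernel feature, storage driver or iptables problem.)

# Full unit log without the start-limit noise
sudo journalctl -u docker.service --no-pager -n 100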

 


10 minutes ago, TDCroPower said:

And unfortunately the next problem already.
I couldn't get Docker to start successfully so far; all attempts to find a solution have failed.

Here is the error message when I try to start Docker, or rather the message that it was not installed successfully...


I just tested Docker on a NanoPi M4 (similar hardware - boot logs) with installation via armbian-config -> Software -> Softy -> Docker

 


root@nanopim4:~# systemctl status docker.service
● docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2020-10-05 19:02:20 UTC; 1min 4s ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 91078 (dockerd)
      Tasks: 15
     Memory: 39.0M
     CGroup: /system.slice/docker.service
             └─91078 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Oct 05 19:02:16 nanopim4 dockerd[91078]: time="2020-10-05T19:02:16.096426425Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 05 19:02:19 nanopim4 dockerd[91078]: time="2020-10-05T19:02:19.602900593Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Oct 05 19:02:19 nanopim4 dockerd[91078]: time="2020-10-05T19:02:19.602960675Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Oct 05 19:02:19 nanopim4 dockerd[91078]: time="2020-10-05T19:02:19.603299588Z" level=info msg="Loading containers: start."
Oct 05 19:02:20 nanopim4 dockerd[91078]: time="2020-10-05T19:02:20.243371551Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip>
Oct 05 19:02:20 nanopim4 dockerd[91078]: time="2020-10-05T19:02:20.530924661Z" level=info msg="Loading containers: done."
Oct 05 19:02:20 nanopim4 dockerd[91078]: time="2020-10-05T19:02:20.606154243Z" level=info msg="Docker daemon" commit=4484c46 graphdriver(s)=overlay2 version=19.03.13
Oct 05 19:02:20 nanopim4 dockerd[91078]: time="2020-10-05T19:02:20.606769361Z" level=info msg="Daemon has completed initialization"
Oct 05 19:02:20 nanopim4 dockerd[91078]: time="2020-10-05T19:02:20.723698041Z" level=info msg="API listen on /run/docker.sock"
Oct 05 19:02:20 nanopim4 systemd[1]: Started Docker Application Container Engine.

 


This problem would probably be a question for the OMV forums. They change the system beyond recognition (it's not clean Debian anymore once you install OMV), and we don't even try to fix their bugs. They do that.


22 minutes ago, Igor said:


After 
https://github.com/armbian/build/commit/d18323fcc358ddd5bd0836c4853cbc281b6293c6
images were updated to v20.08.8; propagating the update packages around our servers takes up to 24h.

I don't know if this will solve the problems mentioned above.

 

I'm looking forward to seeing how much I notice the changes; I'm already on "20.08.7".

 

Is it normal that "uname -a" does not show the version in use?

 

Welcome to Armbian 20.08.7 Buster with Linux 5.8.11-rockchip64

No end-user support: work in progress

System load:   2%           	Up time:       54 min
Memory usage:  5% of 3.71G  	IP:            192.168.180.5
CPU temp:      65°C           	Usage of /:    16% of 15G

[ General system configuration (beta): armbian-config ]

Last login: Mon Oct  5 20:21:08 2020 from 192.168.180.83
root@helios64:~# uname -r
5.8.11-rockchip64
root@helios64:~# uname -a
Linux helios64 5.8.11-rockchip64 #20.08.4 SMP PREEMPT Wed Sep 23 17:51:13 CEST 2020 aarch64 GNU/Linux
root@helios64:~#
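As far as I understand it, the "#20.08.4" in uname -a is just the build tag of the installed kernel package, while the welcome banner shows the Armbian userspace/BSP version. A quick way to compare the two (a small sketch):

# Armbian release info (what the welcome banner reports)
cat /etc/armbian-release

# Versions of the installed Armbian kernel/BSP packages
dpkg -l | grep -E 'linux-image|linux-dtb|armbian' | awk '{print $2, $3}'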

 


1 hour ago, Igor said:


After 
https://github.com/armbian/build/commit/d18323fcc358ddd5bd0836c4853cbc281b6293c6
images were updated to v20.08.8; propagating the update packages around our servers takes up to 24h.

I don't know if this will solve the problems mentioned above.

thank you :)

 

With this version I am able to build my RAID10 array - it's already at 5%.

The old one locked up at 0.3% - RAID5 and 6 worked fine though (through the command line and OMV).

 

I have tried Docker with OMV, which worked fine on the old image (OMV installed through Softy, Docker installed through OMV).

But I plan to go without OMV.


3 hours ago, TDCroPower said:

Is it normal that "uname -a" does not show the version in use?


That's weird. You should be on 5.8.12 ... in a day on 5.8.13 ... Have you done apt update and apt upgrade? Is apt pulling down from our servers?

 

2 hours ago, flower said:

But I plan to go without OMV.


OMV is also way too bulky for my taste. You need Debian or Ubuntu, nfsd, perhaps Samba, and for apps you use Docker.
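For reference, that minimal stack could look roughly like this. A sketch only; the package names, export path and subnet are just examples:

# NFS and Samba from the distribution repositories
sudo apt-get install nfs-kernel-server samba

# Example NFS export (adjust the path and subnet to your setup)
echo '/srv/data 192.168.180.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra

# Docker either via armbian-config (Software -> Softy -> Docker) or the upstream script
curl -fsSL https://get.docker.com | sudo sh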


11 minutes ago, Igor said:


That's weird. You should be on 5.8.12 ... in a day on 5.8.13 ... Have you done apt update and apt upgrade? Is apt pulling down from our servers?


I will flash the fresh new image later; the image file is ready on the Armbian download site but not yet available via apt-get.

I have updated with "apt-get update && apt-get upgrade" + "reboot".


I have tested both the legacy and current images, and I can definitely see some stability improvements with 20.08.8, but unfortunately none of the images is able to fulfil my needs.

The "current" image, with Linux 5.8.13, seems to be the most stable so far, no crashes and everything just works, except I can't for the life of me get zfs-dkms to install, which is a deal breaker for me.

On "legacy", ZFS builds fine, but this image has a worrying bug that's not present in "current": Trying to shutdown always causes a kernel panic,  which causes the board to remain on with fans at full speed, which means in the occasion of a power cut the system would stay on until running out of battery, instead of performing a graceful shutdown.


Some short feedback on the new 20.08.8...

It runs much better and more stably so far.
Installing OMV and Docker via armbian-config worked fine... perfect!

Writing the image from microSD to eMMC with "nand-sata-install" or "armbian-config" >> System >> Install also worked, but I couldn't boot from eMMC anymore.


ZFS builds fine?

ZFS appears as an option in storage, but I am getting all sorts of load errors.

 

My experience is not as fruitful, most likely due to my naivete.

Could you point me in the direction of a guide?

I am shooting blanks.

 

Thank You


6 hours ago, antsu said:

The "current" image, with Linux 5.8.13, seems to be the most stable so far, no crashes and everything just works, except I can't for the life of me get zfs-dkms to install, which is a deal breaker for me.

 

*** ZFS Version: zfs-0.8.3-1ubuntu12.4
*** Compatible Kernels: 2.6.32 - 5.4

For 5.8.y you need at least 0.8.4 from sources: https://github.com/openzfs/zfs/issues

 

DKMS on Debian / Ubuntu is simply too old. Install the attached .deb package (I tested this on Ubuntu Focal) and it will work:
 


sudo armbian-config main=Software selection=Headers_install
dpkg -i zfs-dkms_0.8.4-1ubuntu11_all.deb 
(Reading database ... 66592 files and directories currently installed.)
Preparing to unpack zfs-dkms_0.8.4-1ubuntu11_all.deb ...

------------------------------
Deleting module version: 0.8.3
completely from the DKMS tree.
------------------------------
Done.
Unpacking zfs-dkms (0.8.4-1ubuntu11) over (0.8.3-1ubuntu12.4) ...
Setting up zfs-dkms (0.8.4-1ubuntu11) ...
Loading new zfs-0.8.4 DKMS files...
Building for 5.8.13-rockchip64
Building initial module for 5.8.13-rockchip64
Done.

zavl.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/5.8.13-rockchip64/updates/dkms/

znvpair.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/5.8.13-rockchip64/updates/dkms/

zunicode.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/5.8.13-rockchip64/updates/dkms/

zcommon.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/5.8.13-rockchip64/updates/dkms/

zfs.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/5.8.13-rockchip64/updates/dkms/

icp.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/5.8.13-rockchip64/updates/dkms/

zlua.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/5.8.13-rockchip64/updates/dkms/

spl.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/5.8.13-rockchip64/updates/dkms/

depmod.....

DKMS: install completed.
Processing triggers for initramfs-tools (0.136ubuntu6.3) ...
update-initramfs: Generating /boot/initrd.img-5.8.13-rockchip64
update-initramfs: Converting to u-boot format

modprobe zfs
[319920.214635] ZFS: Loaded module v0.8.4-1ubuntu11, ZFS pool version 5000, ZFS filesystem version 5

 

 

zfs-dkms_0.8.4-1ubuntu11_all.deb


To summarize the current status:

 


 

Feature Support Status (Legacy vs. Current)

Shutdown
  Legacy:  Issue - fails to shut down the PMIC and triggers a crash (the HDDs are already parked at that point)
  Current: OK

Reboot
  Legacy:  Issue - similar to shutdown, but the watchdog triggers the reboot, so it appears successful
  Current: OK

Suspend to RAM
  Legacy:  Not supported yet - fails to resume operation after wake-up
  Current: Not supported yet - the USB host controller refuses to enter suspend mode

2.5G Ethernet
  Legacy:  OK
  Current: Performance issue - slightly improved with TX offload disabled

Main Power/UPS Status
  Legacy:  OK - status can be read from sysfs
  Current: OK - status can be read from sysfs

Battery Charging Status
  Legacy:  Not supported
  Current: OK - charging and full-charge status can be read from sysfs

UPS configuration
  Legacy:  Not supported yet - needs a user-space tool to monitor power status and trigger shutdown
  Current: Not supported yet - needs a user-space tool to monitor power status and trigger shutdown

USB Type C - Host
  Legacy:  OK
  Current: Not supported yet - there are some issues with the fusb302 driver and the USB host controller driver

USB Type C - Gadget
  Legacy:  OK
  Current: Not supported yet - there are some issues with the fusb302 driver and the USB controller driver

USB Type C - DisplayPort
  Legacy:  OK
  Current: Not supported yet - there are some issues with the fusb302 driver and the DisplayPort alternate mode driver

eMMC Install
  Legacy:  Not supported yet - the bootloader is unable to load the rootfs from eMMC
  Current: Not supported yet - the bootloader is unable to load the rootfs from eMMC

SPI Boot
  Legacy:  Not supported yet
  Current: Not supported yet

Recovery key
  Legacy:  Not supported yet - the system can enter maskrom mode but needs a special OS image and rkdevelop
  Current: Not supported yet - the system can enter maskrom mode but needs a special OS image and rkdevelop

 

We are still working on the eMMC issue.
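For the rows in the table above marked "can be read from sysfs", reading the values looks roughly like this. The exact power_supply names depend on the running kernel and device tree, so treat the paths as examples:

# List the power supplies the kernel has registered
ls /sys/class/power_supply/

# Main power presence and battery charging state (standard power_supply attributes;
# supplies that lack a given attribute are simply skipped)
grep . /sys/class/power_supply/*/online 2>/dev/null
grep . /sys/class/power_supply/*/status 2>/dev/null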

 

 


18 hours ago, TDCroPower said:

Topic 4 -> USB-C Connector

 

It is not possible to connect the included USB-C cable to the USB-C port when the back panel is on.
The panel adds too much distance, so the connector does not fit 100%.
So I have removed the board from the case and am working without the case until the system is ready.

 

I can confirm the issue with the USB-C cable. It does not fit into the USB-C port (the reason may just be the additional layer introduced by the label around the ports).

The issue can be resolved by cutting away about 0.5 mm of the plastic around the plug at the end of the USB cable. It will then easily fit into the port.

 

Otherwise I am very impressed by the Helios64 - it is very well engineered - my congratulations to Kobol!

 

I am using the latest Armbian Buster image (Linux helios64 5.8.13-rockchip64 #20.08.8 SMP PREEMPT Mon Oct 5 15:59:02 CEST 2020 aarch64 GNU/Linux) - no issues so far in my use cases.


Hi,

 

I have some questions that I haven't found answered in the wiki yet:

 

1. Boot: which partitioning scheme does U-Boot support, GPT or MBR?

 

2. Kernel: which patches are applied to the vanilla 5.8.13 kernel? I've already got: add-board-helios64.patch, helios64-remove-pcie-ep-gpios.patch, zzz-0005-remove-overclock-from-helios64.patch

For now I'll be using the binaries from the latest image.

 

 


26 minutes ago, alchemist said:

1. Boot: which partitioning scheme does U-Boot support, GPT or MBR?


AFAIK both are supported, but the default is MBR.
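A quick way to check what a given image or card actually uses ("dos" means MBR); the device name is just an example:

sudo fdisk -l /dev/mmcblk0 | grep 'Disklabel type'
# or
sudo parted -s /dev/mmcblk0 print | grep 'Partition Table'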

 

26 minutes ago, alchemist said:

2. Kernel: which patches are applied to the vanilla 5.8.13 kernel? I've already got: add-board-helios64.patch, helios64-remove-pcie-ep-gpios.patch, zzz-0005-remove-overclock-from-helios64.patch

For now I'll be using the binaries from the latest image.

 

https://github.com/armbian/build/tree/master/patch/kernel/rockchip64-current
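If you want the exact list for a given build, it can be pulled straight out of the build repository, e.g. (a sketch):

git clone --depth=1 https://github.com/armbian/build
ls build/patch/kernel/rockchip64-current/
# only the Helios64-specific ones
ls build/patch/kernel/rockchip64-current/ | grep -i helios64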


Is there a way to update from legacy to current? 

 

I am on legacy atm and have nearly finished that setup. 

As soon as the 2.5Gbit performance problems are solved, I would like to switch though.

 

If not, I would reinstall now with current and hope the performance problem will be solved over time.

 

Btw, I can only second ebin-dev. I am really impressed by the quality of this board and case. I really love it.


4 hours ago, Igor said:

 

*** ZFS Version: zfs-0.8.3-1ubuntu12.4
*** Compatible Kernels: 2.6.32 - 5.4

For 5.8.y you need at least 0.8.4 from sources: https://github.com/openzfs/zfs/issues

 

DKMS on Debian / Ubuntu is simply too old. Install the attached .deb package (I tested this on Ubuntu Focal) and it will work:
 


Thank you. To be clear, I did try to compile the latest ZFS from source, but ran into some errors that my internet searches suggest have to do with the GCC version in Debian Buster. I was able to get it to build successfully with the Debian Bullseye image (which as far as I can tell uses GCC 10), but then there's no OMV...

I'll give that package a shot, thanks again.


1 minute ago, antsu said:

I'll give that package a shot, thanks again.


I have already pushed both into our repository (buster and focal), but it will take a day for them to reach the package database.


@Igor I had the same error with the package you posted as when manually compiling from source (which, again, seems to be related to the GCC version):
 

  MODPOST /var/lib/dkms/zfs/0.8.4/build/module/Module.symvers
ERROR: modpost: "__stack_chk_guard" [/var/lib/dkms/zfs/0.8.4/build/module/zfs/zfs.ko] undefined!
ERROR: modpost: "__stack_chk_guard" [/var/lib/dkms/zfs/0.8.4/build/module/zcommon/zcommon.ko] undefined!
ERROR: modpost: "__stack_chk_guard" [/var/lib/dkms/zfs/0.8.4/build/module/unicode/zunicode.ko] undefined!
ERROR: modpost: "__stack_chk_guard" [/var/lib/dkms/zfs/0.8.4/build/module/spl/spl.ko] undefined!
ERROR: modpost: "__stack_chk_guard" [/var/lib/dkms/zfs/0.8.4/build/module/nvpair/znvpair.ko] undefined!
ERROR: modpost: "__stack_chk_guard" [/var/lib/dkms/zfs/0.8.4/build/module/lua/zlua.ko] undefined!
ERROR: modpost: "__stack_chk_guard" [/var/lib/dkms/zfs/0.8.4/build/module/icp/icp.ko] undefined!
ERROR: modpost: "__stack_chk_guard" [/var/lib/dkms/zfs/0.8.4/build/module/avl/zavl.ko] undefined!
make[4]: *** [scripts/Makefile.modpost:111: /var/lib/dkms/zfs/0.8.4/build/module/Module.symvers] Error 1
make[4]: *** Deleting file '/var/lib/dkms/zfs/0.8.4/build/module/Module.symvers'
make[3]: *** [Makefile:1665: modules] Error 2
make[3]: Leaving directory '/usr/src/linux-headers-5.8.13-rockchip64'
make[2]: *** [Makefile:30: modules] Error 2
make[2]: Leaving directory '/var/lib/dkms/zfs/0.8.4/build/module'
make[1]: *** [Makefile:808: all-recursive] Error 1
make[1]: Leaving directory '/var/lib/dkms/zfs/0.8.4/build'
make: *** [Makefile:677: all] Error 2

 

Just to test it, I've added the Debian testing repo to the Buster image and installed build-essential from there, which contains GCC 10. With GCC 10 your package installs successfully, but then the testing repo breaks OMV compatibility.

Guess I'll just stay on legacy for the time being.
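For anyone comparing toolchains, this is roughly how I compared the compiler the running kernel was built with against the one DKMS uses on the host (a small sketch):

# Compiler recorded in the running kernel
cat /proc/version

# Compiler available to DKMS on the host
gcc --version | head -n1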


7 hours ago, Igor said:

This one works "even less" (if that makes sense) since it's missing this patch and fails at an earlier stage in the compilation process. (With the patch applied it fails at the same point I mentioned in my previous post).

 

Unrelated, but I should also note that today, after configuring my Helios64 with the legacy Armbian image and letting it run for a few hours with a moderate load, it crashed again with a kernel panic. Unfortunately I wasn't able to capture the panic messages as I had iftop open on the serial console at the time and it all mixed together beautifully.


3 hours ago, antsu said:

This one works "even less" (if that makes sense) since it's missing this patch and fails at an earlier stage in the compilation process. (With the patch applied it fails at the same point I mentioned in my previous post).

 

Unrelated, but I should also note that today, after configuring my Helios64 with the legacy Armbian image and letting it run for a few hours with a moderate load, it crashed again with a kernel panic. Unfortunately I wasn't able to capture the panic messages as I had iftop open on the serial console at the time and it all mixed together beautifully.

So with Armbian 20.08.8 LEGACY the system still crashed? Let us know if you manage to capture the log and find a way to reproduce the crash.

 

 

3 hours ago, TDCroPower said:

Did the team change anything about the boot variant with the 20.08.8?
With the previous version I was able to boot from the eMMC with a detour and now I can't boot anymore, although the image was transferred via nand-sata-install!

There is a change that renames the device tree to follow the mainline device tree naming:

from /boot/rockchip/rk3399-helios64.dtb to /boot/rockchip/rk3399-kobol-helios64.dtb
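If an existing install still points at the old name, it should show up in the boot environment. A sketch only; the variable is usually called fdtfile, adjust if your armbianEnv.txt differs:

# Which device tree will the boot script load, and which ones are present?
grep fdtfile /boot/armbianEnv.txt
ls /boot/dtb/rockchip/ | grep helios64

# If needed, update it to the new name, e.g.
#   fdtfile=rockchip/rk3399-kobol-helios64.dtb
sudo nano /boot/armbianEnv.txt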

 

Did you use a fresh 20.08.8 image and then transfer it to eMMC via nand-sata-install?

