
Crazy instability :(


Benjamin Arntzen


Hi, there are already some threads open on this topic, so it's not very efficient if things are spread all over the place.

 

You can try one of the following two tweaks:

 

1/ Change the CPU governor to performance, and optionally reduce the CPU speed to keep the system running cool.

 

armbian-config > System > CPU

 

Minimum CPU speed = 1200000

Maximum CPU speed = 1200000

CPU governor = performance

 

So far this tweak is the one that has shown the best results against instability linked to DVFS.
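
If you prefer doing this from a shell instead of armbian-config, the same settings can be written to the cpufreq config file (a minimal sketch, assuming the stock Armbian /etc/default/cpufrequtils file):

# /etc/default/cpufrequtils -- equivalent of the armbian-config values above
ENABLE="true"
GOVERNOR=performance
MIN_SPEED=1200000
MAX_SPEED=1200000

Reboot (or restart the cpufrequtils service) afterwards so the values take effect.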

 

2/ A new tweak for which we are collecting feedback. For now, test it without the above tweak.

 

Add the following lines to the beginning of /boot/boot.cmd:

 

regulator dev vdd_log
regulator value 930000
regulator dev vdd_center
regulator value 950000

 

then run:

 

mkimage -C none -A arm -T script -d /boot/boot.cmd /boot/boot.scr

 

and reboot. Check whether it improves stability.
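
After rebooting, you can sanity-check that the new values were applied through the kernel's regulator sysfs entries (a rough sketch; the exact regulator names and indices depend on the device tree, so adjust the grep if needed):

# print each regulator's name and current voltage, then filter for the two tweaked rails
for r in /sys/class/regulator/regulator.*; do
  printf '%s %s uV\n' "$(cat "$r/name")" "$(cat "$r/microvolts" 2>/dev/null)"
done | grep -E 'vdd_(log|center)'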

 

 

Thanks for giving us feedback on how each of the above tweaks impacts the stability of your setup.

 

Link to comment
Share on other sites

Mhh, so (per the other thread) I tried the 2nd proposal (without luck: I still get freezes, as stated in the other thread), BUT nothing was said about reverting the CPU governor (so I was still in "performance" mode with the same min/max values). To what should I revert the CPU governor values? (The minimum possible value / maximum possible value + powersave?)

Link to comment
Share on other sites

Could you guys update to the latest kernel (linux-image-current-rockchip64_21.02.3_arm64 / LK 5.10.21) and revert to the ondemand governor with the full frequency range, in case you were on performance mode?
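
For reference, a rough command-line sketch of that upgrade (assuming the standard Armbian apt repositories and that kernel upgrades are not frozen; armbian-config can do the same):

sudo apt update
sudo apt install --only-upgrade linux-image-current-rockchip64   # kernel package named above
sudo reboot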

 

Also remove any vdd voltage tweak you might have put in /boot/boot.cmd. If you had the vdd tweak, you will need to rebuild the u-boot boot script after removing the lines from /boot/boot.cmd:

 

mkimage -C none -A arm -T script -d /boot/boot.cmd /boot/boot.scr

 

Let us know whether the new kernel improves stability.

Link to comment
Share on other sites

Sorry if I'm hijacking this thread, but the title fits the problem, and I didn't want to create yet another thread about stability issues.

I'm also having similar problems with my Helios64. It will consistently crash and reboot when I/O is stressed. The two most common scenarios where this happens are when running an Rsync job that I use to sync my H64 with my remote backup NAS (a Helios4 that still works like a champ), and when performing a ZFS scrub. On one isolated occasion, I was also able to trigger a crash by executing an Ansible playbook that made several VMs generate I/O activity simultaneously (the VMs' disks are served by the H64).

Since the Rsync crash is consistent and predictable, I managed to capture the crash output from the serial console on two occasions and under different settings, listed below:

 

Kernel 5.10.16, performance governor, frequency locked to 1.2GHz:

Spoiler

[227720.764931] Insufficient stack space to handle exception!
[227720.764934] ESR: 0x96000047 -- DABT (current EL)
[227720.764935] FAR: 0xffff800011e77fd0
[227720.764936] Task stack:     [0xffff800011e78000..0xffff800011e7c000]
[227720.764938] IRQ stack:      [0xffff800011c10000..0xffff800011c14000]
[227720.764939] Overflow stack: [0xffff0000f77b22b0..0xffff0000f77b32b0]
[227720.764941] CPU: 5 PID: 38 Comm: ksoftirqd/5 Tainted: P         C OE     5.10.16-rockchip64 #21.02.2
[227720.764942] Hardware name: Helios64 (DT)
[227720.764943] pstate: 40000085 (nZcv daIf -PAN -UAO -TCO BTYPE=--)
[227720.764944] pc : kfree+0x4/0x480
[227720.764945] lr : __free_slab+0xac/0x1a8
[227720.764946] sp : ffff800011e78060
[227720.764948] x29: ffff800011e78060 x28: 0000000000200001
[227720.764951] x27: fffffe00012d3c80 x26: ffff0000534f2f80
[227720.764954] x25: ffff000040001700 x24: fffffffffffff000
[227720.764957] x23: 0000000000000000 x22: 0000000000200000
[227720.764959] x21: 0000000000000001 x20: ffff000040000c00
[227720.764962] x19: fffffe00012d3c80 x18: 0000000000000000
[227720.764964] x17: 0000000000000000 x16: 0000000000000000
[227720.764967] x15: 00000bd82b5ac7bf x14: 0000000000000001
[227720.764969] x13: 00000000f0000000 x12: 0000000000000001
[227720.764972] x11: 00000000f0000000 x10: 0000000000000001
[227720.764975] x9 : fffffe0001d823c8 x8 : 0000000000000001
[227720.764978] x7 : ffff000040000c20 x6 : 0000000000000000
[227720.764980] x5 : 0000000000000000 x4 : ffff000040001700
[227720.764983] x3 : 0000000000000020 x2 : ffff0000403b9d00
[227720.764985] x1 : 1ffff00000000000 x0 : ffff0000534f0f00
[227720.764988] Kernel panic - not syncing: kernel stack overflow
[227720.764990] CPU: 5 PID: 38 Comm: ksoftirqd/5 Tainted: P         C OE     5.10.16-rockchip64 #21.02.2
[227720.764990] Hardware name: Helios64 (DT)
[227720.764992] Call trace:
[227720.764993]  dump_backtrace+0x0/0x200
[227720.764994]  show_stack+0x18/0x68
[227720.764995]  dump_stack+0xcc/0x124
[227720.764996]  panic+0x174/0x374
[227720.764997]  nmi_panic+0x64/0x98
[227720.764997]  handle_bad_stack+0x124/0x150
[227720.764999]  __bad_stack+0x90/0x94
[227720.765000]  kfree+0x4/0x480
[227720.765001]  discard_slab+0x70/0xb0
[227720.765002]  __slab_free+0x394/0x3e0
[227720.765003]  kfree+0x470/0x480
[227720.765004]  __free_slab+0xac/0x1a8
[227720.765005]  discard_slab+0x70/0xb0
[227720.765006]  __slab_free+0x394/0x3e0
[227720.765007]  kfree+0x470/0x480
[227720.765008]  __free_slab+0xac/0x1a8
[227720.765009]  discard_slab+0x70/0xb0
[227720.765010]  __slab_free+0x394/0x3e0
[227720.765011]  kfree+0x470/0x480
[227720.765012]  __free_slab+0xac/0x1a8
[227720.765013]  discard_slab+0x70/0xb0
[227720.765014]  __slab_free+0x394/0x3e0
[227720.765015]  kfree+0x470/0x480
[227720.765016]  __free_slab+0xac/0x1a8
[227720.765017]  discard_slab+0x70/0xb0
[227720.765018]  __slab_free+0x394/0x3e0
[227720.765020]  kfree+0x470/0x480
[227720.765020]  __free_slab+0xac/0x1a8
[227720.765022]  discard_slab+0x70/0xb0
[227720.765023]  __slab_free+0x394/0x3e0
[227720.765024]  kfree+0x470/0x480
[227720.765025]  __free_slab+0xac/0x1a8
[227720.765026]  discard_slab+0x70/0xb0
[227720.765027]  __slab_free+0x394/0x3e0
[227720.765028]  kfree+0x470/0x480
[227720.765029]  __free_slab+0xac/0x1a8
[227720.765030]  discard_slab+0x70/0xb0
[227720.765031]  __slab_free+0x394/0x3e0
[227720.765032]  kfree+0x470/0x480
[227720.765033]  __free_slab+0xac/0x1a8
[227720.765034]  discard_slab+0x70/0xb0
[227720.765036]  __slab_free+0x394/0x3e0
[227720.765037]  kfree+0x470/0x480
[227720.765038]  __free_slab+0xac/0x1a8
[227720.765039]  discard_slab+0x70/0xb0
[227720.765040]  __slab_free+0x394/0x3e0
[227720.765041]  kfree+0x470/0x480
[227720.765042]  __free_slab+0xac/0x1a8
[227720.765043]  discard_slab+0x70/0xb0
[227720.765044]  __slab_free+0x394/0x3e0
[227720.765045]  kfree+0x470/0x480
[227720.765046]  __free_slab+0xac/0x1a8
[227720.765047]  discard_slab+0x70/0xb0
[227720.765048]  __slab_free+0x394/0x3e0
[227720.765049]  kfree+0x470/0x480
[227720.765051]  __free_slab+0xac/0x1a8
[227720.765051]  discard_slab+0x70/0xb0
[227720.765053]  __slab_free+0x394/0x3e0
[227720.765054]  kfree+0x470/0x480
[227720.765055]  __free_slab+0xac/0x1a8
[227720.765056]  discard_slab+0x70/0xb0
[227720.765057]  __slab_free+0x394/0x3e0
[227720.765058]  kfree+0x470/0x480
[227720.765059]  __free_slab+0xac/0x1a8
[227720.765060]  discard_slab+0x70/0xb0
[227720.765060]  __slab_free+0x394/0x3e0
[227720.765061]  kfree+0x470/0x480
[227720.765062]  __free_slab+0xac/0x1a8
[227720.765063]  discard_slab+0x70/0xb0
[227720.765064]  __slab_free+0x394/0x3e0
[227720.765065]  kfree+0x470/0x480
[227720.765066]  __free_slab+0xac/0x1a8
[227720.765067]  discard_slab+0x70/0xb0
[227720.765068]  __slab_free+0x394/0x3e0
[227720.765069]  kfree+0x470/0x480
[227720.765070]  __free_slab+0xac/0x1a8
[227720.765071]  discard_slab+0x70/0xb0
[227720.765072]  __slab_free+0x394/0x3e0
[227720.765073]  kfree+0x470/0x480
[227720.765074]  __free_slab+0xac/0x1a8
[227720.765076]  discard_slab+0x70/0xb0
[227720.765077]  __slab_free+0x394/0x3e0
[227720.765078]  kfree+0x470/0x480
[227720.765079]  __free_slab+0xac/0x1a8
[227720.765080]  discard_slab+0x70/0xb0
[227720.765081]  __slab_free+0x394/0x3e0
[227720.765082]  kfree+0x470/0x480
[227720.765083]  __free_slab+0xac/0x1a8
[227720.765084]  discard_slab+0x70/0xb0
[227720.765085]  __slab_free+0x394/0x3e0
[227720.765086]  kfree+0x470/0x480
[227720.765087]  __free_slab+0xac/0x1a8
[227720.765088]  discard_slab+0x70/0xb0
[227720.765089]  __slab_free+0x394/0x3e0
[227720.765090]  kfree+0x470/0x480
[227720.765092]  __free_slab+0xac/0x1a8
[227720.765093]  discard_slab+0x70/0xb0
[227720.765094]  __slab_free+0x394/0x3e0
[227720.765095]  kfree+0x470/0x480
[227720.765096]  __free_slab+0xac/0x1a8
[227720.765097]  discard_slab+0x70/0xb0
[227720.765098]  __slab_free+0x394/0x3e0
[227720.765099]  kfree+0x470/0x480
[227720.765100]  __free_slab+0xac/0x1a8
[227720.765101]  discard_slab+0x70/0xb0
[227720.765102]  __slab_free+0x394/0x3e0
[227720.765103]  kfree+0x470/0x480
[227720.765104]  __free_slab+0xac/0x1a8
[227720.765105]  discard_slab+0x70/0xb0
[227720.765107]  __slab_free+0x394/0x3e0
[227720.765107]  kfree+0x470/0x480
[227720.765108]  __free_slab+0xac/0x1a8
[227720.765109]  discard_slab+0x70/0xb0
[227720.765110]  __slab_free+0x394/0x3e0
[227720.765111]  kfree+0x470/0x480
[227720.765112]  __free_slab+0xac/0x1a8
[227720.765114]  discard_slab+0x70/0xb0
[227720.765114]  __slab_free+0x394/0x3e0
[227720.765115]  kfree+0x470/0x480
[227720.765116]  __free_slab+0xac/0x1a8
[227720.765118]  discard_slab+0x70/0xb0
[227720.765119]  __slab_free+0x394/0x3e0
[227720.765120]  kfree+0x470/0x480
[227720.765121]  __free_slab+0xac/0x1a8
[227720.765122]  discard_slab+0x70/0xb0
[227720.765123]  __slab_free+0x394/0x3e0
[227720.765124]  kfree+0x470/0x480
[227720.765125]  __free_slab+0xac/0x1a8
[227720.765127]  discard_slab+0x70/0xb0
[227720.765128]  __slab_free+0x394/0x3e0
[227720.765129]  kfree+0x470/0x480
[227720.765130]  __free_slab+0xac/0x1a8
[227720.765131]  discard_slab+0x70/0xb0
[227720.765132]  __slab_free+0x394/0x3e0
[227720.765133]  kfree+0x470/0x480
[227720.765134]  __free_slab+0xac/0x1a8
[227720.765135]  discard_slab+0x70/0xb0
[227720.765136]  __slab_free+0x394/0x3e0
[227720.765137]  kfree+0x470/0x480
[227720.765139]  __free_slab+0xac/0x1a8
[227720.765139]  discard_slab+0x70/0xb0
[227720.765141]  __slab_free+0x394/0x3e0
[227720.765142]  kfree+0x470/0x480
[227720.765143]  __free_slab+0xac/0x1a8
[227720.765144]  discard_slab+0x70/0xb0
[227720.765145]  unfreeze_partials.isra.93+0x1d0/0x250
[227720.765146]  put_cpu_partial+0x18c/0x230
[227720.765147]  __slab_free+0x2cc/0x3e0
[227720.765148]  kfree+0x470/0x480
[227720.765149]  __d_free_external+0x20/0x40
[227720.765151]  rcu_core+0x25c/0x820
[227720.765151]  rcu_core_si+0x10/0x20
[227720.765153]  efi_header_end+0x160/0x41c
[227720.765153]  run_ksoftirqd+0x58/0x70
[227720.765155]  smpboot_thread_fn+0x200/0x238
[227720.765156]  kthread+0x140/0x150
[227720.765156]  ret_from_fork+0x10/0x34
[227720.765177] SMP: stopping secondary CPUs
[227720.765178] Kernel Offset: disabled
[227720.765179] CPU features: 0x0240022,6100200c
[227720.765180] Memory Limit: none

 

[New boot sequence starts...]

 

 

Kernel 5.10.21, ondemand governor, dynamic stock frequencies (settings recommended in this and other stability threads):

Spoiler

[96770.340605] Insufficient stack space to handle exception!
[96770.340607] ESR: 0x96000047 -- DABT (current EL)
[96770.340609] FAR: 0xffff800011e4ffb0
[96770.340610] Task stack:     [0xffff800011e50000..0xffff800011e54000]
[96770.340612] IRQ stack:      [0xffff800011c08000..0xffff800011c0c000]
[96770.340613] Overflow stack: [0xffff0000f77922b0..0xffff0000f77932b0]
[96770.340615] CPU: 4 PID: 33 Comm: ksoftirqd/4 Tainted: P         C OE     5.10.21-rockchip64 #21.02.3
[96770.340616] Hardware name: Helios64 (DT)
[96770.340617] pstate: 80000085 (Nzcv daIf -PAN -UAO -TCO BTYPE=--)
[96770.340619] pc : __slab_free+0x4/0x3e0
[96770.340620] lr : kfree+0x370/0x398
[96770.340621] sp : ffff800011e50070
[96770.340622] x29: ffff800011e50070 x28: 0000000000100001
[96770.340626] x27: fffffe000129dfc0 x26: ffff00005277f100
[96770.340629] x25: 0080000000000000 x24: ffff000040001700
[96770.340631] x23: 00000000019eed04 x22: ffff00009970af80
[96770.340634] x21: ffff8000102c18c4 x20: ffff8000118b9948
[96770.340636] x19: fffffe000245c280 x18: 0000000000000000
[96770.340639] x17: 00000ff4399e0f64 x16: 00004fda04bdf8a0
[96770.340642] x15: 000005d80eb36a80 x14: 00000000000001a6
[96770.340644] x13: 00000000f0000000 x12: 0000000000000001
[96770.340647] x11: 00000000f0000000 x10: 0000000000000001
[96770.340650] x9 : fffffe00031e4e88 x8 : 0000000000000001
[96770.340652] x7 : ffff000040000ba0 x6 : ffff000000000000
[96770.340655] x5 : ffff8000102c18c4 x4 : 0000000000000001
[96770.340657] x3 : ffff00009970af80 x2 : ffff00009970af80
[96770.340660] x1 : fffffe000245c280 x0 : ffff000040001700
[96770.340663] Kernel panic - not syncing: kernel stack overflow
[96770.340664] CPU: 4 PID: 33 Comm: ksoftirqd/4 Tainted: P         C OE     5.10.21-rockchip64 #21.02.3
[96770.340666] Hardware name: Helios64 (DT)
[96770.340667] Call trace:
[96770.340668]  dump_backtrace+0x0/0x200
[96770.340669]  show_stack+0x18/0x68
[96770.340670]  dump_stack+0xcc/0x124
[96770.340671]  panic+0x174/0x374
[96770.340672]  nmi_panic+0x64/0x98
[96770.340673]  handle_bad_stack+0x124/0x150
[96770.340675]  __bad_stack+0x90/0x94
[96770.340676]  __slab_free+0x4/0x3e0
[96770.340677]  __free_slab+0xac/0x1a8
[96770.340678]  discard_slab+0x70/0xb0
[96770.340679]  __slab_free+0x398/0x3e0
[96770.340680]  kfree+0x370/0x398
[96770.340681]  __free_slab+0xac/0x1a8
[96770.340682]  discard_slab+0x70/0xb0
[96770.340683]  __slab_free+0x398/0x3e0
[96770.340684]  kfree+0x370/0x398
[96770.340685]  __free_slab+0xac/0x1a8
[96770.340686]  discard_slab+0x70/0xb0
[96770.340687]  __slab_free+0x398/0x3e0
[96770.340688]  kfree+0x370/0x398
[96770.340689]  __free_slab+0xac/0x1a8
[96770.340690]  discard_slab+0x70/0xb0
[96770.340691]  __slab_free+0x398/0x3e0
[96770.340692]  kfree+0x370/0x398
[96770.340693]  __free_slab+0xac/0x1a8
[96770.340694]  discard_slab+0x70/0xb0
[96770.340694]  __slab_free+0x398/0x3e0
[96770.340696]  kfree+0x370/0x398
[96770.340697]  __free_slab+0xac/0x1a8
[96770.340697]  discard_slab+0x70/0xb0
[96770.340698]  __slab_free+0x398/0x3e0
[96770.340700]  kfree+0x370/0x398
[96770.340701]  __free_slab+0xac/0x1a8
[96770.340702]  discard_slab+0x70/0xb0
[96770.340703]  __slab_free+0x398/0x3e0
[96770.340704]  kfree+0x370/0x398
[96770.340705]  __free_slab+0xac/0x1a8
[96770.340706]  discard_slab+0x70/0xb0
[96770.340708]  __slab_free+0x398/0x3e0
[96770.340708]  kfree+0x370/0x398
[96770.340710]  __free_slab+0xac/0x1a8
[96770.340711]  discard_slab+0x70/0xb0
[96770.340712]  __slab_free+0x398/0x3e0
[96770.340713]  kfree+0x370/0x398
[96770.340714]  __free_slab+0xac/0x1a8
[96770.340715]  discard_slab+0x70/0xb0
[96770.340716]  __slab_free+0x398/0x3e0
[96770.340717]  kfree+0x370/0x398
[96770.340718]  __free_slab+0xac/0x1a8
[96770.340719]  discard_slab+0x70/0xb0
[96770.340720]  __slab_free+0x398/0x3e0
[96770.340721]  kfree+0x370/0x398
[96770.340722]  __free_slab+0xac/0x1a8
[96770.340724]  discard_slab+0x70/0xb0
[96770.340725]  __slab_free+0x398/0x3e0
[96770.340726]  kfree+0x370/0x398
[96770.340727]  __free_slab+0xac/0x1a8
[96770.340728]  discard_slab+0x70/0xb0
[96770.340729]  __slab_free+0x398/0x3e0
[96770.340730]  kfree+0x370/0x398
[96770.340731]  __free_slab+0xac/0x1a8
[96770.340732]  discard_slab+0x70/0xb0
[96770.340733]  __slab_free+0x398/0x3e0
[96770.340734]  kfree+0x370/0x398
[96770.340735]  __free_slab+0xac/0x1a8
[96770.340736]  discard_slab+0x70/0xb0
[96770.340737]  __slab_free+0x398/0x3e0
[96770.340738]  kfree+0x370/0x398
[96770.340739]  __free_slab+0xac/0x1a8
[96770.340740]  discard_slab+0x70/0xb0
[96770.340741]  __slab_free+0x398/0x3e0
[96770.340743]  kfree+0x370/0x398
[96770.340743]  __free_slab+0xac/0x1a8
[96770.340744]  discard_slab+0x70/0xb0
[96770.340746]  __slab_free+0x398/0x3e0
[96770.340747]  kfree+0x370/0x398
[96770.340748]  __free_slab+0xac/0x1a8
[96770.340748]  discard_slab+0x70/0xb0
[96770.340749]  __slab_free+0x398/0x3e0
[96770.340750]  kfree+0x370/0x398
[96770.340751]  __free_slab+0xac/0x1a8
[96770.340753]  discard_slab+0x70/0xb0
[96770.340753]  __slab_free+0x398/0x3e0
[96770.340755]  kfree+0x370/0x398
[96770.340755]  __free_slab+0xac/0x1a8
[96770.340757]  discard_slab+0x70/0xb0
[96770.340757]  __slab_free+0x398/0x3e0
[96770.340759]  kfree+0x370/0x398
[96770.340760]  __free_slab+0xac/0x1a8
[96770.340760]  discard_slab+0x70/0xb0
[96770.340762]  __slab_free+0x398/0x3e0
[96770.340763]  kfree+0x370/0x398
[96770.340764]  __free_slab+0xac/0x1a8
[96770.340765]  discard_slab+0x70/0xb0
[96770.340766]  __slab_free+0x398/0x3e0
[96770.340767]  kfree+0x370/0x398
[96770.340768]  __free_slab+0xac/0x1a8
[96770.340769]  discard_slab+0x70/0xb0
[96770.340770]  __slab_free+0x398/0x3e0
[96770.340771]  kfree+0x370/0x398
[96770.340772]  __free_slab+0xac/0x1a8
[96770.340773]  discard_slab+0x70/0xb0
[96770.340774]  __slab_free+0x398/0x3e0
[96770.340775]  kfree+0x370/0x398
[96770.340776]  __free_slab+0xac/0x1a8
[96770.340777]  discard_slab+0x70/0xb0
[96770.340778]  __slab_free+0x398/0x3e0
[96770.340779]  kfree+0x370/0x398
[96770.340780]  __free_slab+0xac/0x1a8
[96770.340781]  discard_slab+0x70/0xb0
[96770.340782]  __slab_free+0x398/0x3e0
[96770.340783]  kfree+0x370/0x398
[96770.340784]  __free_slab+0xac/0x1a8
[96770.340785]  discard_slab+0x70/0xb0
[96770.340786]  __slab_free+0x398/0x3e0
[96770.340787]  kfree+0x370/0x398
[96770.340788]  __free_slab+0xac/0x1a8
[96770.340789]  discard_slab+0x70/0xb0
[96770.340790]  __slab_free+0x398/0x3e0
[96770.340791]  kfree+0x370/0x398
[96770.340792]  __free_slab+0xac/0x1a8
[96770.340792]  discard_slab+0x70/0xb0
[96770.340793]  __slab_free+0x398/0x3e0
[96770.340795]  kfree+0x370/0x398
[96770.340795]  __free_slab+0xac/0x1a8
[96770.340797]  discard_slab+0x70/0xb0
[96770.340798]  __slab_free+0x398/0x3e0
[96770.340799]  kfree+0x370/0x398
[96770.340800]  __free_slab+0xac/0x1a8
[96770.340801]  discard_slab+0x70/0xb0
[96770.340802]  __slab_free+0x398/0x3e0
[96770.340803]  kfree+0x370/0x398
[96770.340804]  __free_slab+0xac/0x1a8
[96770.340806]  discard_slab+0x70/0xb0
[96770.340807]  __slab_free+0x398/0x3e0
[96770.340808]  kfree+0x370/0x398
[96770.340809]  __free_slab+0xac/0x1a8
[96770.340810]  discard_slab+0x70/0xb0
[96770.340811]  __slab_free+0x398/0x3e0
[96770.340813]  kfree+0x370/0x398
[96770.340814]  __free_slab+0xac/0x1a8
[96770.340815]  discard_slab+0x70/0xb0
[96770.340816]  __slab_free+0x398/0x3e0
[96770.340817]  kfree+0x370/0x398
[96770.340818]  __free_slab+0xac/0x1a8
[96770.340819]  discard_slab+0x70/0xb0
[96770.340820]  __slab_free+0x398/0x3e0
[96770.340821]  kfree+0x370/0x398
[96770.340822]  __free_slab+0xac/0x1a8
[96770.340823]  discard_slab+0x70/0xb0
[96770.340824]  unfreeze_partials.isra.93+0x1d0/0x250
[96770.340826]  put_cpu_partial+0x18c/0x228
[96770.340827]  __slab_free+0x2d0/0x3e0
[96770.340827]  kfree+0x370/0x398
[96770.340829]  __d_free_external+0x20/0x40
[96770.340830]  rcu_core+0x25c/0x820
[96770.340831]  rcu_core_si+0x10/0x20
[96770.340832]  efi_header_end+0x160/0x41c
[96770.340833]  run_ksoftirqd+0x58/0x70
[96770.340834]  smpboot_thread_fn+0x200/0x238
[96770.340835]  kthread+0x140/0x150
[96770.340836]  ret_from_fork+0x10/0x34
[96770.340855] SMP: stopping secondary CPUs
[96770.340857] Kernel Offset: disabled
[96770.340858] CPU features: 0x0240022,6100200c
[96770.340859] Memory Limit: none

 

[New boot sequence starts...]

 

Happy to help with more information or tests.

Link to comment
Share on other sites

@aprayoga As I mentioned, the crash happens when some Rsync jobs are run. I'll try to describe my setup as well as I can.

My Helios64 is running LK 5.10.21, OMV 5.6.2-1, and has a RAID-Z2 array + a simple EXT4 partition.

The drives are:

- Bay 1: 500GB Crucial MX500.

    - partition 1: 16GB swap partition, created while troubleshooting the crashes (to make sure it was not due to lack of memory). The crashes happen both with and without it.
    - partition 2: 400GB EXT4 partition, used to serve VM/container images via NFS.
    - partition 3: 50GB write cache for the ZFS pool.

- Bays 2 through 5: 4x 4TB Seagate IronWolf (ST4000VN008-2DR1) in RAID-Z2, ashift=12, ZSTD compression (although most of the data is compressed with LZ4).

 

My remote NAS is a Helios4, also running OMV, with the Rsync plugin in server mode. On the Helios64 I have 4 Rsync jobs for 4 different shared folders, 3 sending and 1 receiving. They are all scheduled to run on Sundays at 3 AM. The reason for starting all 4 jobs at the same time is to maximise bandwidth usage, since the backup happens over the internet (WireGuard running on the router, not on the H64) and the latency from where I am to where the Helios4 is (Brazil) is quite high. The transfers are incremental, so not a lot of data gets sent with each run, but I guess Rsync still needs to read all the files on the source to see what has changed, causing the high I/O load.

The data in these synced folders is very diverse in size and number of files. In total it's about 1.7TB.

 

Another way I can reproduce the crash is by running a ZFS scrub.

 

Also, as I mentioned in the previous post, on one occasion the NAS crashed when generating simultaneous I/O from multiple VMs, which sit on an EXT4 partition on the SSD, so it doesn't seem limited to the ZFS pool.

Link to comment
Share on other sites

@ShadowDance I think you're definitely on to something here! I just ran the Rsync jobs after setting the scheduler to none for all disks, and they completed successfully without crashing.
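
For anyone wanting to try the same thing, what I ran was essentially the following one-liner (a non-persistent sketch; it resets on reboot, and the sd[a-e] range should match your drives):

echo none | sudo tee /sys/block/sd[a-e]/queue/scheduler   # applies until next reboot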

I'll keep an eye on it and report any other crashes, but for now thank you very much!

 

Update: It's now a little over 3 hours into a ZFS scrub, and I restarted all my VMs simultaneously *while* doing the scrub, and it has neither rebooted nor complained about anything in dmesg. This is very promising!

 

Update 2: Scrub finished without problems!

Link to comment
Share on other sites

Thanks for the tip @ShadowDance. In my case I haven't had any crashes while using ZFS.

But I've updated my systemd scripts to take the ZFS scheduler into account.

What I did notice is that my datasets on the raid-z1 pool seem a bit snappier (it might just be perception on my part).

 

[Unit]
Description=HDD-Idle service
After=basic.target multi-user.target

[Service]
Type=oneshot
## Set Power Management parameters to all HDD
ExecStart=/bin/sh -c '/usr/sbin/hdparm -qB255S120 /dev/sd[a-e]'
## Allow ZFS to use its own scheduler for rotating disks.
ExecStart=/bin/sh -c 'echo none | tee /sys/block/sd[a-e]/queue/scheduler'

[Install]
WantedBy=multi-user.target
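
To use it, drop the unit file into /etc/systemd/system (the unit filename used below, hdd-idle.service, is just an example) and enable it with something like:

sudo systemctl daemon-reload
sudo systemctl enable --now hdd-idle.service   # unit filename is an example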

 

Link to comment
Share on other sites

New crash, but different behaviour and conditions this time. :(

It had a kernel oops at around 5 AM this morning, resulting in a graceful-ish reboot. From my limited understanding, the error seems to be related to swap (the "Comm: swapper" line), so I have disabled the swap partition for now and will continue to monitor.

The major difference this time is that nothing was happening at that hour; the system was just sitting idle.
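
(For reference, disabling swap boiled down to something like the following sketch; the partition path is just an example, and the corresponding /etc/fstab entry also has to be removed or commented out to keep it off after a reboot.)

swapon --show                  # identify the active swap device first
sudo swapoff /dev/sda1         # example device path, not necessarily yours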

 

Spoiler

[20091.475339] Internal error: Oops: 96000004 [#1] PREEMPT SMP
[20091.475837] Modules linked in: softdog governor_performance quota_v2 quota_tree snd_soc_hdmi_codec r8152 gpio_charger leds_pwm pwm_fan hantro_vpu(C) rockchip_vdec(C) panfrost v4l2_h264 gpu_sched snd_soc_rockchip_i2s snd_soc_core snd_pcm_dmaengine videobuf2_dma_contig rockchip_rga snd_pcm videobuf2_vmalloc v4l2_mem2mem videobuf2_dma_sg videobuf2_memops videobuf2_v4l2 videobuf2_common snd_timer videodev snd fusb302 soundcore mc tcpm typec rockchipdrm dw_mipi_dsi dw_hdmi analogix_dp sg drm_kms_helper cec rc_core drm drm_panel_orientation_quirks gpio_beeper cpufreq_dt zfs(POE) nfsd zunicode(POE) zzstd(OE) zlua(OE) auth_rpcgss nfs_acl lockd grace zcommon(POE) znvpair(POE) zavl(POE) icp(POE) spl(OE) ledtrig_netdev lm75 sunrpc ip_tables x_tables autofs4 raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx raid1 raid0 multipath linear md_mod uas realtek dwmac_rk stmmac_platform stmmac pcs_xpcs adc_keys
[20091.483031] CPU: 4 PID: 0 Comm: swapper/4 Tainted: P         C OE     5.10.21-rockchip64 #21.02.3
[20091.483807] Hardware name: Helios64 (DT)
[20091.484154] pstate: 00000085 (nzcv daIf -PAN -UAO -TCO BTYPE=--)
[20091.484690] pc : irq_work_tick+0x30/0xd0
[20091.485037] lr : irq_work_tick+0x20/0xd0
[20091.485382] sp : ffff800011c0bdb0
[20091.485674] x29: ffff800011c0bdb0 x28: 00001245cd652623
[20091.486143] x27: ffff0000f779b6c0 x26: 0000000000000002
[20091.486611] x25: 0000000000000080 x24: ffff80001156a000
[20091.487079] x23: ffff8000118ba208 x22: ffff80001014da70
[20091.487547] x21: ffff8000118b99c8 x20: ffff80001157c9f8
[20091.488014] x19: ffff80001157c9f0 x18: 0000000000000000
[20091.488483] x17: 0000000000000000 x16: 0000000000000000
[20091.488950] x15: 0000000bfedea982 x14: 00000000000000f9
[20091.489418] x13: 00000001004b7f26 x12: 00000000001513fd
[20091.489886] x11: ffff8000118b7000 x10: ffff80001194ef28
[20091.490354] x9 : ffff80001194ef20 x8 : ffff800011b72320
[20091.490821] x7 : ffff800011952000 x6 : 00000070660d244a
[20091.491289] x5 : b51e853d6cb16636 x4 : ffff8000e6228000
[20091.491757] x3 : 0000000000010001 x2 : ffff80001156a000
[20091.492225] x1 : ffff8000112a16a8 x0 : ffff8000e6228000
[20091.492693] Call trace:
[20091.492913]  irq_work_tick+0x30/0xd0
[20091.493231]  update_process_times+0x88/0xa0
[20091.493600]  tick_sched_handle.isra.19+0x40/0x58
[20091.494006]  tick_sched_timer+0x58/0xb0
[20091.494346]  __hrtimer_run_queues+0x104/0x388
[20091.494730]  hrtimer_interrupt+0xf4/0x250
[20091.495086]  arch_timer_handler_phys+0x30/0x40
[20091.495478]  handle_percpu_devid_irq+0xa0/0x298
[20091.495876]  generic_handle_irq+0x30/0x48
[20091.496229]  __handle_domain_irq+0x94/0x108
[20091.496600]  gic_handle_irq+0xc0/0x140
[20091.496932]  el1_irq+0xc0/0x180
[20091.497212]  arch_cpu_idle+0x18/0x28
[20091.497528]  default_idle_call+0x44/0x1bc
[20091.497884]  do_idle+0x204/0x278
[20091.498169]  cpu_startup_entry+0x28/0x60
[20091.498517]  secondary_start_kernel+0x170/0x180
[20091.498917] Code: f0009dd3 9127c273 f8605aa0 91002274 (f8606a80)
[20091.499458] ---[ end trace edea193d828b5483 ]---
[20091.499865] Kernel panic - not syncing: Oops: Fatal exception in interrupt
[20091.500467] SMP: stopping secondary CPUs
[20091.500818] Kernel Offset: disabled
[20091.501127] CPU features: 0x0240022,6100200c
[20091.501502] Memory Limit: none
[20091.501781] Rebooting in 90 seconds..

 

Link to comment
Share on other sites

I have tried changing the CPU governor to ondemand:

root@helios64:/# cat /etc/default/cpufrequtils
ENABLE="true"
GOVERNOR=ondemand
MAX_SPEED=1800000
MIN_SPEED=408000

 

but after 5 minutes my server rebooted :(.

Previously I used:

cat /etc/default/cpufrequtils

ENABLE="true"
GOVERNOR=performance
MAX_SPEED=1416000
MIN_SPEED=1416000

 

for the last 3 kernel updates without any issue. Uptime was mostly about 2-3 weeks without kernel panics or reboots.

 

Unfortunately I don't have anything in my logs. I have changed the verbosity to 7, and I'm logging my console to a file. I will update when I get an error.

I have a ZFS mirror (on top of LUKS) + a ZFS cache.

Also, I didn't change the disk scheduler. I have bfq, but it had worked since the last upgrade without any problem.

 

Currently I'm running a ZFS scrub... we will see.

 

[Update 1]:

root@helios64:/# uptime
 11:02:40 up  2:28,  1 user,  load average: 4.80, 4.81, 4.60

 

The scrub is still in progress, but I got a few read errors. I'm not sure if it's a problem with the HDD connector, the HDD scheduler, or the CPU frequency...

I didn't have these before.

 

 

Quote

root@helios64:/# zpool status -v data
  pool: data
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: scrub in progress since Tue Mar 23 08:49:19 2021
        922G scanned at 126M/s, 640G issued at 87.5M/s, 2.11T total
        2.75M repaired, 29.62% done, 04:56:36 to go
config:

        NAME           STATE     READ WRITE CKSUM
        data           DEGRADED     0     0     0
          mirror-0     DEGRADED     0     0     0
            sda-crypt  ONLINE       0     0     0
            sdb-crypt  DEGRADED     3     0    22  too many errors  (repairing)
        cache
          sdc2         ONLINE       0     0     0

 

 

Quote

[ 8255.882611] ata2.00: irq_stat 0x08000000
[ 8255.882960] ata2: SError: { Proto }
[ 8255.883273] ata2.00: failed command: READ FPDMA QUEUED
[ 8255.883732] ata2.00: cmd 60/00:10:18:4d:9d/08:00:15:00:00/40 tag 2 ncq dma 1048576 in
[ 8255.883732]          res 40/00:10:18:4d:9d/00:00:15:00:00/40 Emask 0x2 (HSM violation)
[ 8255.885101] ata2.00: status: { DRDY }
[ 8255.885426] ata2.00: failed command: READ FPDMA QUEUED
[ 8255.885885] ata2.00: cmd 60/00:18:90:5b:9d/08:00:15:00:00/40 tag 3 ncq dma 1048576 in
[ 8255.885885]          res 40/00:10:18:4d:9d/00:00:15:00:00/40 Emask 0x2 (HSM violation)
[ 8255.887294] ata2.00: status: { DRDY }
[ 8255.887623] ata2.00: failed command: READ FPDMA QUEUED
[ 8255.888083] ata2.00: cmd 60/10:98:18:55:9d/06:00:15:00:00/40 tag 19 ncq dma 794624 in
[ 8255.888083]          res 40/00:10:18:4d:9d/00:00:15:00:00/40 Emask 0x2 (HSM violation)
[ 8255.889450] ata2.00: status: { DRDY }
[ 8255.889783] ata2: hard resetting link
[ 8256.365961] ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[ 8256.367771] ata2.00: configured for UDMA/133
[ 8256.368540] sd 1:0:0:0: [sdb] tag#2 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
[ 8256.369343] sd 1:0:0:0: [sdb] tag#2 Sense Key : 0x5 [current]
[ 8256.369858] sd 1:0:0:0: [sdb] tag#2 ASC=0x21 ASCQ=0x4
[ 8256.370354] sd 1:0:0:0: [sdb] tag#2 CDB: opcode=0x88 88 00 00 00 00 00 15 9d 4d 18 00 00 08 00 00 00
[ 8256.371158] blk_update_request: I/O error, dev sdb, sector 362630424 op 0x0:(READ) flags 0x700 phys_seg 16 prio class 0
[ 8256.372125] zio pool=data vdev=/dev/mapper/sdb-crypt error=5 type=1 offset=185649999872 size=1048576 flags=40080cb0
[ 8256.373117] sd 1:0:0:0: [sdb] tag#3 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
[ 8256.373964] sd 1:0:0:0: [sdb] tag#3 Sense Key : 0x5 [current]
[ 8256.374480] sd 1:0:0:0: [sdb] tag#3 ASC=0x21 ASCQ=0x4
[ 8256.374940] sd 1:0:0:0: [sdb] tag#3 CDB: opcode=0x88 88 00 00 00 00 00 15 9d 5b 90 00 00 08 00 00 00
[ 8256.375742] blk_update_request: I/O error, dev sdb, sector 362634128 op 0x0:(READ) flags 0x700 phys_seg 16 prio class 0
[ 8256.376704] zio pool=data vdev=/dev/mapper/sdb-crypt error=5 type=1 offset=185651896320 size=1048576 flags=40080cb0
[ 8256.377724] sd 1:0:0:0: [sdb] tag#19 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
[ 8256.378568] sd 1:0:0:0: [sdb] tag#19 Sense Key : 0x5 [current]
[ 8256.379096] sd 1:0:0:0: [sdb] tag#19 ASC=0x21 ASCQ=0x4
[ 8256.379561] sd 1:0:0:0: [sdb] tag#19 CDB: opcode=0x88 88 00 00 00 00 00 15 9d 55 18 00 00 06 10 00 00
[ 8256.380371] blk_update_request: I/O error, dev sdb, sector 362632472 op 0x0:(READ) flags 0x700 phys_seg 13 prio class 0
[ 8256.381336] zio pool=data vdev=/dev/mapper/sdb-crypt error=5 type=1 offset=185651048448 size=794624 flags=40080cb0
[ 8256.382389] ata2: EH complete
 

 

[Update 2]:

The server has rebooted...

The last 'task' I remember was Nextcloud generating thumbnails.

Unfortunately there was nothing in the console (it was connected the whole time), even with:

root@helios64:~# cat /boot/armbianEnv.txt 
verbosity=7
console=serial
extraargs=earlyprintk ignore_loglevel

 

The 'failed command: READ FPDMA QUEUED' errors come from the ZFS scrub a few hours earlier.

 

 

Quote

[38857.198752] ata2.00: exception Emask 0x2 SAct 0x80600000 SErr 0x400 action 0x6
[38857.199398] ata2.00: irq_stat 0x08000000
[38857.199747] ata2: SError: { Proto }
[38857.200059] ata2.00: failed command: READ FPDMA QUEUED
[38857.200520] ata2.00: cmd 60/00:a8:d8:4e:5a/01:00:d9:00:00/40 tag 21 ncq dma 131072 in
[38857.200520]          res 40/00:f8:d8:4b:5a/00:00:d9:00:00/40 Emask 0x2 (HSM violation)
[38857.201887] ata2.00: status: { DRDY }
[38857.202212] ata2.00: failed command: READ FPDMA QUEUED
[38857.202671] ata2.00: cmd 60/00:b0:d8:4f:5a/01:00:d9:00:00/40 tag 22 ncq dma 131072 in
[38857.202671]          res 40/00:f8:d8:4b:5a/00:00:d9:00:00/40 Emask 0x2 (HSM violation)
[38857.204083] ata2.00: status: { DRDY }
[38857.204411] ata2.00: failed command: READ FPDMA QUEUED
[38857.204872] ata2.00: cmd 60/00:f8:d8:4b:5a/03:00:d9:00:00/40 tag 31 ncq dma 393216 in
[38857.204872]          res 40/00:f8:d8:4b:5a/00:00:d9:00:00/40 Emask 0x2 (HSM violation)
[38857.206239] ata2.00: status: { DRDY }
[38857.206573] ata2: hard resetting link
[38857.682727] ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[38857.684501] ata2.00: configured for UDMA/133
[38857.685009] sd 1:0:0:0: [sdb] tag#21 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
[38857.685819] sd 1:0:0:0: [sdb] tag#21 Sense Key : 0x5 [current] 
[38857.686340] sd 1:0:0:0: [sdb] tag#21 ASC=0x21 ASCQ=0x4 
[38857.686847] sd 1:0:0:0: [sdb] tag#21 CDB: opcode=0x88 88 00 00 00 00 00 d9 5a 4e d8 00 00 01 00 00 00
[38857.687660] blk_update_request: I/O error, dev sdb, sector 3646574296 op 0x0:(READ) flags 0x700 phys_seg 4 prio class 0
[38857.688628] zio pool=data vdev=/dev/mapper/sdb-crypt error=5 type=1 offset=1867029262336 size=131072 flags=1808b0
[38857.689672] sd 1:0:0:0: [sdb] tag#22 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
[38857.690487] sd 1:0:0:0: [sdb] tag#22 Sense Key : 0x5 [current] 
[38857.691051] sd 1:0:0:0: [sdb] tag#22 ASC=0x21 ASCQ=0x4 
[38857.691518] sd 1:0:0:0: [sdb] tag#22 CDB: opcode=0x88 88 00 00 00 00 00 d9 5a 4f d8 00 00 01 00 00 00
[38857.692329] blk_update_request: I/O error, dev sdb, sector 3646574552 op 0x0:(READ) flags 0x700 phys_seg 4 prio class 0
[38857.693288] zio pool=data vdev=/dev/mapper/sdb-crypt error=5 type=1 offset=1867029393408 size=131072 flags=1808b0
[38857.694284] sd 1:0:0:0: [sdb] tag#31 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08 cmd_age=0s
[38857.695129] sd 1:0:0:0: [sdb] tag#31 Sense Key : 0x5 [current] 
[38857.695652] sd 1:0:0:0: [sdb] tag#31 ASC=0x21 ASCQ=0x4 
[38857.696116] sd 1:0:0:0: [sdb] tag#31 CDB: opcode=0x88 88 00 00 00 00 00 d9 5a 4b d8 00 00 03 00 00 00
[38857.696927] blk_update_request: I/O error, dev sdb, sector 3646573528 op 0x0:(READ) flags 0x700 phys_seg 12 prio class 0
[38857.697892] zio pool=data vdev=/dev/mapper/sdb-crypt error=5 type=1 offset=1867028869120 size=393216 flags=40080cb0
[38857.698965] ata2: EH complete
[38866.899025] [UFW BLOCK] IN=eth0 OUT= MAC=64:62:66:d0:01:fa:78:4f:43:7a:28:ae:08:00 SRC=192.168.18.177 DST=192.168.18.10 LEN=52 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=59353 DPT=443 WINDOW=2048 RES=0x00 ACK FIN URGP=0 
[38873.508619] [UFW BLOCK] IN=eth0 OUT= MAC=64:62:66:d0:01:fa:78:4f:43:7a:28:ae:08:00 SRC=192.168.18.177 DST=192.168.18.10 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=59353 DPT=443 WINDOW=2048 RES=0x00 ACK RST URGP=0 
DDR Version 1.24 20191016
In
soft reset
SRX
channel 0
CS = 0
MR0=0x18
MR4=0x1
MR5=0x1
MR8=0x10
MR12=0x72
MR14=0x72
MR18=0x0
MR19=0x0
MR24=0x8
MR25=0x0
channel 1
CS = 0
MR0=0x18
MR4=0x1
MR5=0x1
MR8=0x10
MR12=0x72
MR14=0x72
MR18=0x0
MR19=0x0
MR24=0x8
MR25=0x0
channel 0 training pass!
channel 1 training pass!
change freq to 416MHz 0,1
Channel 0: LPDDR4,416MHz
Bus Width=32 Col=10 Bank=8 Row=16 CS=1 Die Bus-Width=16 Size=2048MB
Channel 1: LPDDR4,416MHz
Bus Width=32 Col=10 Bank=8 Row=16 CS=1 Die Bus-Width=16 Size=2048MB
256B stride
channel 0
CS = 0
MR0=0x18
MR4=0x82
MR5=0x1
MR8=0x10
MR12=0x72
MR14=0x72
MR18=0x0
MR19=0x0
MR24=0x8
MR25=0x0
channel 1
CS = 0
MR0=0x18
MR4=0x1
MR5=0x1
MR8=0x10
MR12=0x72
MR14=0x72
MR18=0x0
MR19=0x0
MR24=0x8
MR25=0x0
channel 0 training pass!
channel 1 training pass!
channel 0, cs 0, advanced training done
channel 1, cs 0, advanced training done
change freq to 856MHz 1,0
ch 0 ddrconfig = 0x101, ddrsize = 0x40
ch 1 ddrconfig = 0x101, ddrsize = 0x40
pmugrf_os_reg[2] = 0x32C1F2C1, stride = 0xD
ddr_set_rate to 328MHZ
ddr_set_rate to 666MHZ
ddr_set_rate to 928MHZ
channel 0, cs 0, advanced training done
channel 1, cs 0, advanced training done
ddr_set_rate to 416MHZ, ctl_index 0
ddr_set_rate to 856MHZ, ctl_index 1
support 416 856 328 666 928 MHz, current 856MHz
OUT
Boot1: 2019-03-14, version: 1.19
CPUId = 0x0
ChipType = 0x10, 322
SdmmcInit=2 0
BootCapSize=100000
UserCapSize=14910MB
FwPartOffset=2000 , 100000
mmc0:cmd8,20
mmc0:cmd5,20
mmc0:cmd55,20
mmc0:cmd1,20
mmc0:cmd8,20
mmc0:cmd5,20
mmc0:cmd55,20
mmc0:cmd1,20
mmc0:cmd8,20
mmc0:cmd5,20
mmc0:cmd55,20
mmc0:cmd1,20
SdmmcInit=0 1
StorageInit ok = 68359
SecureMode = 0
SecureInit read PBA: 0x4
SecureInit read PBA: 0x404
SecureInit read PBA: 0x804
SecureInit read PBA: 0xc04
SecureInit read PBA: 0x1004
SecureInit read PBA: 0x1404
SecureInit read PBA: 0x1804
SecureInit read PBA: 0x1c04
SecureInit ret = 0, SecureMode = 0
atags_set_bootdev: ret:(0)
GPT 0x3380ec0 signature is wrong
recovery gpt...
GPT 0x3380ec0 signature is wrong
recovery gpt fail!
LoadTrust Addr:0x4000
No find bl30.bin
No find bl32.bin
Load uboot, ReadLba = 2000
Load OK, addr=0x200000, size=0xdd6b0
RunBL31 0x40000
NOTICE:  BL31: v1.3(debug):42583b6
NOTICE:  BL31: Built : 07:55:13, Oct 15 2019
NOTICE:  BL31: Rockchip release version: v1.1
INFO:    GICv3 with legacy support detected. ARM GICV3 driver initialized in EL3
INFO:    Using opteed sec cpu_context!
INFO:    boot cpu mask: 0
INFO:    plat_rockchip_pmu_init(1190): pd status 3e
INFO:    BL31: Initializing runtime services
WARNING: No OPTEE provided by BL2 boot loader, Booting device without OPTEE initialization. SMC`s destined for OPTEE will return SMC_UNK
ERROR:   Error initializing runtime service opteed_fast
INFO:    BL31: Preparing for EL3 exit to normal world
INFO:    Entry point address = 0x200000
INFO:    SPSR = 0x3c9


U-Boot 2020.07-armbian (Nov 23 2020 - 16:22:15 +0100)

SoC: Rockchip rk3399
Reset cause: WDOG
DRAM:  3.9 GiB
PMIC:  RK808 
SF: Detected w25q128 with page size 256 Bytes, erase size 4 KiB, total 16 MiB
MMC:   mmc@fe320000: 1, sdhci@fe330000: 0
Loading Environment from MMC... *** Warning - bad CRC, using default environment

In:    serial
Out:   serial
Err:   serial
Model: Helios64
Revision: 1.2 - 4GB non ECC
Net:   eth0: ethernet@fe300000
scanning bus for devices...
Hit any key to stop autoboot:  0 
Card did not respond to voltage select!
switch to partitions #0, OK
mmc0(part 0) is current device
Scanning mmc 0:1...
Found U-Boot script /boot/boot.scr
3185 bytes read in 18 ms (171.9 KiB/s)
## Executing script at 00500000
Boot script loaded from mmc 0
219 bytes read in 15 ms (13.7 KiB/s)
16209617 bytes read in 1558 ms (9.9 MiB/s)
28582400 bytes read in 2729 ms (10 MiB/s)
81913 bytes read in 44 ms (1.8 MiB/s)
2698 bytes read in 34 ms (77.1 KiB/s)
Applying kernel provided DT fixup script (rockchip-fixup.scr)
## Executing script at 09000000
## Loading init Ramdisk from Legacy Image at 06000000 ...
   Image Name:   uInitrd
   Image Type:   AArch64 Linux RAMDisk Image (gzip compressed)
   Data Size:    16209553 Bytes = 15.5 MiB
   Load Address: 00000000
   Entry Point:  00000000
   Verifying Checksum ... OK
## Flattened Device Tree blob at 01f00000
   Booting using the fdt blob at 0x1f00000
   Loading Ramdisk to f4f71000, end f5ee6691 ... OK
   Loading Device Tree to 00000000f4ef4000, end 00000000f4f70fff ... OK

Starting kernel ...

[    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd034]
[    0.000000] Linux version 5.10.21-rockchip64 (root@hirsute) (aarch64-linux-gnu-gcc (GNU Toolchain for the A-profile Architecture 8.3-2019.03 (arm-rel-8.36)) 8.3.0, GNU ld (GNU Toolchain for the A-profile Architecture 8.3-2019.03 (arm-rel-8.36)) 2.32.0.20190321) #21.02.3 SMP PREEMPT Mon Mar 8 01:05:08 UTC 2021
[    0.000000] Machine model: Helios64
 


 

 

 

 

 

 

Link to comment
Share on other sites
