
5.35/5.36 bug / questions collection


tkaiser


On 7/12/2017 at 4:03 PM, bartek said:

Hi.

 

I have a problem with the https://www.waveshare.com/wiki/7inch_HDMI_LCD_(C) display in 5.36 (lite). In version 5.31 it worked (after some changes in script.fex). Now I only see a pink screen, nothing else.

Here are the settings that work in 5.31:

 

[disp_init]
disp_init_enable = 1
disp_mode = 0
screen0_output_type = 3
screen0_output_mode = 5
screen1_output_type = 3
screen1_output_mode = 5
fb0_width = 1024
fb0_height = 600
fb1_width = 1024
fb1_height = 600

[hdmi_para]
hdmi_used = 1
hdmi_x = 1024
hdmi_y = 600
hdmi_power = "vcc-hdmi-18"
hdmi_cts_compatibility = 1

 

 

I also added the following to /boot/armbianEnv.txt:

disp_mode="EDID:1280x720p60"

disp_mem_reserves=on

 

to match what I had in 5.31.

 

 

But when I boot with another monitor connected and then re-plug the Waveshare, I can see everything.

 

thanks

 

I have the same problem as you: a pink screen after upgrading from 5.31 to 5.35/5.36.
But I'm using a regular LCD monitor connected via an HDMI-to-DVI adapter. Upgrading to 5.35/5.36 broke my output.

The board still works, but I can only use it through SSH/VNC. This is a problem for me because there are times when I don't have a connection to the board and I have to use a monitor to configure some things on the Pi.


2 hours ago, BloodElf said:

The board still works, but I can only use it through SSH/VNC. This is a problem for me because there are times when I don't have a connection to the board and I have to use a monitor to configure some things on the Pi.


Even after you changed the settings in script.bin again?


5 hours ago, BloodElf said:

This is a problem for me because there are times when I don't have a connection to the board and I have to use a monitor to configure some things on the Pi.

You could use a USB-TTL serial adapter or an ESP8266 to connect to the serial debug port:
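From another Linux machine this could look roughly like the following (a sketch; the device name /dev/ttyUSB0 is an assumption, and 115200n8 is the rate Allwinner boards commonly use):

sudo screen /dev/ttyUSB0 115200
# or, if you prefer picocom:
sudo picocom -b 115200 /dev/ttyUSB0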

 


Hi, thanks for the 5.35/5.36 release!

I have a Lime and a Lime2, and my (sad) choice is the Ubuntu legacy desktop.

On the Lime everything works nicely, but on the Lime2 the kernel doesn't start. I get the message "Starting kernel ...", then the screen flickers and the device powers down.

I've tried several SD cards, multiple power supplies and different boards of the same kind; all of them are proven to work with the older image.

Could you give me a tip on how to debug it? Thanks.


16 minutes ago, BloodElf said:

I've never touched script.bin

 

But you used h3disp before, since prior to the upgrade you didn't have a pink screen and now you do?

 

If that's the case you need to re-apply the settings ('h3disp -d' to get a non-pink screen with HDMI-DVI adapters), and this time they might stick, since I tried to fix h3disp recently but don't know whether that update made it into the latest release or not.


Just now, tkaiser said:

 

But you used h3disp before, since prior to the upgrade you didn't have a pink screen and now you do?

 

If that's the case you need to re-apply the settings ('h3disp -d' to get a non-pink screen with HDMI-DVI adapters), and this time they might stick, since I tried to fix h3disp recently but don't know whether that update made it into the latest release or not.

Well, yes, I did use h3disp.

But even after the update I ran h3disp -m 10 -d for an HDMI-DVI monitor with no luck, still a pink screen.

 

I will try to run it now and see if it changes anything.


18 minutes ago, BloodElf said:

I will try to run it now and see if it changes anything

 

Please add -v and provide the URL: h3disp -v -m 10 -d

 

And of course h3disp needs a reboot for the changes to take effect, since all this tool does is adjust script.bin, which is only read at boot time.
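A minimal sketch of the whole sequence (the script.bin location is an assumption based on legacy Armbian images):

sudo h3disp -v -m 10 -d    # write DVI-friendly settings and provide the verbose output URL
ls -l /boot/script.bin     # check that the file was just rewritten
sudo reboot                # the settings are only read at boot time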


Just wanting to confirm that I am seeing ~10% CPU load from a kworker process after apt-get upgrade on a BPi running Armbian 5.31 with 4.11.5-sunxi.

This is a clean install with only the timezone and a static IP set. Armbian 5.31 with 4.11.5-sunxi has 0% load until the upgrade, after which I get 10% load.

I also tried a fresh install of Armbian_5.35_Bananapi_Debian_stretch_next_4.13.16, but it had the same 10% load from the first boot.

Is it worth starting a new thread on this bug?

Cheers

Andy


Is there a dedicated thread for this, i.e. the kworker CPU load issue?

I have the same problem, although I'm an Arch Linux user, so it seems to apply under these conditions:

1) A20 CPU (or even other Allwinner CPUs like the H3)

2) kernel version >= 4.12 (4.12.x, 4.13.x, 4.14.x)

It should be a bug introduced by the sunxi mainline effort, but I don't know where to report it.
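For anyone wanting to see what those kworkers are actually doing, one way (a sketch, assuming tracefs is available under /sys/kernel/debug) is to enable the workqueue tracepoint, which matches the format of the trace logs further down in this thread:

echo 1 | sudo tee /sys/kernel/debug/tracing/events/workqueue/workqueue_queue_work/enable
sudo cat /sys/kernel/debug/tracing/trace_pipe | grep kworker
# disable it again when done
echo 0 | sudo tee /sys/kernel/debug/tracing/events/workqueue/workqueue_queue_work/enable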


I haven't had time to do more debugging on this; I read some commits but did not find anything new.

 

You can try asking on the linux-sunxi mailing list, or maybe their IRC channel.

 

I have not tried any H3 build with mainline; I'll check with a Pine64 soon.


I have a problem with the update notice on my OPi+ system: it does not get refreshed. I have already tried sudo apt update / upgrade. How can I update the message?

System: Orange Pi+ Armbian 5.36

 

Welcome to ARMBIAN 5.36 user-built Ubuntu 16.04.3 LTS 3.4.113-sun8i   
System load:   0.00 0.01 0.05      Up time:       16 min        
Memory usage:  6 % of 2014MB     IP:            192.168.178.42
CPU temp:      39°C               
Usage of /:    24% of 15G        

[ 0 security updates available, 3 updates total: apt upgrade ]
Last check: 2017-11-25 00:00

[ General system configuration (beta): armbian-config ]

Last login: Sat Jan 13 11:55:33 2018 from 192.168.178.20

 

--> sudo apt clean does it!

 


The majority of the problems mentioned in this topic, those that could be fixed, have been fixed and the images have been rebuilt. They are not linked from the download pages yet since not all of them have been tested, but these are:

  • Bananapi
  • Cubietruck
  • Bananapi M2
  • Beelink X2
  • Helios4
  • Nanopim3/fire3
  • Orangepi Win
  • Odroid C2 NEXT
  • XU4 NEXT = 4.14
  • Orangepi PC NEXT
  • Tinkerboard
  • Udoo

H3/H5 NEXT/mainline support remains experimental/testing, so I didn't bother with in-depth testing at this stage.

 

You can find 5.38 images under dl.armbian.com/$boardname/archive
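For example, fetching one from the Cubietruck archive could look like this (a sketch; the exact file name below is made up for illustration, so check the directory listing first):

wget https://dl.armbian.com/cubietruck/archive/Armbian_5.38_Cubietruck_Debian_stretch_next_4.14.18.7z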


Perhaps this way of testing would also be better for a major update? A few more tests would help, but since this is more or less a bugfix update, things should just work.


Testing 5.38 on a Cubietruck since yesterday.

Everything seems smooth apart from the kworker-eating-CPU issue, which is still there and probably can't be fixed yet (sunxi mainline bug?).

I'll test further and report back other issues before downgrading the kernel/headers to 5.32 (4.11.6 is the last kernel free of the kworker bug).

Thanks for your work. Running Debian Stretch as a multipurpose server on a Cubietruck is f*cking awesome.

 

 


Installed 5.38 (mainline) on two Banana Pis. Both have /boot on SD and the system (Debian Stretch) on SATA. Both are up and running after a reboot (required to activate kernel 4.14.15). No problems so far. I can confirm the ~10% CPU load by kworker though.

Thanks for Armbian and keep up the good work!

 

EDIT: Sorry, that was misleading: I did not do a fresh install from an image. Both Banana Pis were upgraded from Armbian 5.36.


So I was beaten to it ;) but I can also confirm that a new install is working fine.

 

I'm updating another Banana Pi that was previously running 5.36 and will report back.

 

About the kworker: the problem was diagnosed on the linux-sunxi mailing list: https://groups.google.com/forum/#!searchin/linux-sunxi/kworker|sort:date/linux-sunxi/bVEsjQRNVSo/nTa3rpOyAQAJ

 

If you really want to get rid of this load you can unload the two modules, but I don't know if you'll still have thermal protection on your A20. I'm tempted to try it as mine does not run too hot.
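For reference, a sketch of what unloading would look like (the module names are taken from the follow-up posts below; the blacklist file name is just an example):

sudo rmmod sun4i_gpadc_iio sun4i_gpadc
# keep them from loading again at the next boot:
printf 'blacklist sun4i_gpadc_iio\nblacklist sun4i_gpadc\n' | sudo tee /etc/modprobe.d/disable-gpadc.conf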


I confirm 2 kworker processes constantly using CPU:

- kworker/1:0   12-18%

- kworker/0:0   2-8%

on a fresh Stretch 5.38 install on a Banana Pro running from a fast microSDHC card.

Together both processes use ~21% CPU all the time.

 

kworker/1:0: 12-18% (mostly gc_worker for nf_conntrack), even with IP forwarding & NAT disabled and almost no traffic:


     kworker/1:0-4677  [001] d.s.  3327.450839: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.s.  3327.783921: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.s.  3328.234320: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] dns.  3328.572152: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] dns.  3328.572159: workqueue_queue_work: work struct=c0dd38fc function=neigh_periodic_work workqueue=ef004100 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.s.  3328.906248: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.s.  3330.162264: workqueue_queue_work: work struct=ef1f0238 function=thermal_zone_device_check workqueue=ef004800 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.s.  3330.162273: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] dns.  3330.915139: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.s.  3331.253086: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] dns.  3331.380202: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.s.  3331.701743: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.s.  3332.020383: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.s.  3332.242417: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.s.  3333.530271: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.s.  3333.984925: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.s.  3334.110458: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.s.  3334.554469: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.h.  3334.645703: workqueue_queue_work: work struct=ee606038 function=dbs_work_handler workqueue=ef004900 req_cpu=1 cpu=1
     kworker/1:0-4677  [001] d.s.  3334.680282: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] dns.  3335.434667: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.s.  3335.642247: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] dns.  3335.770308: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.s.  3337.131293: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.s.  3337.131303: workqueue_queue_work: work struct=ef6d9d6c function=vmstat_update workqueue=ef0d0200 req_cpu=1 cpu=1
     kworker/1:0-4677  [001] d.s.  3337.473827: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.s.  3337.681434: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.h.  3338.008975: workqueue_queue_work: work struct=ee606038 function=dbs_work_handler workqueue=ef004900 req_cpu=1 cpu=1
     kworker/1:0-4677  [001] d.s.  3338.690494: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.s.  3339.023555: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.h.  3339.124835: workqueue_queue_work: work struct=ee606038 function=dbs_work_handler workqueue=ef004900 req_cpu=1 cpu=1
     kworker/1:0-4677  [001] d.s.  3339.150394: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.s.  3339.673979: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.s.  3340.250413: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] dns.  3341.430444: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.s.  3341.851540: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.s.  3342.183949: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.h.  3343.602156: workqueue_queue_work: work struct=ee606038 function=dbs_work_handler workqueue=ef004900 req_cpu=1 cpu=1
     kworker/1:0-4677  [001] d.h.  3344.813098: workqueue_queue_work: work struct=ee606038 function=dbs_work_handler workqueue=ef004900 req_cpu=1 cpu=1
     kworker/1:0-4677  [001] d.s.  3345.030569: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.s.  3345.474655: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.s.  3345.812370: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.s.  3345.940544: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.s.  3346.135327: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1
     kworker/1:0-4677  [001] d.s.  3346.260687: workqueue_queue_work: work struct=bf9d61c4 function=gc_worker [nf_conntrack] workqueue=ef004d00 req_cpu=4 cpu=1

 

kworker/0:0: 2-8% (even with cursor flashing on the framebuffer console disabled via echo -n -e '\e[?17;14;224c' > /dev/tty0, and no vmstat running)


     kworker/0:0-4472  [000] d...  3321.160036: workqueue_queue_work: work struct=ef6c8d6c function=vmstat_update workqueue=ef0d0200 req_cpu=0 cpu=0
     kworker/0:0-4472  [000] d...  3322.180133: workqueue_queue_work: work struct=ef6d9d6c function=vmstat_update workqueue=ef0d0200 req_cpu=1 cpu=1
     kworker/0:0-4472  [000] dns.  3324.043883: workqueue_queue_work: work struct=ef1f15a4 function=fb_flashcursor workqueue=ef004100 req_cpu=4 cpu=0
     kworker/0:0-4472  [000] d.s.  3325.132911: workqueue_queue_work: work struct=c0d1c448 function=vmstat_shepherd workqueue=ef004900 req_cpu=4 cpu=0
     kworker/0:0-4472  [000] d...  3325.132960: workqueue_queue_work: work struct=ef6d9d6c function=vmstat_update workqueue=ef0d0200 req_cpu=1 cpu=1
     kworker/0:0-4472  [000] d.s.  3326.148419: workqueue_queue_work: work struct=ef1f15a4 function=fb_flashcursor workqueue=ef004100 req_cpu=4 cpu=0
     kworker/0:0-4472  [000] d.s.  3326.482283: workqueue_queue_work: work struct=c9c4e264 function=phy_state_machine workqueue=ef004100 req_cpu=4 cpu=0
     kworker/0:0-4472  [000] d.s.  3327.132176: workqueue_queue_work: work struct=c0d1c448 function=vmstat_shepherd workqueue=ef004900 req_cpu=4 cpu=0
     kworker/0:0-4472  [000] d...  3327.132209: workqueue_queue_work: work struct=ef6c8d6c function=vmstat_update workqueue=ef0d0200 req_cpu=0 cpu=0
     kworker/0:0-4472  [000] d...  3327.132213: workqueue_queue_work: work struct=ef6d9d6c function=vmstat_update workqueue=ef0d0200 req_cpu=1 cpu=1
     kworker/0:0-4472  [000] d.s.  3329.255012: workqueue_queue_work: work struct=c0d1c448 function=vmstat_shepherd workqueue=ef004900 req_cpu=4 cpu=0
     kworker/0:0-4472  [000] d...  3329.255081: workqueue_queue_work: work struct=ef6d9d6c function=vmstat_update workqueue=ef0d0200 req_cpu=1 cpu=1
     kworker/0:0-4472  [000] d.s.  3329.602407: workqueue_queue_work: work struct=c9c4e264 function=phy_state_machine workqueue=ef004100 req_cpu=4 cpu=0
     kworker/0:0-4472  [000] d.s.  3329.713050: workqueue_queue_work: work struct=ef1f15a4 function=fb_flashcursor workqueue=ef004100 req_cpu=4 cpu=0
     kworker/0:0-4472  [000] d.s.  3331.141990: workqueue_queue_work: work struct=c0d1c448 function=vmstat_shepherd workqueue=ef004900 req_cpu=4 cpu=0
     kworker/0:0-4472  [000] d...  3331.142054: workqueue_queue_work: work struct=ef6d9d6c function=vmstat_update workqueue=ef0d0200 req_cpu=1 cpu=1
     kworker/0:0-4472  [000] d.s.  3333.123877: workqueue_queue_work: work struct=c0d1c448 function=vmstat_shepherd workqueue=ef004900 req_cpu=4 cpu=0
     kworker/0:0-4472  [000] dns.  3333.762867: workqueue_queue_work: work struct=c9c4e264 function=phy_state_machine workqueue=ef004100 req_cpu=4 cpu=0
     kworker/0:0-4472  [000] d...  3334.160291: workqueue_queue_work: work struct=ef6d9d6c function=vmstat_update workqueue=ef0d0200 req_cpu=1 cpu=1
     kworker/0:0-4472  [000] d.s.  3334.332489: workqueue_queue_work: work struct=ef1f15a4 function=fb_flashcursor workqueue=ef004100 req_cpu=4 cpu=0
     kworker/0:0-4472  [000] d.s.  3334.332506: workqueue_queue_work: work struct=c0e4a308 function=do_cache_clean workqueue=ef004100 req_cpu=4 cpu=0
     kworker/0:0-4472  [000] d.s.  3334.751998: workqueue_queue_work: work struct=ef1f15a4 function=fb_flashcursor workqueue=ef004100 req_cpu=4 cpu=0
     kworker/0:0-4472  [000] d...  3336.160418: workqueue_queue_work: work struct=ef6c8d6c function=vmstat_update workqueue=ef0d0200 req_cpu=0 cpu=0
     kworker/0:0-4472  [000] d...  3336.160424: workqueue_queue_work: work struct=ef6d9d6c function=vmstat_update workqueue=ef0d0200 req_cpu=1 cpu=1
     kworker/0:0-4472  [000] d.s.  3338.115289: workqueue_queue_work: work struct=ef1f15a4 function=fb_flashcursor workqueue=ef004100 req_cpu=4 cpu=0
     kworker/0:0-4472  [000] d...  3338.160450: workqueue_queue_work: work struct=ef6c8d6c function=vmstat_update workqueue=ef0d0200 req_cpu=0 cpu=0
     kworker/0:0-4472  [000] d...  3338.160455: workqueue_queue_work: work struct=ef6d9d6c function=vmstat_update workqueue=ef0d0200 req_cpu=1 cpu=1
     kworker/0:0-4472  [000] dns.  3340.006930: workqueue_queue_work: work struct=c9c4e264 function=phy_state_machine workqueue=ef004100 req_cpu=4 cpu=0
     kworker/0:0-4472  [000] dns.  3340.006936: workqueue_queue_work: work struct=ef1f15a4 function=fb_flashcursor workqueue=ef004100 req_cpu=4 cpu=0
     kworker/0:0-4472  [000] d.s.  3340.214584: workqueue_queue_work: work struct=ef1f15a4 function=fb_flashcursor workqueue=ef004100 req_cpu=4 cpu=0
     kworker/0:0-4472  [000] d.s.  3342.198876: workqueue_queue_work: work struct=c0d1c448 function=vmstat_shepherd workqueue=ef004900 req_cpu=4 cpu=0
     kworker/0:0-4472  [000] dns.  3342.733898: workqueue_queue_work: work struct=ef1f15a4 function=fb_flashcursor workqueue=ef004100 req_cpu=4 cpu=0
     kworker/0:0-4472  [000] dns.  3342.941555: workqueue_queue_work: work struct=ef1f15a4 function=fb_flashcursor workqueue=ef004100 req_cpu=4 cpu=0
     kworker/0:0-4472  [000] d.s.  3344.166953: workqueue_queue_work: work struct=c9c4e264 function=phy_state_machine workqueue=ef004100 req_cpu=4 cpu=0
     kworker/0:0-4472  [000] d.s.  3344.166958: workqueue_queue_work: work struct=c0d1c448 function=vmstat_shepherd workqueue=ef004900 req_cpu=4 cpu=0
     kworker/0:0-4472  [000] d.s.  3345.141715: workqueue_queue_work: work struct=c0d1c448 function=vmstat_shepherd workqueue=ef004900 req_cpu=4 cpu=0
     kworker/0:0-4472  [000] d.s.  3346.482925: workqueue_queue_work: work struct=ef373864 function=laundromat_main workqueue=ee054200 req_cpu=4 cpu=4294967295
     kworker/0:0-4472  [000] d.s.  3346.482955: workqueue_queue_work: work struct=c9c2ad40 function=wb_workfn workqueue=ef236800 req_cpu=4 cpu=4294967295
     kworker/0:0-4472  [000] d...  3347.140672: workqueue_queue_work: work struct=ef6c8d6c function=vmstat_update workqueue=ef0d0200 req_cpu=0 cpu=0
     kworker/0:0-4472  [000] d...  3347.140677: workqueue_queue_work: work struct=ef6d9d6c function=vmstat_update workqueue=ef0d0200 req_cpu=1 cpu=1
     kworker/0:0-4472  [000] d...  3350.160736: workqueue_queue_work: work struct=ef6d9d6c function=vmstat_update workqueue=ef0d0200 req_cpu=1 cpu=1
     kworker/0:0-4472  [000] d...  3353.120828: workqueue_queue_work: work struct=ef6d9d6c function=vmstat_update workqueue=ef0d0200 req_cpu=1 cpu=1
     kworker/0:0-4472  [000] d.s.  3354.221555: workqueue_queue_work: work struct=c0dd0f14 function=neigh_periodic_work workqueue=ef004100 req_cpu=4 cpu=0
     kworker/0:0-4472  [000] d.s.  3354.221565: workqueue_queue_work: work struct=c0d1c448 function=vmstat_shepherd workqueue=ef004900 req_cpu=4 cpu=0
     kworker/0:0-4472  [000] d...  3354.221626: workqueue_queue_work: work struct=ef6c8d6c function=vmstat_update workqueue=ef0d0200 req_cpu=0 cpu=0

 

iostat doesn't show any I/O on the disk or the microSD card, memory usage is at ~30%, and it's not swapping.

 

I tried stopping most of the running processes, but that didn't change anything either.

 

It's quite irritating that, with not much CPU power to begin with, performance gets impacted even further (visible in my case in any of the web services I configured again later: OpenMediaVault, ownCloud, Traccar, ...).

Previously I used a similar configuration, but on the much older Bananian Wheezy (still up to date from its repos) with kernel 3.4.x; there the kworkers only took some cycles when I/O was high, without impacting performance that much, especially at idle.


On 1/31/2018 at 7:29 AM, vlad59 said:

So I was beaten to it ;) but I can also confirm that a new install is working fine.

 

I'm updating another Banana Pi that was previously running 5.36 and will report back.

 

About the kworker: the problem was diagnosed on the linux-sunxi mailing list: https://groups.google.com/forum/#!searchin/linux-sunxi/kworker|sort:date/linux-sunxi/bVEsjQRNVSo/nTa3rpOyAQAJ

 

If you really want to get rid of this load you can unload the two modules, but I don't know if you'll still have thermal protection on your A20. I'm tempted to try it as mine does not run too hot.

Thanks. Unloading the sun4i_gpadc_iio kernel module:

sudo rmmod sun4i_gpadc_iio

reduced the CPU usage of the kworker processes in my case from ~21% to a floating 5-12% (with no change in the trace log).

 

sun4i_gpadc is still loaded, and unloading it as well doesn't improve things any further; however, I got a single error about reading the thermal zone in dmesg:

[ 5498.530760] thermal thermal_zone0: failed to read out thermal zone (-110)
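To check whether thermal readings still work after unloading, the thermal zone can be polled via sysfs (a sketch; the zone number may differ on other boards):

cat /sys/class/thermal/thermal_zone0/temp    # temperature in millidegrees Celsius, or an error if reading fails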

 

Edit:

And one more thing: on kernel 4.14.15-sunxi I'm not able to set the CPU frequency above 960 MHz:

$ echo 1008000 | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
1008000
$ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
960000
$

The board has worked fine so far at 1008 MHz, 24/7 for a few years, on 3.4.x.


10 hours ago, xeros said:

 

Edit:

And one more thing: on kernel 4.14.15-sunxi I'm not able to set the CPU frequency above 960 MHz:


$ echo 1008000 | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
1008000
$ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
960000
$

The board has worked fine so far at 1008 MHz, 24/7 for a few years, on 3.4.x.

 

I also noticed that a year ago when I switched to mainline on my Banana; IIRC I talked to tkaiser about it and it was a normal side effect. I did not try to change anything since my Banana Pis stay mostly idle and only use their CPU when doing compression, and that does not happen very often.

 

The max frequency is defined in the device tree, so it can't be raised from userland; you'd have to change the device tree itself.
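A sketch of how to inspect the OPP table in the device tree (the dtb path and file name are assumptions for a Banana Pi on mainline Armbian):

dtc -I dtb -O dts -o bananapi.dts /boot/dtb/sun7i-a20-bananapi.dtb
grep -A 5 operating-points bananapi.dts    # lists the frequency/voltage pairs the kernel will allow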


23 hours ago, vlad59 said:

 

I also noticed that a year ago when I switched to mainline on my Banana; IIRC I talked to tkaiser about it and it was a normal side effect. I did not try to change anything since my Banana Pis stay mostly idle and only use their CPU when doing compression, and that does not happen very often.

 

The max frequency is defined in the device tree, so it can't be raised from userland; you'd have to change the device tree itself.

Thanks, I think I'll have to do something about that, as I use its CPU to the max serving a variety of services to all the devices in my home.


This topic is now closed to further replies.