berturion Posted May 9, 2016

Hello, I have a Cubieboard2 running Armbian with the vanilla kernel, Jessie flavor. This board has a 2-core processor. When I log in, the load average is shown and turns red when it is greater than 1.0. Since there are 2 cores, it should only be colorized when it reaches 2.0. Is it possible to have this feature (taking each board's number of cores into account)? Thank you
tkaiser Posted May 9, 2016

Since there are 2 cores, it should only be colorized when it reaches 2.0.

Absolutely not! The whole load average thing on Linux SBCs is not that closely related to CPU utilisation, mostly irrelevant, and not understood by nearly 100% of users. Displaying it at all is close to a mistake, since it's misleading. Read here the most important sentence in the comment: it's imprecise everywhere, and on Linux it is not related to CPU only. In other words: choose an SD card with slow random I/O performance and your load is always higher; choose a fast one and it will be lower. And with Armbian (SBCs) you very often get into situations where load has absolutely nothing to do with CPU utilisation, since it's just indicating processes stuck on I/O.

If you're a server administrator, then by looking at all three meaningless numbers after logging in you might be able to figure out where to look next and whether you missed the problem or not: http://techblog.netflix.com/2015/11/linux-performance-analysis-in-60s.html

What you want is numbers without meaning, in different colors, done the wrong way. BTW: I'm no fan of this motd approach at all, since it's misleading anyway to show only the current numbers and not the peaks since last login. But since people love meaningless numbers if they're displayed in colors...

EDIT: I know why the average user likes this 'average load' crap: it's just a number that can be misinterpreted as a 'health indicator' or something like that without digging deeper. And when it's colorized it looks even nicer. But this number, especially the 1-minute value on its own, is not helpful at all. Load average might be a nice starting point for understanding Linux performance metrics and why your system behaves sluggishly when you use the wrong SD card (you'll need the sysstat package, then monitor your SBC with iostat while you watch this meaningless load average to get an idea of what's going on).
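For the curious: the numbers motd displays come straight from the kernel, and a quick look at the source (Linux-only, purely illustrative) shows that load counts *tasks*, not CPU time:

```shell
#!/bin/sh
# /proc/loadavg holds the 1m, 5m and 15m averages, then runnable/total
# task counts, then the last PID used. The averages count tasks that are
# runnable OR in uninterruptible (I/O) sleep, which is why slow storage
# alone can push the load up without any CPU usage at all.
read one five fifteen running_total last_pid < /proc/loadavg
echo "1m=$one 5m=$five 15m=$fifteen (runnable/total: $running_total)"
```

Running this on an otherwise idle board with a slow SD card during an `apt-get upgrade` makes the point: the averages climb while actual CPU utilisation stays low.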
berturion Posted May 12, 2016 Author

If we have 2 systems with the same Linux OS, same hardware, same running processes and same SD card, and the only difference is that one has one core and the other two: will the load average be the same?
tkaiser Posted May 12, 2016

Will the load average be the same?

Do I care? NOT AT ALL! Why? Because I dug deeper into the 'load average' concept (simple conclusion: forget about it, it's just a simple indicator of where to look next IF all three load values are displayed, which is NOT currently the case with Armbian) and because I hate fooling myself. If you want to change the color an absolutely irrelevant number is displayed in at login based on something even more irrelevant (the count of CPU cores), you just confirm what it's all about: fooling yourself. You want colors without meaning for numbers without meaning (and you obviously overlooked that load average also reflects I/O bottlenecks). That's all. It would be different if load average on Linux were solely CPU based and if motd displayed the load average peak since last login. Why should the specific load at the moment you log in be relevant at all?
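For completeness, the per-core normalisation the original question asked about is trivial to script. This is a hypothetical sketch of that heuristic (using `nproc` from coreutils), not an endorsement of it; as explained above, a high value may still mean I/O waits rather than CPU pressure:

```shell
#!/bin/sh
# Hypothetical per-core load check: divide the 1-minute average by the
# core count before comparing it against a threshold of 1.0 per core.
cores=$(nproc)
load1=$(cut -d' ' -f1 /proc/loadavg)
# awk handles the floating-point comparison; its exit status signals the result
if awk -v l="$load1" -v c="$cores" 'BEGIN { exit !(l / c > 1.0) }'; then
    echo "load per core above 1.0 (would be shown red)"
else
    echo "load per core at or below 1.0 (would stay green)"
fi
```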
berturion Posted May 13, 2016 Author

I might be wrong, but I think you are a little upset... Sorry if I am responsible for this. I understand what you mean, and I am just trying to find a way to display more relevant information. The aim of my previous question was to find out whether coloring the number red when it reaches 1.0 is relevant or not; if the answer was yes, it was, and if the answer was no, it wasn't. Now, if I understand correctly, coloring red above 1.0 is not relevant because the number of CPUs and cores has an impact (though not the only one), so this number, whatever its value, should stay green or white. I agree that the 2 other numbers should be displayed in order to have somewhat relevant information. So why not simply display those 3 numbers in white? Or not display any of them at all? Or why keep this value and its changing color in the login script if it is totally useless?
Toast Posted May 13, 2016

tkaiser, serious as a heart attack... you really need to lighten up a bit.
tkaiser Posted May 13, 2016

I might be wrong but I think you are a little upset...

Sorry, that's just my usual tirade against numbers without meaning (be it wrong performance metrics, monitoring sources or benchmarks). And I'm mostly angry at myself, since I haven't been able to improve the situation (still WiP: a fully working armbianmonitor-daemon approach that could display data of interest -- peak values from the past -- rather than meaningless stuff like the current 1m load average).

Load average is misunderstood by most people, especially on Linux, where processes stuck in I/O add to the load count. If you use a very slow SD card and do an 'apt-get upgrade' after a long time, the load average can easily exceed 5 or more. And that's mostly not CPU related, since processes are waiting for others to finish. If you choose an SD card with high random I/O performance instead, the load will be way lower and CPU utilisation higher at the same time (less waiting on I/O bound processes). So what does load average tell you on a Linux SBC? Pretty much nothing without further analysis.

Imagine you use Armbian on a server where something goes wrong from time to time (for example a nightly cron job). You log in the next day and motd shows you "0.1 1m avg load". So what does load average at login time tell you? Absolutely nothing, or only 'right now everything seems to be fine'. Since the average user thinks this load thingie correlates with CPU utilisation (not true on SBCs running Linux), and since he likes simple health indicators even when they're wrong, in my opinion displaying the load average at login time is not only a number without meaning but also 100% misleading. As implemented now by Igor, it's just some sort of idle indicator.
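One way to see the I/O component described above is to count tasks in uninterruptible sleep (process state 'D'), since on Linux exactly those tasks are added to the load figure even though they consume no CPU. A minimal check, assuming the procps `ps` is available:

```shell
#!/bin/sh
# Tasks in state 'D' (uninterruptible sleep, usually waiting on storage
# or network I/O) inflate the Linux load average without using any CPU.
d_tasks=$(ps -eo stat= | grep -c '^D')
echo "tasks currently stuck in uninterruptible sleep: $d_tasks"
# A load spike together with a non-zero D count points at I/O, not CPU.
```

On a board with a slow SD card, running this during heavy package installs will typically show a non-zero count exactly while the load average climbs.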
If it's green, your system is more or less idle RIGHT NOW, but if it's above 1 and displayed red you would need to investigate further, which is also absolutely useless since there's nothing wrong with a load above 1 from time to time (it only displays the current value!). Only real monitoring would help, and that would mean: NOT relying on load average, since you never know whether higher values are CPU or I/O bound; querying data sources like /proc/stat every few seconds/minutes to feed round robin databases; and querying these databases at login to display values of interest (e.g. peak CPU utilisation within the last 24 hours, so you get the idea to look into RPi-Monitor graphs). In its current implementation, the load average display via motd is just a very nice way to fool yourself, given the way this value is misinterpreted by 99.99% of users (I do server administration and it's scary how many Linux administrators don't know how load average is calculated on Linux).
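The /proc/stat sampling mentioned above can be sketched in a few lines. This hypothetical sampler takes two readings one second apart and prints the busy and iowait shares in between, which is roughly the kind of value a monitoring daemon would feed into its round robin database (busy here means user+nice+system; irq/softirq/steal are omitted for brevity):

```shell
#!/bin/sh
# Sample the aggregate 'cpu' line of /proc/stat twice and compute the
# percentage of time spent busy vs. waiting for I/O between the samples.
# Fields on that line: user nice system idle iowait irq softirq steal ...
snap() { awk '/^cpu / { print $2+$3+$4, $5, $6 }' /proc/stat; }
set -- $(snap); b1=$1 i1=$2 w1=$3
sleep 1
set -- $(snap); b2=$1 i2=$2 w2=$3
total=$(( (b2 - b1) + (i2 - i1) + (w2 - w1) ))
[ "$total" -gt 0 ] || total=1   # avoid division by zero on a quiet tick
echo "busy:   $(( 100 * (b2 - b1) / total ))%"
echo "iowait: $(( 100 * (w2 - w1) / total ))%"
```

A high iowait share alongside a high load average tells you storage is the bottleneck; a high busy share tells you it really is the CPU, which is the distinction the bare load number hides.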
berturion Posted May 14, 2016 Author

At least my thread will teach Armbian forum readers how load average is calculated. Me first. I am looking forward to trying your armbianmonitor-daemon.