Posts posted by tkaiser

  1. 1 hour ago, JMCC said:

    I find it hard to imagine a system administrator pulling an OPi Zero out of his pocket in front of the corporate board and saying "Hey, this is going to be our new support for the crucial data. Don't worry, it is going to run Armbian out of a Sandisk Extreme Pro, which has been proven to be the most reliable SD card in many forum posts".


Who's talking about system administrators? It's about Armbian users who are now encouraged to rely on ZFS, based on the proposals in the top post of this thread and e.g. the support for ZFS in Armbian's bloated motd output.


    Are these users aware of the two main problems putting their ZFS pools or the data inside at risk?

    • Are they aware of the 'correct write barrier semantics' issue, especially with crappy storage hardware, which is definitely more the rule than the exception with SBCs (some random USB-to-SATA bridge combined with freezes or power losses might be a recipe for disaster with both ZFS and btrfs)?
    • Are they aware that if they install ZFS via DKMS they might end up with something like this: a kernel update gets rushed out for certain boards, building the zfs or spl modules fails in certain combinations, and affected users only realize after the next reboot (when the kernel update takes effect) that their pools are unavailable. How much testing does each and every kernel update receive? You know the answer just like me. And it's easy to understand the difference to Ubuntu or Debian on amd64, where there are only a few kernel variants that can be tested thoroughly with reasonable effort.
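A sanity check along these lines could catch the broken-module situation before rebooting. This is a sketch only: the `dkms status` output parsing is an assumption (the format differs between dkms versions), so treat it as an illustration, not a drop-in script.

```shell
#!/bin/sh
# Sketch: before rebooting into a freshly installed kernel, check whether
# DKMS actually managed to build the zfs module for it. The output parsing
# is an assumption -- `dkms status` formats differ between dkms versions.
zfs_module_ready() {
    # $1: kernel version; stdin: output of `dkms status zfs`
    grep -q "$1.*installed"
}

new_kernel="$(uname -r)"   # replace with the version of the kernel just installed
if dkms status zfs 2>/dev/null | zfs_module_ready "$new_kernel"; then
    echo "zfs module built for $new_kernel, rebooting should be safe"
else
    echo "WARNING: no zfs module for $new_kernel -- pool may be unavailable after reboot" >&2
fi
```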

    And then there's another issue called 'confusing redundancy with backup' which affects an awful lot of home users. They think that if they do some sort of RAID their data would be safe. In situations where crappy storage hardware with old filesystems just results in some unnoticed silent data corruption, advanced approaches like ZFS or btrfs can result in the whole pool/filesystem becoming inaccessible. While professionals then restore from backup, the average home user will just cry.


    If users are encouraged to rely on a certain FS implementation based on FUD (data losses with btrfs) or theoretical superiority ('the guys at Lawrence Livermore National Laboratory', for example) that's a bit unfair, since they're affected by the practical limitations of the hardware Armbian is mostly used on and the practical limitations of another group of guys. Btrfs on the other hand is built right into the kernel and receives a lot more testing than the ZFS situation at Armbian allows for (a project already dealing with 30+ kernel branches and constantly complaining for years about having no resources).


    BTW: thank you, I edited my post above and replaced ARM with Armbian. 

  2. 17 hours ago, TRS-80 said:

    I just read way too many reports of data loss on btrfs


    Guess why you hear so often about data loss with btrfs?


    Three main reasons:


    1) Shooting the messenger: putting btrfs on top of hardware RAID, mdraid or LVM is as 'great' an idea as doing the same with ZFS, but 'Linux experts' and NAS vendors did it and still do it. Data corruption at the mdraid layer results in a broken btrfs on top, and guess who's blamed then? https://github.com/openmediavault/openmediavault/issues/101#issuecomment-473920806


    2) btrfs lives inside the kernel and as such you need a recent kernel version to escape old and well-known bugs. Now look at the kernel versions in popular distros like Debian and you get the problem. With ZFS that's different: most recent ZoL versions still run well with horribly outdated kernels. Hobbyists look into the btrfs wiki, see that a certain issue has been fixed recently, so they're gonna use it, forgetting that the 'everything outdated as hell' distro (AKA Debian) they're using is still on an ancient 4.9 kernel or even worse.


    3) Choice of hardware. ZFS is most of the time used on more reliable hardware than btrfs, and that's mostly due to some BS spread by selected FreeNAS/TrueNAS forum members who created the two urban myths that 'running ZFS without ECC RAM will kill your data' (nope) and 'you'll need at least 8GB of RAM for ZFS' (also BS, I'm running lots of VMs with ZFS with as little as 1GB). While both myths are technically irrelevant, a lot of people believe in them and therefore invest in better hardware. Check which mainboards feature ECC memory and you'll realize that exactly those mainboards are the ones with proper onboard storage controllers. If you believe you need at least 8 GB of RAM this also rules out a lot of crappy hardware (like e.g. the vast majority of SBCs).


    The main technical challenge for the majority of modern filesystem attempts (even for journaled ext4) is correct write-barrier semantics (more on this in the GitHub issue referenced above). Without them, both ZFS and btrfs will fail in the same way. So using an SBC with a flaky USB host controller combined with a crappy USB-to-SATA controller in an external enclosure is a recipe for disaster with both ZFS and btrfs. ZFS is more mature than btrfs, but guess what happened when ZFS was rather new over a decade ago: people lost their pools due to broken write-barrier semantics caused by erratic HBAs and drive cache behavior (reporting back 'yeah, I've written the sector to spinning rust' while keeping the data in some internal cache, and after a power loss the pool was gone).


    The last time I wasted time with ZFS on ARM it was a true disaster. Over a year ago I tried to run several ODROID HC2 as Znapzend targets for a bunch of fileservers but always ran into stability issues (freezes) after a couple of days/weeks. I switched to btrfs just to check whether all the HC2 might have a hardware problem; a year later I realized by accident that the HC2s in question all had an uptime of 300 days or more (I stopped doing kernel updates on isolated Armbian machines since Igor already bricked way too many of my boards over the past years).


    Do I prefer btrfs over ZFS? Nope. Being a real ZFS fanboi, almost all the storage implementations here and at customers are based on ZFS. On x86 hardware though, since there I'm not only relying on the OpenZFS team but also on the folks at Canonical who care about 'ZFS correctly built and fully working with the kernel in question' (on Debian installations by pulling in the Proxmox kernel, which is the latest Ubuntu kernel with additional tweaks/fixes).


    And that's the main difference compared to the situation with Armbian. There is no such team that takes care that you can still access your ZFS pools flawlessly after the next kernel update. Which is the reason why the choice of filesystem (btrfs or ZFS) at my place depends on x86 (ZFS) vs. ARM (btrfs). And that's not because of the CPU architecture but solely because on x86 I can rely on professionals who take care of the updates they roll out, vs. the situation on ARM with a team of hobbyists who try harder and harder to provide OS images for as many different SBCs as possible, so it's absolutely impossible to properly test/support all of them. Playground mode.

  3. On 9/22/2020 at 6:34 AM, Werner said:

    Also curious about the USB3 performance. 


    The BPi M5 is just an ODROID C4 clone (check also the serial console output in the wall of text above) so simply look there. They even took the same VL817 hub (a way better choice than the Genesis Logic thingy on the N2+) but of course decided to change the power circuitry, so their users suffer from the usual Banana undervoltage drama (unlike the M5, the C4 is up to the task of powering a number of USB3 consumers with stable 5V).

  4. 18 hours ago, vzoltan said:

    do you say this should work (i.e. all CPU cores handling interrupts) and for some reason buggy with Armbian's N2 distro?


    The reason is that nobody at Armbian cares any more about such low level stuff. The string 'meson-g12b' (N2's board family) is missing from the case construct in https://github.com/armbian/build/blob/master/packages/bsp/common/usr/lib/armbian/armbian-hardware-optimization so what you see is what is to be expected: all IRQ handling ends up on cpu0, which is a nice bottleneck.


    Some people think irqbalance would help, but at least in the past it was common knowledge that for stuff like storage or networking static IRQ affinity is the way to go.
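For illustration, static pinning boils down to writing a CPU bitmask into `/proc/irq/<nr>/smp_affinity`. This is a sketch under assumptions: 'eth0' and the cpu2 mask are example values, so check `/proc/interrupts` on your own board first.

```shell
#!/bin/sh
# Sketch: pin a network controller's IRQ to cpu2 instead of leaving all
# interrupt handling on cpu0. 'eth0' and the mask are example values.
find_irq() {
    # $1: device name; stdin: contents of /proc/interrupts
    awk -F: -v dev="$1" '$0 ~ dev { gsub(/ /, "", $1); print $1; exit }'
}

irq=$(find_irq eth0 < /proc/interrupts)
if [ -n "$irq" ] && [ -w "/proc/irq/$irq/smp_affinity" ]; then
    echo 4 > "/proc/irq/$irq/smp_affinity"   # bitmask 0x4 = cpu2
    echo "eth0 IRQ $irq now handled by cpu2"
else
    echo "eth0 IRQ not found or not writable (root required)"
fi
```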


    BTW: You have massive filesystem corruption on the /var/log partition. As for your storage issues a simple web search for 'odroid n2 usb issues' might help.

  5. 1 hour ago, Igor said:

    I don't recall why I did that


    Well, that's why commit messages exist. The rockchip64 kernel has the following cpufreq OPPs: 408000 600000 816000 1008000 1200000 1296000


    So setting 600 MHz didn't do a lot other than cause confusion. A third of this thread's posts deal with cpufreq governor confusion, wrongly assuming that the SoC sitting at 600 MHz would be the root cause of the thermal anomalies the R2S is plagued with.
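For anyone wanting to check what's actually configured on their board, the standard cpufreq sysfs nodes tell the story. A sketch (the path is the standard cpufreq interface; the frequencies named in the comment are the rockchip64 OPPs from above):

```shell
#!/bin/sh
# Sketch: inspect the configured cpufreq limits. The sysfs layout is the
# standard cpufreq interface; frequencies are reported in kHz.
khz_to_mhz() { echo $(( $1 / 1000 )); }

policy=/sys/devices/system/cpu/cpu0/cpufreq
if [ -r "$policy/scaling_min_freq" ]; then
    cat "$policy/scaling_available_frequencies"   # e.g. 408000 600000 ... 1296000
    echo "min: $(khz_to_mhz "$(cat "$policy/scaling_min_freq")") MHz"
    echo "max: $(khz_to_mhz "$(cat "$policy/scaling_max_freq")") MHz"
else
    echo "no cpufreq support available here"
fi
```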


    Anyway, this whole thread is bizarre. Why do users not simply verify the numbers some driver spits out? Why blindly trust the numbers? Those users having the hardware right in front of them could've tested long ago whether the thermal readouts or the hardware are BS, simply by putting a thumb on the heatsink. 

  6. 37 minutes ago, @lex said:

    Practical experiments here indicated that the performance governor and 600 MHz heated up the board very soon in idle mode




    There's no difference between 408 MHz and 600 MHz since the DVFS OPPs for both are pretty low.


    That's RK3328 powered Renegade idling at 408 MHz:

    root@renegade:/home/tk# armbianmonitor -m
    Stop monitoring using [ctrl]-[c]
    Time        CPU    load %cpu %sys %usr %nice %io %irq   CPU  C.St.
    12:47:02: 1296MHz  0.19   1%   1%   0%   0%   0%   0% 45.5°C  0/5
    12:47:07:  408MHz  0.17   2%   1%   0%   0%   0%   0% 43.6°C  0/5
    12:47:12:  408MHz  0.16   1%   0%   0%   0%   0%   0% 45.0°C  0/5
    12:47:17:  408MHz  0.15   1%   0%   0%   0%   0%   0% 44.1°C  0/5
    12:47:23:  408MHz  0.13   1%   0%   0%   0%   0%   0% 42.7°C  0/5
    12:47:28:  408MHz  0.12   1%   0%   0%   0%   0%   0% 44.5°C  0/5
    12:47:33: 1296MHz  0.11   1%   1%   0%   0%   0%   0% 46.4°C  0/5^C

    That's the exact same hardware with the exact same load after setting the minimum cpufreq to 600 MHz (no idea why @Igor made this change back in November though; the 600 MHz are not the result of the cpufreq governor doing anything but of the project's owner committing some changes for whatever reason):

    root@renegade:/home/tk# armbianmonitor -m
    Stop monitoring using [ctrl]-[c]
    Time        CPU    load %cpu %sys %usr %nice %io %irq   CPU  C.St.
    13:42:02: 1296MHz  0.00   2%   0%   1%   0%   0%   0% 43.6°C  0/5
    13:42:07:  600MHz  0.00   0%   0%   0%   0%   0%   0% 44.5°C  0/5
    13:42:12: 1296MHz  0.00   2%   1%   0%   0%   0%   0% 44.5°C  0/5
    13:42:17:  600MHz  0.00   0%   0%   0%   0%   0%   0% 43.2°C  0/5
    13:42:23:  600MHz  0.00   1%   0%   0%   0%   0%   0% 43.6°C  0/5
    13:42:28:  600MHz  0.00   1%   0%   0%   0%   0%   0% 43.6°C  0/5
    13:42:33:  600MHz  0.00   0%   0%   0%   0%   0%   0% 45.0°C  0/5
    13:42:38:  600MHz  0.00   0%   0%   0%   0%   0%   0% 44.1°C  0/5^C

    And now compare with @devman's results above using the R2S without the yellow plastic oven, which are still close to 15°C above Renegade/Rock64. So it's obviously not an RK3328 problem users are talking about here!


    Wrt the ondemand governor: this governor has some tunables that need to be set accordingly depending on kernel version. But since nobody in Armbian gives a sh*t about such low level stuff, it's as it is. This most probably would need some attention: https://github.com/armbian/build/blob/master/packages/bsp/common/usr/lib/armbian/armbian-hardware-optimization#L81-L83 
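What 'set accordingly' means in practice looks roughly like this. A sketch only: `io_is_busy`, `up_threshold` and `sampling_down_factor` are the standard ondemand sysfs tunables, the values are examples, and the path differs on older kernels, so this illustrates the idea rather than reproducing the linked script.

```shell
#!/bin/sh
# Sketch: typical ondemand tuning for SBC storage/network workloads.
# Tunable names are the standard ondemand sysfs knobs; values are examples.
od=/sys/devices/system/cpu/cpufreq/ondemand
if [ -d "$od" ]; then
    echo 1  > "$od/io_is_busy"             # treat iowait as busy so I/O loads clock up
    echo 25 > "$od/up_threshold"           # ramp up earlier than the default
    echo 10 > "$od/sampling_down_factor"   # stay at higher clockspeeds a bit longer
    echo "ondemand tunables adjusted"
else
    echo "ondemand tunables not exposed (different governor or kernel version)"
fi
```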


    The last time I asked for testers I got zero useful responses:



  7. 25 minutes ago, guidol said:

    that hot as the Rockchip-CPUs.


    Are you kidding? Again: 


    There's no problem with the RK3328, this is just another boring quad-core A53 in 28nm (just like the H6). So the question remains what's wrong with the NanoPi R2S and not with the RK SoCs.


    And I really don't understand why so many blindly trust in numbers. There's a thermal sensor inside the SoC, there's a reference voltage, there's some calibration needed, there's driver code. The idea that the numbers this driver spits out are somewhat or even closely related to the actual temperature of the SoC in question is just a hope!


    Care to remember what you yourself already reported?


  8. On 6/13/2020 at 5:50 AM, devman said:

    Idle temps drop by ~10c without the enclosure. No issues with connections between the heatsink and the cpu.


    So there's something seriously wrong with this hardware or the kernel code. 60°C is something noticeable by 'thumb test'. Does it hurt if you press your thumb on the heatsink while Armbian reports an SoC temperature above 55°C?


    Have you done a test with an image using Rockchip's BSP kernel (4.4)? Just to compare which temperatures are reported there?
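A quick way to cross-check the driver's claims is reading the raw sysfs values directly. A sketch (mainline kernels report millidegrees Celsius via the standard thermal zone nodes; boards with broken drivers may of course report BS here too, which is the whole point of also doing the thumb test):

```shell
#!/bin/sh
# Sketch: dump all thermal zone readouts. Mainline kernels report
# temperatures in millidegrees Celsius through this standard interface.
milli_to_deg() { echo $(( $1 / 1000 )); }

found=0
for tz in /sys/class/thermal/thermal_zone*/temp; do
    [ -r "$tz" ] || continue
    found=1
    echo "$tz: $(milli_to_deg "$(cat "$tz")")°C"
done
[ "$found" = 1 ] || echo "no thermal zones exposed here"
```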


    @lex Why do you think a reported clockspeed of 600 MHz in idle would be an indication of a wrong governor? Igor defined 600 MHz as the minimum clockspeed half a year ago for whatever reason, but there shouldn't be much of a difference between the 408 MHz before and the 600 MHz now.

  9. On 6/10/2020 at 9:23 PM, Igor said:

    This chip is powerful and it runs hot, in a small box its just worse. Nothing is wrong with your device, nor software ... a fan or clocking it down by default is the only solution.


    This is a Renegade featuring the same 'powerful and hot running' RK3328 with large passive heatsink without enclosure:

     ____                                 _      
    |  _ \ ___ _ __   ___  __ _  __ _  __| | ___ 
    | |_) / _ \ '_ \ / _ \/ _` |/ _` |/ _` |/ _ \
    |  _ <  __/ | | |  __/ (_| | (_| | (_| |  __/
    |_| \_\___|_| |_|\___|\__, |\__,_|\__,_|\___|
    Welcome to Armbian buster with Linux 5.4.8-rockchip64
    System load:   0.24 0.05 0.02  	Up time:       41 days		
    Memory usage:  12 % of 1986MB 	Zram usage:    7 % of 993Mb  	IP:  
    CPU temp:      46°C           	
    Usage of /:    40% of 7.3G   	storage/:      16% of 7.3T   	
    Last login: Sun May 31 12:39:32 2020 from
    tk@renegade:~$ sudo -s
    [sudo] password for tk: 
    root@renegade:/home/tk# armbianmonitor -m
    Stop monitoring using [ctrl]-[c]
    Time        CPU    load %cpu %sys %usr %nice %io %irq   CPU  C.St.
    07:31:13: 1296MHz  0.20   0%   0%   0%   0%   0%   0% 45.9°C  0/6
    07:31:18:  408MHz  0.19   0%   0%   0%   0%   0%   0% 42.7°C  0/6
    07:31:23:  408MHz  0.17   2%   1%   1%   0%   0%   0% 39.5°C  0/6
    07:31:29:  408MHz  0.16   1%   0%   0%   0%   0%   0% 44.1°C  0/6
    07:31:34:  408MHz  0.14   1%   0%   0%   0%   0%   0% 42.3°C  0/6
    07:31:39:  408MHz  0.13   1%   0%   0%   0%   0%   0% 41.8°C  0/6
    07:31:44:  408MHz  0.12   1%   0%   0%   0%   0%   0% 43.2°C  0/6
    07:31:49: 1296MHz  0.33   2%   1%   0%   0%   0%   0% 47.3°C  0/6
    07:31:55:  408MHz  0.30   1%   0%   0%   0%   0%   0% 41.8°C  0/6^C

    Rock64 is based on the very same RK3328 SoC, my board has just a laughably tiny heatsink on the SoC, and all of the 8 sbc-bench results collected here https://github.com/ThomasKaiser/sbc-bench/blob/master/Results.md were done without active cooling. Even with the most demanding cpuminer benchmark the SoC temperature was never reported above 70°C and running cpuminer at 1.4GHz was possible without any throttling.


    This is NanoPi R2S using the very same RK3328 doing exactly nothing at all at around or above 70°C: https://tech.scargill.net/nanopi-r2s-openwrt-minirouter/


    If idle temperatures of a NanoPi R2S without that little yellow plastic oven are above 50°C then there's clearly something wrong (maybe something as trivial as a wrong supply voltage resulting in the thermal sensor reporting BS, just as we saw on Orange Pi Zero). Utilizing FriendlyELEC's little yellow plastic oven of course will result in insane temperatures.

  10. On 3/7/2019 at 8:40 AM, Tido said:

    Thank you for the flowers, but I have never deleted a post from you.  Why do you think so?

    Maybe you were drunk at the time and can't remember? https://github.com/armbian/build/commit/7f1c5b19cd58f100218c0175f9e81df1b5276003#commitcomment-33848416


    Moving posts to threads that are inaccessible is the same as deletion (but maybe you're not able to get this). Apropos deletion: aside from the fact that you have not the slightest idea what you're babbling about here (totally missing the context), you claim 'it is written in the internet or in one of TKs posts, ahh wait, he deleted everything here'. Care to elaborate what you mean? Here is the list of my 5432 posts so far: https://forum.armbian.com/profile/7-tkaiser/content/ -- you almost singlehandedly stopped me from posting more in this forum, since it makes absolutely no sense to post in a place where a dumbass with moderator privileges deletes posts (and either doesn't get what he does or simply lies).


    In case you want to censor again be aware that this post is already archived: https://archive.fo/8eMcV

  11. 23 hours ago, lanefu said:

    I feel like you missed the part where I said that netdata is awesome


    Nope. Netdata is awesome. All I tried to explain is why 'armbianmonitor -r' was an attempt to generate insights into SBC behavior 3 years ago and why netdata is not sufficient for this purpose. Once you look at the results, the data collection approach completely changes system behavior --> useless for this use case.


    21 minutes ago, ktsaou said:

    If you need help to configure netdata properly for weak IoT devices, I would be glad to help


    IMO you should take care of cpufreq scaling on this class of devices, and if netdata is supposed to generate insights and not just fancy graphs you might want to explore EAS.

  12. On 2/24/2019 at 6:36 AM, esbeeb said:

    @tkaiser, you're technical expertise is clearly very well developed, and is an asset to this community, but some diplomacy and gentle speech will go a long way towards everyone getting along on this forum


    Oh, "this forum"...


    This forum is pretty much irrelevant for what's important. I pushed content into this forum for over 3 consecutive years trying to attract outside readers/developers to this content, to get them interested in Armbian and gain broader adoption and relevance.


    My goal was to strengthen a small project (back in 2015) to become relevant, since what I need is a stable OS distribution on ARM (I'm a server guy, I'm not interested in fancy shit but in stable operation). Unfortunately to no avail. In theory both fancy shit and stable operation are possible at the same time, but that's not how it works here.


    Armbian is still in playground mode. And it won't change anytime soon or at all. If the 'project lead' now even thinks about sabotaging Debian's packaging, all is lost. There are no 'checks and balances' in place as in serious software projects if one person can simply decide to do whatever crazy idea strikes his head.


    It's a problem of ignorance and you can't argue against it if the affected person simply doesn't give a shit. Look at 

    Countless times developers tried to escalate those old and boring problems in a polite way. What happened? It got ignored. In the end this is a single person's project the way it's set up, since while all contributing developers always tried to achieve consensus and conform to (non-existent) rules, Igor simply does what he thinks is the best idea at the very moment. While complaining about being overwhelmed he even invests time in making things worse (see the absolutely useless recent efforts to change kernel versions for the XU4 platform). I'm tired of cleaning up since I can spend my time on more important things.


    It's not about which OS base to choose but about understanding that a project needs rules and defined goals, at least if it wants to leave the playground area and become the basis for 'stable operation'. Unfortunately this is not possible with Armbian. After wasting several days of my life on discussions here with always the same result (Igor doing what he wants to do without communication or even feeling bound to a 'consensus' reached before), it's time to stop.


    @Tido move my post to the bin as usual!

  13. On 3/5/2019 at 9:51 PM, NicoD said:

    What do you think about the rest?


    I've a background in graphic design so I'm pretty sure you don't want to hear the answer. Small hint: it reminds of the 'golden age' of DTP 30 years ago.


    3 hours ago, NicoD said:

    It's up to Igor to decide


    This pretty much sums up what Armbian is.

  14. 17 minutes ago, dispo said:

    it felt like the version info in the banner was useful info when connecting to a box


    Well, why provide correct information if the main goal is just to print some fancy stuff on the screen? The whole motd (login greeting) stuff has been broken forever: in the past it delayed login by insane amounts of time, now it simply displays wrong information, but as usual one person doesn't give a shit: https://github.com/armbian/build/pull/1129 (how to deal with years of ignorance? Better to stay away from such a waste of time).

  15. 2 hours ago, lanefu said:

    I was wondering if there would be interest in replacing the armbianmonitor -r rpimontor  with it


    Only if you love monitoring mistake N°1: monitoring so heavy that it affects the way your system behaves. The purpose of RPi-Monitor is to explore system behavior, e.g. adjusting tunables to get sufficient ondemand governor behavior (which is broken on several platforms now, but literally no one cares since all remaining devs are busy adding new devices and fancy features). If your monitoring is so heavy that your system will constantly clock at the upper speeds, how would you be able to draw reasonable conclusions?


    Just like you should benchmark every benchmark you're using, you should monitor your monitoring solution of choice (quite simple with armbianmonitor -m). Checking out netdata 3 years ago led to the above conclusions when testing on weak SBCs.


    Also, SBC stuff like CPU temperature and cpufreq scaling is missing. Netdata will show you CPU utilization only, since it's meant for servers that run at their highest clockspeeds all the time. Which SBC is more busy: the one reporting 10% CPU utilization while clocking at 1200 MHz or the one reporting 20% while remaining at 480 MHz? Netdata's output is useless on systems with cpufreq scaling. It's only great for servers and for operators who know what they're doing. As such it should never be too easy to install it.
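The point can be made with trivial arithmetic: scale the reported utilization by the ratio of current to maximum clockspeed. A sketch with the example numbers from above (1296 MHz taken as the maximum OPP, integer math for simplicity):

```shell
#!/bin/sh
# Sketch: 'effective' utilization = reported utilization scaled by the
# ratio of current to maximum clockspeed (integer math, example numbers).
effective_util() {
    # $1: reported utilization %, $2: current MHz, $3: max MHz
    echo $(( $1 * $2 / $3 ))
}

echo "10% at 1200 MHz -> $(effective_util 10 1200 1296)% of max capacity"
echo "20% at  480 MHz -> $(effective_util 20 480 1296)% of max capacity"
```

So the board reporting the lower utilization percentage is actually the busier one, which is exactly what a utilization-only graph hides.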


    For those interested: you can play around with it on ODROID bench: https://forum.odroid.com/viewtopic.php?f=29&t=32257#p246987 (please keep in mind that the four instances are S922X installations and this SoC is as capable as Intel Atom designs -- far more capable than the average SBC Armbian supports)

  16. 3 hours ago, Stuart Naylor said:

    Chrome OS & Android use a single device and if you think about it

    Why do you ignore the answers you get? I explained that Armbian is not supposed to run on only one device but on many. And some of them still run with kernel 3.4. That's why we use up to 4 zram devices: more than one zram device is needed on old kernels and doesn't matter on newer kernels (tested various times, simply search the forum for zram).

  17. 3 hours ago, Stuart Naylor said:

     I have to ask if a zram device can accept multiple streams then why are we creating a device per core?

    You miss that Armbian still supports devices running a 3.4 kernel. And obviously you also missed all of the research already done (use the forum search) and how Armbian's implementation currently works (for example, choosing the most efficient algorithm automagically depending on kernel capabilities for the log2ram partition)
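For reference, a multi-device setup on such an old kernel boils down to something like this. A sketch only: the device count and sizes are examples, and Armbian's actual script does considerably more (algorithm selection, priority handling, etc.).

```shell
#!/bin/sh
# Sketch: create one zram swap device per CPU core, as needed on old 3.x
# kernels where a single device meant a single compression stream.
devices=4        # example: a quad-core board
size=256M        # example size per device

modprobe zram num_devices="$devices" 2>/dev/null
i=0
while [ "$i" -lt "$devices" ]; do
    if [ -b "/dev/zram$i" ]; then
        echo "$size" > "/sys/block/zram$i/disksize"
        mkswap "/dev/zram$i" >/dev/null && swapon -p 5 "/dev/zram$i"
    fi
    i=$(( i + 1 ))
done
swapon --summary 2>/dev/null || true
```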

  18. 2 hours ago, NicoD said:

    I don't believe in benchmaking, I've showed most/all benchmarking is off. But still I need to do it, and I hate it.


    Benchmarking is great! You simply fire up a bunch of tests in an uncontrolled manner without monitoring what's happening, then generate bar charts out of the generated numbers, and can then show that your device is the fastest SBC around (even faster than those expensive NVIDIA Jetson thingies!!1!1!!):




    (full results)


    Only problem: the bar charts above create the impression that ODROID-N1 would be faster than the N2, which is something Hardkernel clearly wants to avoid. Last year they cancelled their RK3399 based N1 for three reasons (two of which their customers don't want to hear), so they needed to choose a bunch of benchmarks where the N2 looks like an improvement over the N1. This pretty much explains why they only chose multi-threaded CPU benchmarks, since with single-core stuff RK3399 is usually as fast as S922X or even slightly faster. This whole CPU benchmarking crap boils down to exactly this: S922X wins over RK3399 with multi-threaded loads while single-core stuff is usually faster on RK3399. Whether this is important or not always depends on the use case. The majority of users staring at CPU benchmark charts simply don't understand that the relevant stuff happens somewhere else than in stupid multi-core '100% CPU utilization' tests.


    So how to benchmark properly: if you're coming from the developer/researcher perspective then you need Active Benchmarking. All this kitchen-sink stuff is pretty useless. And then you don't benchmark to show that product A is faster than B, but to get B as fast as A or even faster.
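The 'active' part can be sketched in a few lines: log clockspeed and temperature *while* the workload runs instead of only keeping the final score. The paths are the standard sysfs locations and the dd/sha256sum pipe is just a stand-in workload, so treat this as an illustration of the method, not a benchmark.

```shell
#!/bin/sh
# Sketch: sample clockspeed/temperature in the background while a workload
# runs. Paths are standard sysfs locations; the workload is a stand-in.
log=$(mktemp)
( i=0
  while [ "$i" -lt 3 ]; do
      freq=$(cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq 2>/dev/null || echo n/a)
      temp=$(cat /sys/class/thermal/thermal_zone0/temp 2>/dev/null || echo n/a)
      echo "$(date +%T) freq=$freq temp=$temp" >> "$log"
      i=$(( i + 1 ))
      sleep 1
  done ) &
mon=$!
dd if=/dev/zero bs=1M count=64 2>/dev/null | sha256sum >/dev/null   # example workload
wait "$mon"
cat "$log"
```

If the log shows throttling or the clock never reaching the maximum OPP, the final 'score' measured something else entirely.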


    From a user or consumer point of view it's always 'use case first'. Let's take your 'Blender' test here, since this is also a real use case you're interested in (rendering stuff on slow SBCs for whatever reason). I mentioned that Blender uses SIMD extensions only on x86 for a reason: to illustrate that if you're not a developer able to code and familiar with NEON on ARM, you might be better off looking at x86 instead. The Gemini Lake thing on ODROID H2 for example is not equipped with Intel's latest and greatest extensions like AVX but at least fully supports SSE2, so maybe @rooted is so kind as to provide you with Blender numbers from the H2? Maybe then your excitement for the N2 is already gone and you're a future H2 buyer?


    Your stuff (a rather special application making use of special instructions) is an exception, so how should benchmarks that test entirely different things show what's going on? You can't benchmark appropriately without knowledge and without being focused on the use case. Otherwise you're just collecting meaningless numbers all the time.


    Here I won't comment on why further contributing to Armbian (or using this forum) doesn't make much sense for me, but you should be aware that https://www.cnx-software.com is a great source of knowledge (especially the comments section, where insiders share details and experts explain so much stuff, like how/why A72 and A73 differ and so on)




  19. 1 hour ago, Igor said:

    If installing packages from https://packages.debian.org/buster/proftpd-basic (+ this app dependencies) solves this problem, we can put them to our repository.




    2 hours ago, esbeeb said:

    Note the current version in Armbian is still 1.3.5b.


    Nope, Armbian has no such package. Armbian is just a build system producing OS images based on some Debian or Ubuntu variants, even if @Igor constantly tries to hide this by advertising Armbian as 'the best OS for SBC'. You're dealing with plain Debian here and if you found a bug you should report it upstream there. If you want a more recent Debian package version you should search the backports repo first, and if there's nothing there, try apt pinning.
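Such a pin could look like this. A sketch under assumptions: the package and release names follow the proftpd example above and the priority value is a typical choice; the stanza is written to a temp file here just for illustration, the real location is /etc/apt/preferences.d/.

```shell
#!/bin/sh
# Sketch: an apt preferences stanza preferring proftpd-basic from
# buster-backports. Written to a temp file for illustration; the real
# location would be /etc/apt/preferences.d/.
pin_file=$(mktemp)
cat > "$pin_file" <<'EOF'
Package: proftpd-basic
Pin: release a=buster-backports
Pin-Priority: 500
EOF
cat "$pin_file"
echo "# then: apt-get install -t buster-backports proftpd-basic"
```

This way only the one package comes from backports instead of mixing whole suites, which is exactly the kind of repo hygiene mixing Buster packages into a Stretch repo destroys.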


    @Igor undermining the stability of the package system in general by pulling Buster packages into a Stretch repo is... I lack words... I mean, what's the reason to rely on Debian's package system? What's the reason to rely on 'everything outdated as hell' AKA Debian in the first place? Isn't it the promise of 'stable' and the benefits of relying on a strong team of maintainers? It really looks like Armbian is still moving in a direction where anyone interested in STABLE operation is lost :( 

  20. 13 hours ago, NicoD said:

    The better result of the RockPi4B could be because of this. So then the default CONFIG_HZ could have changed between 2 - 4 months for Bionic(guess).
    I'll know when I'm finished with the RockPi4B


    No, you will NOT know when you're finished firing up your next round of passive benchmarking tests spitting out some numbers. You need to know what's going on to generate answers to the question 'why is A faster than B' and also 'what is the limiter on A and why can't A be twice as fast'? You missed this change and attributed faster RockPi scores to DDR4 vs. DDR3 memory while in reality you compared an old Armbian kernel config with a newer one. This whole passive benchmarking approach (and IMO 'technical' Youtube videos in general) only contributes to confusion and generates zero insights.


    The general rule of passive benchmarking, and what in fact happened here, is: you benchmark A, but actually measure B, and conclude you've measured C (what's needed instead is Active Benchmarking)


    What matters are insights, settings and software. And this not only applies to RK3399 but of course to the N2/S922X too. So with your use case that utilizes all CPU cores in parallel you surely should go with S922X (since it's definitely faster than RK3399 with everything that's multi-threaded), then use a kernel with CONFIG_HZ=100 and an optimized software stack. With Gentoo for example, a most recent GCC, optimal/aggressive compiler flags for the A53/A73 combo and a CONFIG_HZ=100 kernel on the N2, your Blender job might finish in less than 40 minutes (maybe just 20 or even less if a programmer equipped with knowledge starts to look into the Blender code and adds NEON optimizations to the performance critical parts -- see here for a great example of Active Benchmarking and adding NEON optimizations on ARM to software that so far utilizes SIMD extensions only on x86, just like SSE2-optimized Blender does)
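Whether a given kernel actually runs with CONFIG_HZ=100 or something else can be checked directly. A sketch (where the config lives differs between distros: /proc/config.gz needs CONFIG_IKCONFIG_PROC, otherwise Debian-style kernels ship it under /boot):

```shell
#!/bin/sh
# Sketch: read CONFIG_HZ from the running kernel's config, trying the two
# usual locations. Prints CONFIG_HZ=unknown if neither is available.
kernel_hz() {
    { zcat /proc/config.gz 2>/dev/null
      cat "/boot/config-$(uname -r)" 2>/dev/null
    } | grep -m1 '^CONFIG_HZ=' || echo "CONFIG_HZ=unknown"
}
kernel_hz
```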


    The price you'll pay is a few weeks of your life needed to become a Linux expert, since nothing is ready-made. Choosing Armbian is usually a matter of convenience, but if you want software that performs as fast as possible it's a problematic choice, since the software stack is not meant to be performant but to be compatible and stable (a funny joke, since Armbian's own approach with kernel/bootloader updates is the opposite). The two distro variants Armbian provides are

    • Ubuntu also known as 'everything outdated'
    • Debian also known as 'everything outdated as hell'

    (there are areas where Armbian is in fact faster than other Ubuntu/Debian based images or distros using a more modern software stack like Gentoo or Fedora, but this is due to what separates Armbian from the stock Debian/Ubuntu stuff: settings like CPU/IRQ affinity, thermal/DVFS tuning, zram, log2ram and such. But the two guys who mostly took care of this have contributed nothing to Armbian for quite a while now, and who knows how Armbian will evolve here in the future. On EoL platforms like Allwinner H5, IRQ affinity is already broken but nobody cares)


  21. 26 minutes ago, NicoD said:

    just done my first Blender test with the M4. 1h08m23s. +5 minutes faster than my previous tests. In the margines of your test. (1 minute difference is no difference...)


    Another thing to consider: SoC temperature. Even if no throttling happens, higher temperature will result in both slightly lower performance and slightly higher consumption (details). With a task running over an hour, an SoC temperature of 20°C vs. 70°C might result in ~1 minute difference (maybe even more -- at least that's another reason why sbc-bench always does temperature monitoring, and it also explains why N2 benchmarks currently running on 'ODROID bench' in a container but with active cooling seem to be faster than running bare metal with passive cooling only at higher temperatures).

