tkaiser

  • Posts: 5445
Reputation Activity

  1. Like
    tkaiser got a reaction from chwe in zram vs swap   
    I started with this 'zram on SBC' journey more than 2 years ago, testing with GUI use cases on PineBook, searching for other use cases that require huge amounts of memory, testing with old as well as brand new kernel versions and ending up with huge compile jobs as an example where heavy DRAM overcommitment is possible and zram shows its strengths. Days of work, zero help/contributions by others until recently (see @botfap's contribution in the other thread). Now that a new default has been set as a result of this work, additional time is needed to discuss feelings and beliefs? Really impressive...
     
     
    Care to elaborate on what I did wrong when always running exactly the same 'monitoring' with each test (a pretty lightweight 'iostat 1800' call that simply queries the kernel's counters and displays some numbers every 30 minutes)?
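    For reference, a minimal sketch of that kind of lightweight monitoring (the log file name is just an example):
      # sample the kernel's I/O and CPU counters every 30 minutes, append to a log for later comparison
      nohup iostat 1800 >> /var/log/zram-test-iostat.log 2>&1 &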
     
     
    Why should opinions matter if there's no reasoning provided? I'm happy to learn how and what I could test/modify again, since when starting with this zram journey and GUI apps I had no way to measure different settings: everything was just 'feeling' (with zram and massive overcommitment you can open 10 more browser tabs without the system becoming unresponsive, which is not news anyway but simply as expected). So I ended up with one huge compile job as worst-case test scenario.
     
    I'm happy to learn in which situations, with zram only, a vm.swappiness value higher than 60 results in lower performance or problems. We're talking about Armbian's new defaults: that's zram only, without any other swap file mechanism on physical storage active. If users want to add additional swap space they're responsible for tuning their system on their own (and hopefully know about zswap, which seems to me the much better alternative in such scenarios), so now it's really just about 'zram only'.
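    As an illustration only (the value 100 is an example, not necessarily what Armbian ships), checking and raising vm.swappiness on such a zram-only setup looks like this:
      cat /proc/sys/vm/swappiness                  # current value
      sudo sysctl vm.swappiness=100                # change at runtime
      echo 'vm.swappiness=100' | sudo tee /etc/sysctl.d/99-zram-swappiness.conf   # persist across reboots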
     
    I'm not interested in 'everyone will tell you' stories or 'in theory this should happen' but in real experiences. See the reason why we switched back to lzo as default for zram even though everyone on the Internet tells you that would be stupid and lz4 is always the better option.
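    For anyone who wants to check or reproduce this on their own board, a rough sketch (zram0 assumed as the device name):
      cat /sys/block/zram0/comp_algorithm          # the active compressor is shown in brackets
      # switching is only possible while the device is reset, i.e. before a disksize is set:
      echo lzo | sudo tee /sys/block/zram0/comp_algorithm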
  2. Like
    tkaiser reacted to 5kft in btrfs root images are currently broken...?   
    Yes, that change introduced the problem...  I just pushed a fix:  https://github.com/armbian/build/commit/c1530db4820320a1e837901196d846b0ef71ee4c
  3. Like
    tkaiser got a reaction from Spemerchina in NanoPi M4 performance and consumption review   
    Really looking forward to this HAT
     
    BTW: I've not the slightest idea how much effort and what initial development costs are needed for such a HAT. But in case the Marvell based 4-port SATA solution turns out to be somewhat expensive, maybe designing another one around an ASMedia ASM1062 (PCIe x2 and 2 x SATA) with just one Molex for 2 drives might be an idea. Could be one design made for the NEO4 that works on the M4 too with a larger (or dual-purpose) HAT PCB?
  4. Like
    tkaiser reacted to hjc in NanoPi NEO4   
    Now FriendlyARM has a wiki page for NEO4.
  5. Like
    tkaiser reacted to mindee in NanoPi M4 performance and consumption review   
    Thanks for your suggestion, we made a SATA HAT prototype for NanoPi M4; it can connect 4x 3.5-inch hard drives and works well.
     
     
  6. Like
    tkaiser got a reaction from sfx2000 in zram vs swap   
    https://lists.debian.org/debian-kernel/2017/04/msg00333.html
  7. Like
    tkaiser got a reaction from gounthar in (Serial) console access via 'USB-UART/Gadget mode' on Linux/Windows/OSX   
    Well, if this article is meant to be something for an average Armbian user you should IMO elaborate a bit on what a serial console is and how it's possible over a USB cable. Also, not looking at this from the Windows perspective (ignoring +90% of our users?) makes the whole attempt more or less useless. And then I don't understand how you built the list of affected boards, since the boards the feature has been implemented for are all missing: the inexpensive and headless H2+/H3 boards.
  8. Like
    tkaiser reacted to hjc in (Serial) console access via 'USB-UART/Gadget mode' on Linux/Windows/OSX   
    It works great on Windows Server 2016 (so probably Windows 10, too). Now I'm playing with my NanoPi on the desktop workstation in my office.
     

     
    It's easy to use: run devmgmt.msc, find the serial device, then connect with PuTTY.
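    For completeness, the Linux/macOS counterpart is roughly this (device names vary; /dev/ttyACM0 and /dev/cu.usbmodem* are the typical ones):
      screen /dev/ttyACM0 115200            # Linux
      screen /dev/cu.usbmodemXXXX 115200    # macOS (replace XXXX with the actual device name)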
  9. Like
    tkaiser got a reaction from Werner in Banana PI BPI-W2   
    Anyone interested in the RTD1295/RTD1296 platform would need to do something like this first (a rough git sketch follows below):
    • check out the upstream mainline kernel at version 4.9.119
    • import the changes from RealTek's kernel above (you then end up with one giant commit like this -- context)
    • now spend some days or weeks having a look at what's different, where security holes exist (remember Allwinner's 'rootmydevice'?) and what's not properly licensed
    If this step is skipped there exists no reason to trust RealTek's 4.9 at all. Especially if this step is skipped it's impossible to start with Armbian support for RTD1295/RTD1296 devices since it would harm the project to repeat mistakes like with Allwinner's H3 BSP kernel again (where something like rootmydevice was only discovered by accident).
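    A rough sketch of those first two steps (the path to the vendor sources is a placeholder):
      git clone --branch v4.9.119 --depth 1 \
          https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git rtd129x-review
      cd rtd129x-review
      cp -a /path/to/realtek-bsp-kernel/. .     # overlay RealTek's 4.9 sources on top of v4.9.119
      git add -A && git commit -m 'Import RealTek 4.9 BSP on top of v4.9.119'
      git show --stat HEAD                      # the giant diff that now needs reviewing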
     
    We can't trust Android vendors' BSP kernels since the employees forced to hack driver support into something they're not interested in (the Linux kernel and its concepts) have never worked with mechanisms like peer review or code quality control. If those employees had started to submit patches upstream where all this stuff happens (the Linux kernel community) then this would be something entirely different. But RealTek, just like Allwinner and the majority of ARM SoC vendors, has no interest in properly upstreaming their code, so this code can't be trusted.
     
    If their stuff is not properly licensed this will most likely prevent developing mainline drivers for the affected hardware (but I doubt anyone will do this anyway since so far only Suse's Andreas Färber has worked on the platform -- I tried to inform him via CNX but have no idea whether he's aware that some RealTek kernel sources are now provided).
     
    Having the sources now does not mean everything's fine. The next step is looking at the mess they contain (if anyone wants to spend weeks of their life on something like this).
     
     
  10. Like
    tkaiser reacted to martinayotte in NanoPC T4   
    No, the first time it was booting some other unknown image that was already present, if I remember correctly.
    Later I had a build done from @hjc's old branch, and when I wanted to use the latest Armbian I had to press "boot" again this time.
  11. Like
    tkaiser got a reaction from Tido in Banana PI BPI-W2   
    Of course it does NOT work, which is a problem since the BPi folks distribute all their software solely through their unprotected forum:
     
    http://www.banana-pi.org/download.html (no HTTPS) --> http://www.banana-pi.org/downloadall.html (no HTTPS) --> http://forum.banana-pi.org/c/Banana-pi-BPI-W2/BPI-W2-image (no HTTPS)
     
    For any serious or professional use case it's impossible to continue, since a MITM (man-in-the-middle) attacker could download an image from them, modify it to add some nice backdoors, upload it somewhere on Google Drive, set up something that looks like a download page and then do some DNS spoofing. Not very likely, but software downloads in general that are not protected by at least HTTPS aren't trustworthy at all. Any potential professional customer like @botfap will immediately turn away when seeing stuff like this.
     
    This was just another free community service for our beloved Banana folks which of course will result in @Lion Wang talking about some evil individual (me) constantly attacking him. 
  12. Like
    tkaiser got a reaction from manuti in Benchmarking CPUs   
    ...is crap. It's not a CPU benchmark but only a compiler benchmark. You chose the worst 'benchmark' possible.
     
    Sysbench numbers change with compiler version and settings or even with the sysbench version (higher version number --> lower scores). There's no 'benchmark' known that produces more unreliable results wrt hardware/CPU. Use 'Google site search' here to get the details.
     
    If it's about a quick and rough CPU performance estimate I would recommend 7-zip's benchmark mode (7z b). 
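    A minimal sketch (the package name differs per distro; p7zip-full is the Debian/Ubuntu one):
      sudo apt install p7zip-full
      7z b            # look at the 'Tot:' MIPS rating for a rough cross-platform comparison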
  13. Like
    tkaiser got a reaction from _r9 in Benchmarking CPUs   
    For people not rejecting reality... again why sysbench is unreliable (not able to indicate CPU performance AT ALL):
    • Sysbench is not able to compare different CPU architectures since it's only a compiler benchmark (you get 15 times higher 'CPU performance' reported with a 64-bit distro than with a 32-bit distro on 64-bit ARM boards)
    • Switch the distro and your sysbench numbers differ (in fact it's just different distros building their packages with different GCC versions)
    • Update your distro and your sysbench numbers improve (since it's just a compiler benchmark)
    • Different sysbench version, different 'benchmark' results (start at 'Performance with legacy Armbian image')
    • Why sysbench's method to 'calculate CPU performance' is that weird and does not apply to anything performance relevant in the real world
    For people loving to 'shoot the messenger'... it's not only me who describes sysbench as the totally wrong tool, e.g. https://www.raspberrypi.org/forums/viewtopic.php?t=208314&start=25
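    For illustration, the sysbench 'cpu test' in both common syntaxes (the diverging option names and results between versions are exactly part of the problem):
      sysbench --test=cpu --cpu-max-prime=20000 --num-threads=4 run   # sysbench 0.4.x
      sysbench cpu --cpu-max-prime=20000 --threads=4 run              # sysbench 1.0.x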
     
    Again: 7-zip's benchmark mode is not just an insanely limited synthetic benchmark routine like sysbench's (calculating prime numbers, which only involves CPU caches but no memory access), 7-zip is not dependent on compiler versions or platform switches, and 7-zip allows for cross-platform comparisons. You'll find a lot of numbers here in the forum and some comparisons on the net, e.g. https://s1.hoffart.de/7zip-bench/ (again: it's just a rough estimate but at least something somewhat reliable related to CPU performance)
     
    The most important thing with benchmarking is 'benchmarking the benchmark', since most tools (especially the popular ones) do not do what they pretend to do. Active benchmarking is needed, not just throwing out numbers without meaning.
     
    BTW: sysbench is widely used for MySQL benchmarking and, when used correctly, a great tool to provide insights. It's just the 'cpu test' that is not a CPU test at all. And it's about people firing up sysbench in 'passive benchmarking' mode, generating numbers without meaning and praising insufficient tools.
  14. Like
    tkaiser got a reaction from Moklev in [DEPRECATED] zRAM in Armbian Stretch (mainline 4.14.yy)   
    Wrong way, since in the meantime we implemented our own zram control mechanism which is already available in the nightlies and supposed to be rolled out with the next major update.
     
    For anyone coming across this: do NOT follow the above recipe, it's deprecated in Armbian and causes more harm than good.
     
    @Igor: IMO we need to make the amount of zram configurable. Currently it's set to 'half of available RAM' in line 35 of the initialization routine. But since updates will overwrite this, users who want to benefit from massive zram overcommitment (since it just works brilliantly) are forced to edit this script over and over again.
     
    I propose to define the amount used as $ZRAM_PERCENTAGE, defaulting to 50 and overridable in a not yet existing /etc/defaults/armbian-zram-config file. Any opinions?
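    A minimal sketch of such an override (illustrative only, not the actual armbian-zram-config code):
      # default: use half of the available RAM for zram
      ZRAM_PERCENTAGE=50
      # let users override the default without updates wiping their change
      [ -f /etc/defaults/armbian-zram-config ] && . /etc/defaults/armbian-zram-config
      zram_size_kb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) * ZRAM_PERCENTAGE / 100 ))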
  15. Like
    tkaiser got a reaction from Igor in zram vs swap   
    https://lists.debian.org/debian-kernel/2017/04/msg00333.html
  16. Like
    tkaiser got a reaction from esbeeb in Looking for an enclosure for espressobin   
    And guess what: I have a huge box here labeled 'PC-Schraddel' (PC junk). I just checked it for those cables and to my surprise found my EspressoBin in there and also the right cable with 2 female Molex jacks:

  17. Like
    tkaiser got a reaction from esbeeb in Looking for an enclosure for espressobin   
    Nice. How did you solve the problem of the stupid Molex male power connector? And do you sell these things?
  18. Like
    tkaiser got a reaction from Dwyt in Learning from DietPi!   
    I would call the price/performance not just good but simply awesome today, given we get ultra performant cards like the 32GB SanDisk Ultra A1 for as low as 12 bucks currently: https://www.amazon.com/Sandisk-Ultra-Micro-UHS-I-Adapter/dp/B073JWXGNT/ (I got mine 2 weeks ago for 13€ at a local shop though). And the A1 logo is important since cards compliant with the A1 performance class perform orders of magnitude faster with random IO and small blocksizes (which pretty much describes the majority of IO happening with Linux on our boards).
     
    As can be seen in my '2018 A1 SD card performance update' the random IO performance at small blocksizes is orders of magnitude better compared to an average/old/slow/bad SD card with low capacity:
    (random IOPS)    average 4GB card    SanDisk Ultra A1
    1K read          1854                3171
    4K read          1595                2791
    16K read         603                 1777
    1K write         32                  456
    4K write         35                  843
    16K write        2                   548
    With pretty common writes at 4K block size the A1 SanDisk shows 843 vs. 35 IOPS (IO operations per second) and with 16K writes it's 548 vs. 2 IOPS. So that's over 20 or even 250 times faster (I don't know the reason but so far all average SD cards I tested with up to 8 GB capacity show this same weird 16K random write bottleneck -- even the normal SanDisk Ultra with just 8GB). This might be one of the reasons why 'common knowledge' amongst SBC users seems to be trying to prevent writing to the SD card at all: the majority doesn't take care which SD cards they use, tests them wrongly (looking at irrelevant sequential transfer speeds instead of random IO and IOPS) and therefore chooses pretty crappy ones.
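    For anyone who wants to test their own cards properly, a rough fio sketch measuring random IO at small block sizes (mount point and size are placeholders; about 100 MB of test data gets written):
      fio --name=sdtest --directory=/mnt/sdcard --rw=randrw --rwmixread=50 \
          --bs=4k --size=100m --ioengine=libaio --iodepth=1 --direct=1 --group_reporting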
     
    BTW: the smallest A1 rated cards available start at 16GB capacity. But for obvious reasons I would rather buy those with 32GB or even 64GB: the price/performance ratio is much better and it should be common knowledge that buying larger cards 'than needed' leads to SD cards wearing out later.
     
  19. Like
    tkaiser reacted to botfap in TonysMac's kitchen corner   
    I feel left out but it’s 2:30am and the fanciest action in my kitchen is a Molotov cocktail 
     

  20. Like
    tkaiser got a reaction from TonyMac32 in TonysMac's kitchen corner   
    Sorry, 'recipes are just an inspiration'. I only follow them when it's about making dough. And then I totally lack English kitchen vocabulary (only fluent in German, Italian and a bit of French in this area) and it won't get better than http://kaiser-edv.de/tmp/senigallia/Sabato.html anyway...
  21. Like
    tkaiser got a reaction from chwe in TonysMac's kitchen corner   
    Just had a look at when I last did Knöpfle: over 11 years ago. Delicious taste, bad 'look&feel'. Never did them again.
     

     
     
     
  22. Like
    tkaiser reacted to lanefu in Initial easy setup proposal   
    Current proposal?
     
    • the FAT32 config partition is at the end of the disk image
    • firstrun applies configuration changes
    • the config file is scrubbed via shred for security purposes
    • the FAT32 partition is destroyed
    • the rootfs expansion step proceeds as normal
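    A rough sketch of the 'scrubbed via shred' step (the file location and name are just placeholders):
      shred --remove --zero /mnt/firstrun-config/armbian_first_run.txt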
  23. Like
    tkaiser reacted to hjc in NanoPi M4 performance and consumption review   
    Not yet; recently I've been trying to use the M4 (with mainline kernel) as a network router & gateway (connect 2-3 RTL8153, set up VLAN, routing, NAT, and site-to-site VPN), and I'm still trying to resolve some USB related issues. Currently with the mainline kernel the USB hub must be manually reset (USBDEVFS_RESET) after a reboot, or it won't be usable.
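    For reference, a userspace way to trigger a re-probe without writing C code is to unbind/rebind the device (the '2-1' name is only a placeholder; check /sys/bus/usb/devices/ or 'lsusb -t' for the real one):
      echo '2-1' | sudo tee /sys/bus/usb/drivers/usb/unbind
      echo '2-1' | sudo tee /sys/bus/usb/drivers/usb/bind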
  24. Like
    tkaiser got a reaction from Werner in Initial easy setup proposal   
    Absolutely support this. But IMO the discussion should be moved out of this thread into its own thread over at https://forum.armbian.com/forum/12-armbian-build-framework/
     
    @Igor can you please move the last two posts to something like an 'Initial easy setup proposal' thread? I think a lot of users might benefit from a way to easily adjust the settings @botfap talked about prior to first boot.
  25. Like
    tkaiser got a reaction from gounthar in NanoPi M4 performance and consumption review   
    Yesterday 2 NanoPi M4 sets arrived (one 2GB and one 4GB) which will make it convenient to test stuff like heat dissipation in less time (on one I use the supplied thermal pad with the large heatsink, on the 2nd board I replaced the thermal pad with a 20x20mm copper shim of 1mm height taken from my RockPro64).
     
    But now talking just about USB-C: the NanoPi M4, while using a USB-C connector to be powered, is obviously not USB PD compliant but simply uses the USB-C connector combined with a 'dumb' PSU that provides the necessary amount of current without any negotiation. FriendlyELEC shipped a new 5V/4A PSU with USB-A receptacle combined with a very short USB-C cable. Since the PSU is not EU style and I'm missing the necessary adapter I currently use my standard 'dumb' 5V/2A USB PSU, which works fine for the tests I currently do.
     
    Not being compliant with the USB PD specs (or the USB-C powering specifications) means two things:
    Pro: no expensive USB-C charger needed (check the funny discussion in the Khadas forum and there the link to tested USB-C chargers and their prices). A dumb 5V PSU that can deliver the needed amount of current at a stable voltage will be sufficient.
     
    Cons:
    • Possible voltage drop N° 1: at 5V with higher currents the majority of cheap PSUs won't deliver. They can do either 5V or 4A but not both at the same time (this issue can be studied around ODROID boards that are also designed with 5V/4A in mind)
    • Possible voltage drop N° 2: the cable quality and length between board and PSU become an issue since an additional voltage drop happens here. The longer the cable and the thinner the wires, the less voltage is available to the board (symptoms as above: especially USB disks or peripherals in general will be affected)
    • Possible undercurrent: I don't know yet, but in case the NanoPi M4 really does no USB PD power signalling at all, stuff like with the Khadas Vim2 might happen
    • Not possible to use 'real' or standards compliant USB-C PSUs since they won't provide enough current, as the powered device is not able to negotiate a higher current demand
    Edit: tested with my MacBook Pro USB-C charger, which does not exceed 500mA with 'dumb' consumers on the other end of the USB-C cable: endless boot/crash cycle. So be prepared to run into trouble when combining the NanoPi M4 with a real USB-C charger compliant with the specs. This didn't work with the Vim2 and it also does not work with the NanoPi M4. @hjc: seems you can consider yourself lucky that your USB-C charger outputs more than 500/900 mA when feeding 'dumb' consumers.