
tkaiser

Members
  • Posts

    5462
  • Joined

Reputation Activity

  1. Like
    tkaiser reacted to iamwithstupid in Wifi Performance Benchmark test   
    I'd say if anything the evolution of WiFi should tell us that bigger != better and more power != better reception and longer support-time != better wifi.
     
    I've experimented with that topic A LOT with many different devices and routers - expensive and cheap ones.
     
    Almost all RTL8811au and RTL8812au dongles just work well (there are obviously some black sheep that overheat, and some that ship with a random MAC address assigned, but they're a minority and can be dealt with) - even if the drivers have hiccups and are all over the place. The biggest issue is a lack of official support (especially upstream support), requiring all kinds of user patches - no wonder. I found the driver variant from aircrack-ng to be the most stable (I think that's the one Hardkernel also uses): https://github.com/aircrack-ng/rtl8812au. Realtek is at it again, telling us their own drivers are a code mess and that they're rewriting them ... supplying ... tadaaaa ... nothing as a replacement in the meantime ...
     
    https://github.com/torvalds/linux/tree/master/drivers/net/wireless/realtek/rtlwifi
     
    A year later ... we're still not there. Oh well! At least the RTL8812ae PCIe variant is already supported now. Hopefully it will get upstream support soon (tm).
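    In the meantime you're stuck building the aircrack-ng driver out of tree; a rough sketch of how that usually goes (package and module names are the usual Debian/Ubuntu ones and may differ per branch, so check the repo's README first):
      # install build prerequisites and matching kernel headers
      sudo apt-get install -y build-essential git "linux-headers-$(uname -r)"
      git clone https://github.com/aircrack-ng/rtl8812au.git
      cd rtl8812au
      make                  # builds the module against the running kernel
      sudo make install     # installs the module and runs depmod
      sudo modprobe 88XXau  # module name may differ depending on the branch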
     
    Btw, if you're seeking small, inexpensive 5 GHz AC WAPs (or routers) on the other hand, I can only recommend the Xiaomi WiFi 3G 2018 (with GbE, not the 2017 one!) with Padavan firmware ...
     
    They're worth their money twice over compared to every expensive consumer router I've owned before (several Asus ACXXAU, TP-Link and Netgear variants ...).
     
    The thing is they don't even offer MU-MIMO, but that's fine - the bottleneck for a lot of the consumer routers above is their horrendous power design, airflow and software.
     
    They're crammed full of components that increase heat and decrease performance, lifetime and reliability.
     
    If you really need MU-MIMO then get something like a UniFi UAP-AC-PRO (though for a flat of 100 feet that's usually simply a waste of money).
     
    I'd say give it a few months / years and we might have upstream support (hopefully).
     
    -----
     
    TL;DR: RTL8811au and RTL8812au are the way to go for SBCs if you need wireless nowadays.
     
    You'll still benefit from 2 antennas without MU-MIMO because one can be used to send and one to receive; a single antenna can only do one thing at a time. 2x MU-MIMO only means 2 users can simultaneously send/receive on 2 antennas at once.
  2. Like
    tkaiser got a reaction from sfx2000 in eMMC vs Usb Flash on a Rock64   
    Depends. IMO the vast majority of problems with suddenly dying flash media (SD cards or USB pendrives) is related to counterfeit flash: flash memory products that fake their capacity, so all of a sudden they stop working once the total amount of data written exceeds the drive's real capacity (see here for a tool to check for this).
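    One such capacity check can be done with the F3 tools, shown here as an illustration (the mount point is just an example, and this is not necessarily the exact tool linked above):
      sudo apt-get install -y f3
      f3write /mnt/sdcard   # fills the card with test files (N.h2w)
      f3read  /mnt/sdcard   # verifies them; fake flash shows up as corrupted/overwritten sectors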
     
    If you manage to buy genuine flash products (it's not the brand that matters but where you buy -- with eBay, Aliexpress & Co. the chances of getting fake flash are most probably well above 50%), then there are still huge differences. Pine's cheap low-capacity FORESEE eMMC modules are way slower than the Samsung or SanDisk ones Pine and other SBC vendors use for their higher-capacity modules. But no idea about reliability, since AFAIK all you can do here is trust and believe: without extensive testing it's impossible to predict the longevity of SD cards, eMMC and USB flash storage.
     
    My personal take on this is:
    • trying to minimize writes to flash storage ('Write Amplification' matters here; keep stuff like logs in RAM and only write them once per hour and so on)
    • when using low-end flash storage, preferring media that supports TRIM (that's SD cards and most probably also eMMC supporting ERASE CMD38 -- you can then TRIM them manually from time to time, and maybe we get filesystem drivers in Linux that will support TRIM on (e)MMC too in the future)
    • weighing eMMC against A1 rated SD cards
    • if huge amounts of important data need to be written to the media, then always using SSDs that can be queried with SMART. The better ones provide a SMART attribute that indicates how much the storage is already worn out (see the sketch below)
     
    Some more info:
    https://docs.armbian.com/User-Guide_Getting-Started/#how-to-prepare-a-sd-card
    https://forum.armbian.com/topic/6444-varlog-file-fills-up-to-100-using-pihole/?do=findComment&comment=50833
    https://forum.openmediavault.org/index.php/Thread/24128-Asking-for-a-opinion/?postID=182858#post182858
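    To illustrate the TRIM and SMART points above (device names, mount points and SMART attribute names vary by vendor, so treat this as a sketch only):
      # discard unused blocks on a mounted filesystem that supports it
      sudo fstrim -v /mnt/ssd
      # query an SSD's wear indicator; the attribute name is vendor specific,
      # e.g. 'Wear_Leveling_Count' or 'Media_Wearout_Indicator'
      sudo smartctl -A /dev/sda | grep -Ei 'wear|percent|used'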
  3. Like
    tkaiser got a reaction from iamwithstupid in Adequate power supply for the Orange Pi Zero?   
    Powered via Micro USB from a USB port of the router next to it (a somewhat recent Fritzbox). Storage was a 128 GB SD card, no peripherals; cpufreq settings limited the SoC to 1.1V (fixed 912 MHz), so maximum consumption was predictable anyway (way below 3W). I could've allowed the SoC to clock up to 1.2 GHz at 1.3V, but performance with this specific use case (torrent server) was slightly better with a fixed 912 MHz compared to letting the SoC switch between 240 MHz and 1200 MHz as usual.
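    For reference, this kind of fixed-frequency setup boils down to a few cpufreq settings; a sketch of how it could look on Armbian (file name and exact values are assumptions for an H2+/H3 board):
      printf 'GOVERNOR=performance\nMIN_SPEED=912000\nMAX_SPEED=912000\n' | sudo tee /etc/default/cpufrequtils
      sudo systemctl restart cpufrequtils
      # or directly via sysfs:
      # echo 912000 | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq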
     
    I used one of my 20AWG rated Micro USB cables, but honestly, when it's known that the board won't exceed 3W anyway, every Micro USB cable is sufficient.
  4. Like
    tkaiser reacted to mindee in NanoPi M4 performance and consumption review   
    I think the manufacturing cost is not so high, but I am not sure about that right now. The NEO4 uses a different PCIe connector; it's not an easy thing to fit both with one HAT, considering the signal quality.
  5. Like
    tkaiser got a reaction from manuti in Librecomputer Renegade RK3328   
    Care to provide numbers from a quick iozone benchmark as described here: https://forum.armbian.com/topic/954-sd-card-performance/?
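    For convenience, the invocation used over in that thread looks roughly like this (install iozone3 first and run it from a directory on the device under test; the mount point is an example):
      sudo apt-get install -y iozone3
      cd /mnt/device-under-test
      iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2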
     
    I tested some 'quality' pendrives last year and was highly disappointed by the performance after some time of usage. Nice performance when starting to use them, but after some time or amount of writes performance really started to suck. The only 'pendrive' working flawlessly so far looks like this for me:

     

     
    A real M.2 SATA SSD with heatsinks to prevent overheating/throttling, on a JMS578 adapter. Without the heatsink and with the enclosure closed it's also unusable since the SSD overheats (sequential transfers then drop from 400 MB/s to ~30 MB/s).
     
     
     
    Why should everybody have the same needs? Users might want to attach an HDD to the USB3 port; then 'booting from USB3' is not an option any more, as long as you want the HDD to spin down. And putting a USB hub between host and disk is always a great way to introduce problems.
  6. Like
    tkaiser got a reaction from iamwithstupid in NanoPi Duo2 and NanoPi Hero are coming   
    By comparing https://github.com/friendlyarm/linux/blob/sunxi-4.14.y/arch/arm/boot/dts/sun8i-h3-nanopi-duo2.dts and https://github.com/friendlyarm/linux/blob/sunxi-4.14.y/arch/arm/boot/dts/sun8i-h2-plus-nanopi-duo.dts, the only real difference is replacing the crappy XR819 Wi-Fi with an RTL8189 (the other change being the H2+ replaced by an H3).
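    For anyone wanting to repeat the comparison quickly, something like this works (the raw.githubusercontent.com URLs simply mirror the GitHub links above):
      curl -sL https://raw.githubusercontent.com/friendlyarm/linux/sunxi-4.14.y/arch/arm/boot/dts/sun8i-h2-plus-nanopi-duo.dts -o duo.dts
      curl -sL https://raw.githubusercontent.com/friendlyarm/linux/sunxi-4.14.y/arch/arm/boot/dts/sun8i-h3-nanopi-duo2.dts -o duo2.dts
      diff -u duo.dts duo2.dts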
     
    NanoPi Hero also features RTL8189 Wi-Fi, is limited to Fast Ethernet and has an I2C accessible voltage regulator allowing the board to clock at up to 1368 MHz (at 1.4V VDD_CPUX): https://github.com/friendlyarm/linux/blob/sunxi-4.14.y/arch/arm/boot/dts/sun8i-h3-nanopi-hero.dts
     
    Maybe @mindee is so kind to provide an early picture of the latter?
  7. Like
    tkaiser got a reaction from chwe in zram vs swap   
    I started with this 'zram on SBC' journey more than 2 years ago: testing with GUI use cases on a PineBook, searching for other use cases that require huge amounts of memory, testing with old as well as brand new kernel versions, and ending up with huge compile jobs as an example where heavy DRAM overcommitment is possible and zram shows its strengths. Days of work, zero help/contributions by others until recently (see @botfap's contribution in the other thread). Now that, as a result of this work, a new default is set, additional time is needed to discuss feelings and beliefs? Really impressive...
     
     
    Care to elaborate on what I did wrong when always running exactly the same set of 'monitoring' with each test (using a pretty lightweight 'iostat 1800' call which simply queries the kernel's counters and prints some numbers every 30 minutes)?
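    In case anyone wants to reproduce this kind of monitoring, it's really just this (sysstat provides iostat; the log name is an example):
      sudo apt-get install -y sysstat
      nohup iostat 1800 > iostat-zram-test.log &   # one snapshot of the kernel's counters every 30 minutes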
     
     
    Why should opinions matter if there's no reasoning provided? I'm happy to learn how and what I could test/modify again; when starting this zram journey with GUI apps I had no way to measure different settings since everything is just 'feeling' (with zram and massive overcommitment you can open 10 more browser tabs without the system becoming unresponsive, which is not news anyway but simply as expected). So I ended up with one huge compile job as the worst-case test scenario.
     
    I'm happy to learn in which situations, with zram only, a vm.swappiness value higher than 60 results in lower performance or problems. We're talking about Armbian's new defaults: that's zram only, without any other swap mechanism on physical storage active. If users want to add additional swap space they're responsible for tuning their system on their own (and hopefully know about zswap, which seems to me the far better alternative in such scenarios), so now it's really just about 'zram only'.
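    For anyone who wants to experiment themselves, checking/changing vm.swappiness on a zram-only setup is trivial (the value 100 here is just an example, not necessarily Armbian's default):
      cat /proc/sys/vm/swappiness
      sudo sysctl vm.swappiness=100
      swapon --show    # verify that only /dev/zram* devices are listed as swap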
     
    I'm not interested in 'everyone will tell you' stories or 'in theory this should happen' but in real experiences. See the reason why we switched back to lzo as the default for zram too, even though everyone on the Internet tells you that would be stupid and lz4 is always the better option.
  8. Like
    tkaiser reacted to 5kft in btrfs root images are currently broken...?   
    Yes, that change introduced the problem...  I just pushed a fix:  https://github.com/armbian/build/commit/c1530db4820320a1e837901196d846b0ef71ee4c
  9. Like
    tkaiser got a reaction from Spemerchina in NanoPi M4 performance and consumption review   
    Really looking forward to this HAT
     
    BTW: I've not the slightest idea how much effort and initial development cost is needed for such a HAT. But in case the Marvell-based 4-port SATA solution turns out to be somewhat expensive, maybe designing another one with an ASMedia ASM1062 (PCIe x2 and 2 x SATA) and just one Molex for 2 drives might be an idea. Could be one design made for the NEO4 that will work on the M4 too with a larger (or dual-purpose) HAT PCB?
  10. Like
    tkaiser reacted to hjc in NanoPi NEO4   
    Now FriendlyARM has a wiki page for NEO4.
  11. Like
    tkaiser reacted to mindee in NanoPi M4 performance and consumption review   
    Thanks for your suggestion. We made a SATA HAT prototype for the NanoPi M4; it can connect 4x 3.5-inch hard drives and works well.
     
     
  12. Like
    tkaiser got a reaction from sfx2000 in zram vs swap   
    https://lists.debian.org/debian-kernel/2017/04/msg00333.html
  13. Like
    tkaiser got a reaction from gounthar in (Serial) console access via 'USB-UART/Gadget mode' on Linux/Windows/OSX   
    Well, if this article is meant to be something for an average Armbian user, you should IMO elaborate a bit on what a serial console is and how it's possible over a USB cable. Also, not looking at this from the Windows perspective (ignoring 90+% of our users?) makes the whole attempt more or less useless. And then I don't understand how you built the list of affected boards (since the boards the feature has been implemented for are all missing: the inexpensive and headless H2+/H3 boards).
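    For the Linux side at least, connecting is a one-liner once the board shows up as a CDC ACM device (device name and baud rate are assumptions, check dmesg after plugging in):
      dmesg | tail                       # look for something like 'cdc_acm ... ttyACM0'
      sudo screen /dev/ttyACM0 115200    # or minicom/picocom, whatever you prefer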
  14. Like
    tkaiser reacted to hjc in (Serial) console access via 'USB-UART/Gadget mode' on Linux/Windows/OSX   
    It works great on Windows Server 2016 (so probably Windows 10, too). Now I'm playing with my NanoPi on the desktop workstation in my office.
     

     
    It's easy to use: run devmgmt.msc, find the serial device, then connect with PuTTY.
  15. Like
    tkaiser got a reaction from Werner in Banana PI BPI-W2   
    Anyone interested in the RTD1295/RTD1296 platform would need to do something like this first (a rough git sketch follows below):
    • check out the upstream mainline kernel at version 4.9.119
    • import the changes from RealTek's kernel above (you then end up with one giant commit like this -- context)
    • now you can spend some days or weeks having a look at what's different, where security holes exist (remember Allwinner's 'rootmydevice'?) and what's not properly licensed
    If this step is skipped there exists no reason to trust RealTek's 4.9 at all. And if this step is skipped it's impossible to start with Armbian support for RTD1295/RTD1296 devices, since it would harm the project to repeat mistakes like with Allwinner's H3 BSP kernel again (where something like rootmydevice was only discovered by accident).
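    A rough git sketch of that workflow (the path to RealTek's released sources is a placeholder):
      git clone https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
      cd linux
      git checkout -b rtd129x-review v4.9.119
      cp -a /path/to/realtek-bsp-kernel/. .    # overlay RealTek's 4.9 sources on top
      git add -A && git commit -m 'Import RealTek RTD1295/RTD1296 BSP on top of v4.9.119'
      git diff v4.9.119..HEAD --stat           # now the actual review work starts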
     
    We can't trust Android vendors' BSP kernels, since the employees forced to hack driver support into something they're not interested in (the Linux kernel and its concepts) have never worked with mechanisms like peer review or code quality control. If those employees had started to submit patches upstream, where all this stuff happens (the Linux kernel community), then this would be something entirely different. But RealTek, just like Allwinner and the majority of ARM SoC vendors, has no interest in properly upstreaming their code, so this code can't be trusted.
     
    If their stuff is not properly licensed this will most likely prevent developing mainline drivers for the affected hardware (but I doubt anyone will do this anyway since so far only Suse's Andreas Färber has worked on the platform -- I tried to inform him via CNX but have no idea whether he's aware that some RealTek kernel sources are now provided).
     
    Having the sources now does not mean everything's fine. The next step is looking at the mess they contain (if anyone wants to spend weeks of their life on something like this).
     
     
  16. Like
    tkaiser reacted to martinayotte in NanoPC T4   
    No, the first time it was booting some other unknown image already present, if I remember correctly.
    Later, I had a build done from @hjc's old branch, and when I wanted to use the latest Armbian, I had to press "boot" again.
  17. Like
    tkaiser got a reaction from Tido in Banana PI BPI-W2   
    Of course it does NOT work, which is a problem since the BPi folks distribute all their software solely through their unprotected forum:
     
    http://www.banana-pi.org/download.html (no HTTPS) --> http://www.banana-pi.org/downloadall.html (no HTTPS) --> http://forum.banana-pi.org/c/Banana-pi-BPI-W2/BPI-W2-image (no HTTPS)
     
    For any serious or professional use case it's impossible to continue, since a MITM (man-in-the-middle) attacker could download an image from them, open/modify it to add some nice backdoors, upload it somewhere on Google Drive, set up something that looks like a download page and then do some DNS spoofing. Not very likely, but software downloads in general that are not protected by at least HTTPS aren't trustworthy at all. Any potential professional customer like @botfap will immediately turn away when seeing stuff like this.
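    A generic way out when only plain HTTP downloads exist is to verify checksums or signatures obtained through a trusted channel, for example (file names here are made up):
      sha256sum bpi-w2-image.img.zip                                 # compare against a checksum published via HTTPS
      gpg --verify bpi-w2-image.img.zip.asc bpi-w2-image.img.zip     # or a detached signature made with a known vendor key
    Of course that only helps if the vendor actually publishes checksums or signatures over a trustworthy channel in the first place.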
     
    This was just another free community service for our beloved Banana folks which of course will result in @Lion Wang talking about some evil individual (me) constantly attacking him. 
  18. Like
    tkaiser got a reaction from manuti in Benchmarking CPUs   
    ...is crap. It's not a CPU but only a compiler benchmark. You chose the worst 'benchmark' possible.
     
    Sysbench numbers change with compiler version and settings, or even with the sysbench version (higher version number --> lower scores). There's no known 'benchmark' producing more unreliable results wrt hardware/CPU. Use 'Google site search' here to get the details.
     
    If it's about a quick and rough CPU performance estimate I would recommend 7-zip's benchmark mode (7z b). 
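    For completeness, that's literally all it takes (the package name differs between distros, e.g. p7zip-full on Debian/Ubuntu):
      sudo apt-get install -y p7zip-full
      7z b    # look at the total MIPS rating for a rough cross-platform comparison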
  19. Like
    tkaiser got a reaction from _r9 in Benchmarking CPUs   
    For people not rejecting reality... again, why sysbench is unreliable (not able to indicate CPU performance AT ALL):
    • Sysbench is not able to compare different CPU architectures since it's only a compiler benchmark (you get 15 times higher 'CPU performance' reported with a 64-bit distro than with a 32-bit distro on 64-bit ARM boards)
    • Switch the distro and your sysbench numbers differ (in fact it's just different distros building their packages with different GCC versions)
    • Update your distro and your sysbench numbers improve (since it's just a compiler benchmark)
    • Different sysbench version, different 'benchmark' results (start at 'Performance with legacy Armbian image')
    • Why sysbench's method to 'calculate CPU performance' is that weird and does not apply to anything performance relevant in the real world
    For people loving to 'shoot the messenger'... it's not only me describing sysbench as the totally wrong tool, e.g. https://www.raspberrypi.org/forums/viewtopic.php?t=208314&start=25
     
    Again: 7-zip's benchmark mode is not just an insanely limited synthetic benchmark routine like sysbench's (calculating prime numbers, only involving CPU caches but no memory access); 7-zip is not dependent on compiler versions or platform switches and allows for cross-platform comparisons. You'll find a lot of numbers here in the forum and some comparisons on the net, e.g. https://s1.hoffart.de/7zip-bench/ (again: it's just a rough estimate, but at least somewhat reliable and related to CPU performance).
     
    The most important thing with benchmarking is 'benchmarking the benchmark', since most tools (especially the popular ones) do not do what they pretend to do. Active benchmarking is needed, not just throwing out numbers without meaning.
     
    BTW: sysbench is primarily a MySQL/database benchmark and, when used correctly, a great tool to provide insights. It's just the 'cpu test' that is not a CPU test at all. And it's about people firing up sysbench in 'passive benchmarking' mode, generating numbers without meaning and praising insufficient tools.
  20. Like
    tkaiser got a reaction from Moklev in [DEPRECATED] zRAM in Armbian Stretch (mainline 4.14.yy)   
    Wrong way, since in the meantime we implemented our own zram-control mechanism, already available in nightlies and supposed to be rolled out with the next major update.
     
    For anyone coming across this: do NOT follow the above recipe, it's deprecated in Armbian and causes more harm than good.
     
    @Igor: IMO we need to make the amount of zram configurable. Currently it's set as 'half of available RAM' in line 35 of the initialization routine. But since updates will overwrite this, users who want to benefit from massive zram overcommitment (since it just works brilliantly) are forced to edit this script over and over again.
     
    I propose defining the amount used as $ZRAM_PERCENTAGE, defaulting to 50, which can be overridden in a not-yet-created /etc/defaults/armbian-zram-config file. Any opinions?
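    To illustrate, the whole proposal boils down to a few lines in the initialization routine (a sketch only, not the actual script):
      # /etc/defaults/armbian-zram-config would just contain e.g.: ZRAM_PERCENTAGE=50
      ZRAM_PERCENTAGE=50
      [ -f /etc/defaults/armbian-zram-config ] && . /etc/defaults/armbian-zram-config
      mem_total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
      echo "$(( mem_total_kb * ZRAM_PERCENTAGE / 100 ))K" | sudo tee /sys/block/zram0/disksize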
  21. Like
    tkaiser got a reaction from Igor in zram vs swap   
    https://lists.debian.org/debian-kernel/2017/04/msg00333.html
  22. Like
    tkaiser got a reaction from esbeeb in Looking for an enclosure for espressobin   
    And guess what: I have a huge box here labeled 'PC-Schraddel' (PC junk). I just checked it for those cables, and to my surprise I found in there my EspressoBin and also the right cable with 2 female Molex jacks:

  23. Like
    tkaiser got a reaction from esbeeb in Looking for an enclosure for espressobin   
    Nice. How did you solve the problem of the stupid Molex male power connector? And do you sell these things?
  24. Like
    tkaiser got a reaction from Dwyt in Learning from DietPi!   
    I would call the price/performance not just good but simply awesome today, given we get ultra-performant cards like the 32GB SanDisk Ultra A1 for as low as 12 bucks currently: https://www.amazon.com/Sandisk-Ultra-Micro-UHS-I-Adapter/dp/B073JWXGNT/ (I got mine 2 weeks ago for 13€ at a local shop though). And the A1 logo is important, since cards compliant with the A1 performance class perform magnitudes faster with random IO and small blocksizes (which pretty much describes the majority of IO happening with Linux on our boards).
     
    As can be seen in my '2018 A1 SD card performance update' the random IO performance at small blocksizes is magnitudes better compared to an average/old/slow/bad SD card with low capacity:
                  average 4GB card    SanDisk Ultra A1   (all values IOPS)
    1K read       1854                3171
    4K read       1595                2791
    16K read      603                 1777
    1K write      32                  456
    4K write      35                  843
    16K write     2                   548
     
    With pretty common writes at 4K block size the A1 SanDisk shows 843 vs. 35 IOPS (IO operations per second), and with 16K writes it's 548 vs. 2 IOPS. So that's over 20 or even 250 times faster (I don't know the reason, but so far all average SD cards I tested with up to 8 GB capacity show this same weird 16KB random write bottleneck -- even the normal SanDisk Ultra with just 8GB). This might be one of the reasons why 'common knowledge' amongst SBC users seems to be to avoid writing to the SD card at all: the majority doesn't take care which SD cards they use, tests them wrongly (looking at irrelevant sequential transfer speeds instead of random IO and IOPS) and therefore chooses pretty crappy ones.
     
    BTW: the smallest A1 rated cards available start at 16GB capacity. But for obvious reasons I would rather buy those with 32GB or even 64GB: the price/performance ratio is much better, and it should be common knowledge that buying larger cards than needed leads to SD cards wearing out later.
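    If someone wants to reproduce such random IO numbers, fio works as well as iozone (paths and sizes are examples; run it on a filesystem living on the card under test):
      sudo apt-get install -y fio
      fio --name=rand4kwrite --filename=/mnt/sdcard/fio.test --size=64M \
          --rw=randwrite --bs=4k --direct=1 --ioengine=libaio --iodepth=1 --runtime=60 --time_based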
     
  25. Like
    tkaiser reacted to botfap in TonysMac's kitchen corner   
    I feel left out but it’s 2:30am and the fanciest action in my kitchen is a Molotov cocktail 
     
