tkaiser

Reputation Activity

  1. Like
    tkaiser got a reaction from DiscoStewDeluxe in Armbian SD card backup   
    Well, I wrote the instructions above for a reason. If you don't provide the 'bs=' parameter, dd falls back to its default (1 block == 512 bytes), which slows down the cloning process like hell:
        bs=n    Set both input and output block size to n bytes, superseding the ibs and obs operands. If no conversion values other than noerror, notrunc or sync are specified, then each input block is copied to the output as a single block without any aggregation of short blocks.
    Also it's somewhat weird not to compress the image on the fly. So here's a reference, and preparation for a hopefully improved Armbian documentation soon.
     
    One way to do an offline clone of an Armbian installation with minimum size requirements is to fill all the unused space with zeros beforehand (this really helps with compression afterwards if a lot of filesystem activity happened on the SD card before!) and then use a more efficient packer. So given the SD card is /dev/sdd you would do
        mkdir /mnt/clone
        mount /dev/sdd1 /mnt/clone
        dd if=/dev/zero of=/mnt/clone/empty.file bs=10M || rm /mnt/clone/empty.file
        umount /mnt/clone && rmdir /mnt/clone
        dd if=/dev/sdd bs=10M | gzip -c >/path/to/gzipped.img.gz
        dd if=/dev/sdd bs=10M | 7zr a -bd -t7z -m0=lzma -mx=9 -mfb=64 -md=32m -ms=on -si /path/to/7zipped.img.7z
    I did this with a quite normal Armbian desktop image on an 8 GB SD card that looks like this:
        /dev/sdd1 7,3G 2,1G 5,2G 29% /mnt/clone
    Instead of an uncompressed image that is 7.3GB in size the zeroed-out 7z is 1.2GB in size and the gzip variant 1.7GB (please note that you could add '-9', then gzip would compress a little bit better).
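     
    For completeness, restoring such a compressed image later would look roughly like this (just a sketch; /dev/sdX is a placeholder, double-check the target device before writing to it):
        gzip -dc /path/to/gzipped.img.gz | dd of=/dev/sdX bs=10M
        7zr e -so /path/to/7zipped.img.7z | dd of=/dev/sdX bs=10M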
     
    But here comes a more intelligent approach: Only outlined as a stub since others should do the job and try to earn a REWARD for this:
     
    If there's already a PC running Linux with a somewhat recent kernel into which we can insert the SD card to create an offline clone, then combining the best of both worlds would look like this:
    • Create a btrfs filesystem using maximum compression (compress=zlib)
    • Configure keyless SSH authentication between your Linux host and the Armbian installation on the SBC to be able to run rsync later to copy the filesystem's contents from the SBC to your Linux host
    • Shut down your SBC, insert the SD card with the Armbian image into the PC, zero out the unused space and create an uncompressed dd image (the image will not be compressed internally since the filesystem does this job, so based on the aforementioned example the image will show up as 7.3GB but will only need approx. 1.3GB on disk since btrfs provides transparent file compression)
    • Eject the SD card and start your SBC again
    • Now set up a script on the Linux host (see the sketch after this list) that uses the btrfs command, losetup and rsync to
      - create a snapshot of the btrfs filesystem where the uncompressed image is hosted
      - mount the 1st partition of this image
      - use rsync to incrementally update the OS image on your Linux host with the contents of / on the SBC (/dev/mmcblk0p1) through the network
      - set up a cron job that does this automagically
    This way a whole bootable OS image using only the minimum amount of needed space will be stored on the Linux host and incrementally updated. Since btrfs allows for transparent file compression and supports snapshots, if 100 MB have changed between two rsync runs (and given this data is compressible by 1:2) the additional storage needed for the new OS image variant will only be 50MB. Yes, you have two independent OS images using this snapshot and compression technique that require not 8GB + 8GB but just 1.3GB + 50MB.
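     
    A minimal sketch of what such an armbian-clone script could look like (untested; paths, the SSH target root@sbc.local and the image/subvolume names are assumptions, the image itself sits on a btrfs subvolume mounted with compress=zlib):
        #!/bin/bash
        set -e
        IMG_DIR=/mnt/backup/armbian                 # btrfs subvolume holding the uncompressed dd image
        IMG=$IMG_DIR/sbc.img                        # image created earlier with dd
        SNAP_DIR=/mnt/backup/snapshots
        SBC=root@sbc.local                          # keyless SSH target on the running SBC

        mkdir -p "$SNAP_DIR"
        # 1. keep a read-only snapshot of the current image state (versioning)
        btrfs subvolume snapshot -r "$IMG_DIR" "$SNAP_DIR/$(date +%Y%m%d-%H%M%S)"

        # 2. attach the image and mount its first partition
        LOOP=$(losetup --find --show --partscan "$IMG")
        mkdir -p /mnt/sbc-root
        mount "${LOOP}p1" /mnt/sbc-root

        # 3. incrementally update the image contents from the running SBC
        rsync -aHAX --delete --numeric-ids \
              --exclude={'/proc/*','/sys/*','/dev/*','/run/*','/tmp/*'} \
              "$SBC:/" /mnt/sbc-root/

        # 4. clean up
        umount /mnt/sbc-root
        losetup -d "$LOOP"
    Run from cron (e.g. daily) this keeps the bootable image current while older states remain available as snapshots.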
     
    If such a script has been set up then we're starting to talk about backup and not stupid cloning any more. Since then we're talking about
    • periodically saving the filesystem's contents to a different location
    • implementing versioning (old/deleted/changed stuff remains on the Linux host in the form of older snapshots)
    • allowing also for disaster recovery (since we created a bootable OS image in the first place)
    And the best news: the first step also works through the network, so in case we want to back up an internal eMMC we boot the SBC from an SD card and transfer the dd image through the network to the Linux host. Also, if you want to save a whole fleet of Armbian installations then simply activate deduplication in btrfs as well and you'll end up with less than 2 GB of disk storage to store e.g. 8 Armbian desktop installations, each being 2+ GB in size.
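     
    A minimal sketch of the deduplication step mentioned above, assuming the images/snapshots live below /mnt/backup and the out-of-band duperemove tool is installed (btrfs itself only provides the dedup ioctl, a userspace tool has to drive it):
        apt install duperemove
        duperemove -dhr /mnt/backup    # -d submit dedup requests, -h human readable output, -r recurse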
     
    Now it's time for someone to write this up as a nice tutorial and start to code this armbian-clone script. And please remember: If you do start contributing to community efforts there's something you'll get in return: http://forum.armbian.com/index.php/topic/1325-claim-a-task-and-do-it/
  2. Like
    tkaiser got a reaction from lanefu in The mmBoard   
    Thank you for providing dev samples already weeks ago. We've been working 24/7 to support this awesome board from day one!
  3. Like
    tkaiser got a reaction from MickMake in The mmBoard   
    Thank you for providing dev samples already weeks ago. We've been working 24/7 to support this awesome board from day one!
  4. Like
    tkaiser reacted to MickMake in The mmBoard   
    Hi guys!
    Thought I'd let you know about a new board that I've just released - called the mmBoard.
    Has all the features that SBC hackers have been wanting, plus more.
     
    https://www.mickmake.com/post/announcing-the-mmboard-a-mickmake-sbc-with-all-the-goodies
     
    Let me know if there's any questions about it. Would be happy to answer them.
     


  5. Like
    tkaiser got a reaction from manuti in Support of Raspberry Pi   
    Just for the record: This (using the OMV RPi image and removing the OMV parts) will not result in an Armbian installation (I'm aware that you've written about 'Armbian a like', just want to further clarify). It's just a Debian Jessie armhf rootfs generated by Armbian's build system that has been slightly adjusted and manually combined with RPi Foundation's kernel packages and the proprietary, closed sourced RPi stuff (ThreadX on the /boot partition and some proprietary userspace binaries to interact with VideoCore/ThreadX, most importantly to report/detect under-voltage).
     
    Armbian is about:
    1. bootloader and kernel optimizations (the first is not possible since ThreadX is closed source, the latter not reasonable/needed on Raspberries)
    2. optimized settings to improve performance, consumption and thermal behaviour (not possible on RPi since done entirely on the VideoCore by ThreadX)
    3. improving the security situation by building an entire OS image from scratch so no one has fiddled around with it (if you use our build system and let it debootstrap a fresh Armbian image you know no one has logged in so far and planted backdoors or stuff like that -- see the build sketch below)
    So the OMV image is just a modified piece of data that fell out of Armbian's build system (the rootfs), manually adjusted to be combined with the proprietary and closed source stuff that still makes up the whole 'RPi experience' (for the majority of RPi users I personally know the Pi is just a KODI and retro gaming box and without the proprietary video decoding and 3D acceleration stuff on the VideoCore they would have to throw it away)
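     
    For reference, producing such a fresh image from scratch with Armbian's build system boils down to roughly this (a sketch; assumes a supported Ubuntu build host as described in the Armbian documentation):
        git clone https://github.com/armbian/build
        cd build
        ./compile.sh    # interactive menu: pick board, branch and a full OS image to debootstrap a clean image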
     
    So all you get by using this OMV image is the illusion wrt 3) above (clean rootfs), but thinking this would enhance security or prevent you from being backdoored is just... an illusion. The main CPU of every RPi is still the outdated VideoCore from 2009 (this chip contains not only GPU, VPU and multiple QPU cores but also an ordinary dual core CPU running the closed source RTOS called ThreadX). The ARM cores running any secondary OS are just guests. Since Raspberries are popular for stuff like Tor end nodes or VPN gateways, who knows whether GCHQ already ordered the Foundation to ship their ThreadX with a universal backdoor to allow agency access?
     
    Anyway: maybe the OMV image's only real advantage is that it's Jessie based but still gets RPi Foundation's kernel updates (we do this via apt pinning since for whatever reason someone at the Pi Foundation decided last year to cut off all their Jessie users from future kernel updates, weakening the security of all these installations). But if you're not running OMV (where OMV releases are bound to the underlying Debian release -- OMV 3 needs Jessie and OMV 4 simply wasn't ready last year) then why not use Stretch anyway? Raspbian isn't bad, it's just somewhat bloated.
     
    As usual the 2 most important things you can do to improve general RPi performance/responsiveness are:
    • Checking whether your powering is sufficient. You need to call
        perl -e "printf \"%19b\n\", $(vcgencmd get_throttled | cut -f2 -d=)"
      and look at the left whether there is a '1' (explanation). If you see a 1 there you're running 'frequency capped' at 600 MHz when performance would be needed. You must call vcgencmd since the usual ways to query CPU clockspeed are all fake (don't tell it over at the RPi forum -- for this you will get censored and banned)
    • Throwing away your old and slow SD card and starting with a new, genuine, A1 rated SD card of at least 16 GB in size (I would go with an A1 Extreme and not an A1 Ultra for the simple reason that most probably the RPi folks, with their next 'incremental Pi update' next year, will finally implement SDR104 to speed up SD card access. This and better Wi-Fi is almost all they've got left to once again sell their fanboys an 'improvement')
    Please note that both 'performance tweaks' are hardware and not software fixes.
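     
    If the raw bit string is too cryptic, a small helper along these lines decodes the same vcgencmd output (a sketch; the bit meanings follow the RPi firmware documentation for get_throttled: bits 0-2 are the current state, bits 16-18 whether the condition has occurred since boot):
        #!/bin/bash
        status=$(( $(vcgencmd get_throttled | cut -f2 -d=) ))
        (( status & 0x1 ))     && echo "under-voltage right now"
        (( status & 0x2 ))     && echo "ARM frequency capped right now"
        (( status & 0x4 ))     && echo "currently throttled"
        (( status & 0x10000 )) && echo "under-voltage has occurred since boot"
        (( status & 0x20000 )) && echo "frequency capping has occurred since boot"
        (( status & 0x40000 )) && echo "throttling has occurred since boot"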
  6. Like
    tkaiser reacted to Xalius in RPi.GPIO port for Rock64 (R64.GPIO)   
    Leapo has started porting RPi.GPIO to Rock64 as R64.GPIO, you can find her work in progress at
     
    https://github.com/Leapo/Rock64-R64.GPIO
     
    Original thread with instructions/updates is here:
     
    https://forum.pine64.org/showthread.php?tid=5902
     
    She also made a simpler tool to work with GPIO:
     
    https://github.com/Leapo/Rock64-BashGPIO
  7. Like
    tkaiser got a reaction from lanefu in Armbian for OrangePi PC2, AllWinner H5   
    Which 'outsourcing'? Allwinner sells hardware. Cheap hardware for special markets. Android tablets back then, $something in between, now smart speakers, dashcams, retro gaming stuff, again tablets and TV boxes. Nintendo's NES Classic sold 2.3 million units in no time. Using a boring A33 SoC with technology from 5 years ago running a smelly 3.4.39 kernel from 5 years ago. Their customers (that's not us) do not care, so why should Allwinner care?
     
    They enable device manufacturers to throw out cheap hardware with somewhat working software with ok-ish margins (their main market) or sometimes enable their customers like Nintendo to sell insanely overpriced/overhyped products where again no-one cares about kernel, software or anything else we would be interested in.
     
    For Allwinner there's still no 'Linux market'. Though things might change in the future. But unless there's an incentive to mainline their own hardware and submit code upstream (good luck given the BSP code quality) I doubt anything will change soon or at all. But of course I highly appreciate that they now contribute and react in a very responsive way. As @jernej pointed out in the meantime many requests will be answered positively (and I always chuckle seeing wink, the Allwinner guy, directly contributing to linux-sunxi wiki now -- I never thought this would happen)
  8. Like
    tkaiser got a reaction from Slackstick in Support of Raspberry Pi   
    No. It's uninteresting for these reasons:
     
    RPi is a closed source platform, the main operating system bringing up and fully controlling the hardware (ThreadX running on the dual core VideoCore IV main CPU) cannot be altered since... closed source and proprietary. All the relevant stuff happens completely there (DVFS for the ARM CPU cores, video decoding and so on) and the secondary OS running on the ARM cores has no idea what's going on (see for example all those RPi installations that constantly suffer from 'frequency capping'). All we as Armbians could do is fiddle around with a secondary OS running on the 'guest' ARM CPU cores while everything interesting happens solely on the proprietary and closed source VideoCore. The community driven attempts to replace ThreadX with something open source have stopped since the developers involved lost interest (some details)
     
    The new board is a power hog able to consume close to 1800mA when running some CPU intensive stuff alone (see the bottom of this nice review). In the past, due to poorly designed power circuitry, an RPi 3 was not able to exceed ~1.5A anyway (voltage drops occurred and under-voltage 'protection' kicked in). Now this has improved but unfortunately the Pi is still powered through a Micro USB port (rated for 1800mA maximum!) so even more users will run into underpowering hassles.
     
    Headless RPi users are usually not even aware that they run under-volted (example, example, example, example) and if even under-voltage 'protection' (frequency capping turning the RPi into a 600 MHz board) doesn't help, crashes, freezes and kernel panics occur that are usually considered 'software issues' (example). I believe the RPi people are well aware that powering with Micro USB is a shitty idea. But they do a great job masquerading the problem and try really hard to keep their users clueless (see this funny 'Micro USB powering' thread and how it ended -- archived version here, they love to censor over in their forum).
     
    TL;DR: it's not fun but simply boring to bring Armbian to the RPi, and the support situation for both users and us here would be a nightmare (users expecting everything to be Raspbian compatible, while we would have to deal with the results of crappy Micro USB powering being reported as software bugs and with users miseducated by the RPi people, unable to realize that under-voltage is the problem and that their '3A PSU' won't help here anyway). RPi folks really try hard to let their users focus on BS amperage ratings instead of explaining the real problem.
     
    BTW: I know a bit what I'm talking about since having maintained an OS image for RPi 2, 3 and 3+ that gets downloaded +10,000 times each month.
  9. Like
    tkaiser got a reaction from Tido in Pi-factor cases   
    Since we already trashed the whole thread with all this thermal babbling... Pi 3 B+ without heatspreader: https://youtu.be/4LtL9e7JqxE?t=3m10s
     
  10. Like
    tkaiser got a reaction from joaofl in Android on H6 Boards   
    Replacing the .dtb file contents with the one for PineH64 works somewhat (at least the kernel uses the updated .dtb -- see the gmac-power0, gmac-power1 and gmac-power2 entries in serial console output). But then with the Xunlong image PineH64 panics: https://pastebin.com/h9G1kRQx
     
    Same modified image boots on an OPi Lite2 but... this Allwinner BSP crap is so horrible that it's really not worth a look (at least for the stuff I'm interested in). UAS is not supported, a quick USB3 storage 'performance' test results in 40/45 MB/s read/write with an EVO840 SSD, and no drivers are included for any of the popular USB Ethernet dongles. I had the idea to do some tests with the BSP to get some baseline numbers but in this state this is all just a waste of time...
  11. Like
    tkaiser got a reaction from manuti in Armbian for OrangePi PC2, AllWinner H5   
    Which 'outsourcing'? Allwinner sells hardware. Cheap hardware for special markets. Android tablets back then, $something in between, now smart speakers, dashcams, retro gaming stuff, again tablets and TV boxes. Nintendo's NES Classic sold 2.3 million units in no time. Using a boring A33 SoC with technology from 5 years ago running a smelly 3.4.39 kernel from 5 years ago. Their customers (that's not us) do not care, so why should Allwinner care?
     
    They enable device manufacturers to throw out cheap hardware with somewhat working software with ok-ish margins (their main market) or sometimes enable their customers like Nintendo to sell insanely overpriced/overhyped products where again no-one cares about kernel, software or anything else we would be interested in.
     
    For Allwinner there's still no 'Linux market'. Though things might change in the future. But unless there's an incentive to mainline their own hardware and submit code upstream (good luck given the BSP code quality) I doubt anything will change soon or at all. But of course I highly appreciate that they now contribute and react in a very responsive way. As @jernej pointed out in the meantime many requests will be answered positively (and I always chuckle seeing wink, the Allwinner guy, directly contributing to linux-sunxi wiki now -- I never thought this would happen)
  12. Like
    tkaiser got a reaction from manuti in Pi-factor cases   
    Sure, this is how it should work. Great that now even the RPi folks got it
     
    Yesterday I 'unboxed' the Orange Pi Lite 2 (Allwinner H6). As small as the H3 based Lite but with an extra thick PCB. After 10 minutes of idle operation the whole PCB including all receptacles is warm, so the groundplane efficiently spreads the heat away from the SoC. I put 3 low performing heatsinks on SoC, PMIC and DRAM and the reported SoC temperature went down from 49°C in idle to 46°C (after waiting the same 10 minutes or until temperature is stable).
     
    So I'm still curious how well a 2mm thermal pad between the PCB's lower side and an aluminium enclosure would work (to transport heat out of an enclosure). To be clear: I'm talking about something like this (and am not willing to spend my own time on such tests any more since I'm done with the low consumption/thermal stuff)
     
    Testing such stuff with enclosures that already exist seems impossible. The FLIRC is constructed wrongly and to buy the Wicked you must be mad.
  13. Like
    tkaiser got a reaction from manuti in Pi-factor cases   
    Sure. Better results compared to the board lying flat on a pillow
     
    Since the latest RPi 3+ from last week now also started to copy what those cheap Orange Pis have been doing for years (using the PCB ground plane as a massive heatsink) I suggested this test over at the RPi forum: https://archive.fo/6kzg0 ... and (not so) surprisingly the post got censored: https://www.raspberrypi.org/forums/viewtopic.php?f=63&t=207863&start=225#p1286503 -- they really don't like it over there when their users could get the idea that more than Raspberries grows on this earth 
     
    BTW: really impressive how inefficient the old RPi 3 was and is from a thermal point of view (two thermal images comparing the old RPi 3 vs. the new 3 B+ were embedded here).
  14. Like
    tkaiser got a reaction from TonyMac32 in Pi-factor cases   
    But since it's so inefficient (again: https://youtu.be/mBSfb6vlfKo?t=6m37s ) at least it will fit on the new RPi 3 B+. The FLIRC ruins the thermal efficiency of the enclosure material with a huge gap between chips and enclosure filled with thick thermal pads. Since the BCM2837B0 on the new Raspi is taller they'll soon start to put two thermal pads into the package: the old thick and inefficient one for the BCM2837 (an overheating SoC needing good heat transfer) and a much thinner one for the new board that doesn't need a heatsink anyway. Well done or... it's just a typical 'Raspberry Pi product': eye candy and a good feeling, while the real result is inefficient.
     
  15. Like
    tkaiser got a reaction from manuti in Some storage benchmarks on SBCs   
    Early 2018 update
     
    Time for another small update. It's 2018 now and since it seems Armbian will support a couple of RK3399 devices later this year let's have a closer look at the storage performance of them.
     
    RK3399 provides 2 individual USB3 ports which seem to share a bandwidth limitation (you get close to 400MB/s with a fast SSD on each USB3 port, but with two SSDs accessed in parallel total bandwidth won't exceed 400MB/s). RK3399 also features a PCIe 2.1 implementation with 4 lanes that should operate at Gen2 link speeds (that's 5GT/s, so in total we would be talking about 20GT/s if the SoC is able to cope with this high bandwidth). Rockchip changed their latest RK3399 TRM (technical reference manual) last year and downgraded the specs from Gen2 to Gen1 (2.5GT/s, or 10GT/s with 4 lanes). So there was some doubt whether there's an internal overall bandwidth limitation or something like that (see speculation and links here).
     
    Fortunately a Theobroma engineer did a test recently using Theobroma System's RK3399-Q7 with a pretty fast Samsung EVO 960 NVMe SSD: https://irclog.whitequark.org/linux-rockchip/2018-03-14 -- it seems RK3399 is able to deal with storage access at up to 1.6GB/s (yes, GB/s and not MB/s). This is not only important if you want ultra fast NVMe storage (directly attached via PCIe and using a way more modern and efficient protocol than the ancient AHCI used by SATA) but also if RK3399 device vendors want to put PCIe attached SATA controllers on their boards. The ODROID guys chose to go with an ASM1061 (single lane) on their upcoming N1 since they feared switching to a x2 (dual lane) chip would only increase costs while providing no benefits. But Theobroma's test results are an indication that even x4 attached controllers using all PCIe lanes could make reasonable use of the full PCIe bandwidth of 20GT/s.
     
    Below we'll now have a look at USB3/UAS performance and PCIe attached SATA using the ASM1061 (both done on an ODROID N1 developer sample some weeks ago). Those tests still use my usual EVO840 SATA SSD so results are comparable. You see two ASM1061 numbers since one was made with active PCIe link state power management and the other without (more or less only affecting access patterns with small block sizes).
     
    Then of course beeble's NVMe SSD tests are listed (here fio and there iozone -- numbers should also be valid for the other RK3399 devices where you can access all 4 PCIe lanes via M.2 key M or a normal PCIe slot: Rock960, NanoPC-T4 or RockPro64 (M.2 adapter needed then of course -- ayufan tested and got even better iozone numbers than beeble)). And maybe later I'll add SATA and USB3 results from EspressoBin with latest bootloader/kernel.
     
    (for an explanation of which boards represent which SoC and why please see my last post above)
                              Random IO in IOPS   Sequential IO in MB/sec
                              4K read/write       1M read/write
    RPi 2 under-volted        2033/2009            29 / 29
    RPi 2                     2525/2667            30 / 30
    Pine64 (USB2/UAS)         2836/2913            42 / 41
    Banana Pi Pro (SATA)      3943/3478           122 / 37
    Wandboard Quad (SATA)     4141/5073           100 / 100
    ODROID-XU4 (USB3/UAS)     4637/5126           262 / 282
    ROCK64 (USB3/UAS)         4619/5972           311 / 297
    EspressoBin (SATA)        8493/16202          361 / 402
    Clearfog Pro (SATA)       10148/19184         507 / 448
    RK3399 (USB3/UAS)         5994/6308           330 / 340
    ASM1061 powersave         6010/7900           320 / 325
    ASM1061 performance       9820/16230          330 / 330
    RK3399-Q7 (NVMe)          11640/36900         1070 / 1150
    As we can see, RK3399 USB3 performance slightly improved compared to RK3328 (Rock64). It should also be obvious that 'USB SATA' -- in this case USB3/SuperSpeed combined with a great UAS capable USB-to-SATA bridge (JMicron JMS567 or JMS578, ASMedia ASM1153 or ASM1351) -- is not really that much worse compared to either PCIe attached SATA or 'native SATA'. If it's about sequential performance then USB3 even slightly outperforms PCIe attached SATA. The 2 USB3 ports RK3399 provides, when combined with great UAS capable bridges, are really worth a look to attach storage to.
     
    NVMe obviously outperforms all SATA variants. And while combining an ultra fast and somewhat expensive NVMe SSD with a dev board is usually nothing that happens in the wild, at least it's important to know what the limitations look like. As we've seen from the RK3399-Q7 tests with fio and larger blocksizes we get close to 1600 MB/s at the application layer which is kinda impressive for devices of this type. Another interesting thing is how NVMe helps with keeping performance up: This is /proc/interrupts after an iozone run bound to the 2 big cores (taskset -c 4-5): https://gist.github.com/kgoger/768e987eca09fdb9c02a85819e704122 -- the IRQ processing happens on the same cores automagically, no more IRQ affinity issues with all interrupts ending up on cpu0  
     
    Edit 1: Replaced Pine64 numbers made with EVO750 from last year with fresh ones done with a more recent mainline kernel and my usual EVO840
     
    Edit 2: Added Raspberry Pi 2 results from here.
     
    Edit 3: Added EspressoBin numbers from here.
     
  16. Like
    tkaiser reacted to joaofl in Android on H6 Boards   
    @tkaiser I'll upload it and drop you the link asap
  17. Like
    tkaiser reacted to Igor in Tinkerboard wifi setup without Network Manager   
    The development branch https://github.com/armbian/build/tree/development/packages/bsp/common/etc/network is moving completely towards Network Manager; first boot setup with it was successfully tested. It worked with wireless and ethernet, with both normal and predictable interface names. It can be further expanded to hidden network connections ..., but problems are expected on Jessie due to an old version of NM.
     
    The main problem with this topic's issue is that the drivers for the Tinker Board's onboard wifi are not in the best shape, and if you want to use wpa_supplicant to connect you need to tell Network Manager to unmanage that particular wifi interface.
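     
    Marking an interface as unmanaged is done via NetworkManager's configuration, roughly like this (a sketch; wlan0 is an assumption, substitute the Tinker Board's actual wifi interface name):
        # /etc/NetworkManager/NetworkManager.conf (excerpt)
        [keyfile]
        unmanaged-devices=interface-name:wlan0

        # apply the change
        systemctl restart NetworkManager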
     
    @bulletim3 Checking this part of the script might give you some clues.
  18. Like
    tkaiser got a reaction from cyagon in ROCK64 2GB or ODroid HC1?   
    Honestly: what I would prefer doesn't really matter.
     
    Just a few notes:
    • With the latest 4.14 kernel HW accelerated crypto is possible with the Exynos 5422: https://wiki.odroid.com/odroid-xu4/software/disk_encryption -- no idea how easy it is to use though, and you should keep in mind that Armbian's next branch still uses kernel 4.9 (the last try to switch to 4.14 ended up in bricked boards, but once this is resolved this functionality should be available with Armbian too)
    • RK3328 implements the ARMv8 Crypto Extensions so at least software using AES will benefit from this automagically. @chwe already pointed to threads with performance numbers; you could also have a look in the ODROID N1 thread to compare cryptsetup benchmark numbers (though made with RK3399, so you have to do the math on your own using RK3328/RK3399 openssl numbers to compare HC1 and Rock64 performance here)
    • Since you mentioned 'German customs fees': buying at Pollin should ease warranty handling and stuff: https://www.pollin.de/search?query=odroid hc1&channel=pollin-de
    • Since you compare two boards running different architectures and since you seem to be keen on 'more DRAM' you should keep in mind that memory usage with a 64-bit userland can be much higher compared to running a 32-bit userland. See especially the links I reference here: https://www.raspberrypi.org/forums/viewtopic.php?f=91&t=192321#p1281326
     
    In case the tasks you want to run are memory hungry, running an arm64 userland on the Rock64 can result in lower performance compared to the HC1 using the same amount of memory. So choosing an armhf userland instead might be an option. Not possible with Armbian right now, but since the excellent ayufan images exist a good basis could be his armhf OMV release: https://github.com/ayufan-rock64/linux-build/releases (grab the latest image marked as 'release' and avoid 'pre-release' if you're not an expert). AFAIR Nextcloud is then best installed in a Docker container...
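     
    Such a containerized Nextcloud setup could look roughly like this (a sketch; assumes Docker is already installed on the armhf image and the official nextcloud image is available for that architecture, port and volume name are placeholders):
        docker run -d --name nextcloud \
          -p 8080:80 \
          -v nextcloud_data:/var/www/html \
          --restart unless-stopped \
          nextcloud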
  19. Like
    tkaiser got a reaction from joaofl in Android on H6 Boards   
    And for those still happy to get their hands dirty:
    • http://files.pine64.org/os/sdk/H64-ver1.1/H6-lichee-v1.1.tar.gz (new version but still based on kernel 3.10.65)
    • How to get their 4.9 kernel built: https://github.com/Allwinner-Homlet/H6-BSP4.9-brandy/pull/1/commits/9f24623786dd1bd83cac9938f81516e671a1e304 and why it's useless: https://github.com/Allwinner-Homlet/H6-BSP4.9-brandy/issues/2
    So everything as expected
  20. Like
    tkaiser got a reaction from Dwyt in Learning from DietPi!   
    I would call the price/performance not good but simply awesome today given we get ultra performant cards like the 32GB SanDisk Ultra A1 for as low as 12 bucks currently: https://www.amazon.com/Sandisk-Ultra-Micro-UHS-I-Adapter/dp/B073JWXGNT/ (I got mine 2 weeks ago for 13€ at a local shop though). And the A1 logo is important since cards compliant to A1 performance class perform magnitudes faster with random IO and small blocksizes (which pretty much describes the majority of IO happening with Linux on our boards).
     
    As can be seen in my '2018 A1 SD card performance update' the random IO performance at small blocksizes is magnitudes better compared to an average/old/slow/bad SD card with low capacity:
                  average 4GB card   SanDisk Ultra A1
    1K read       1854               3171
    4K read       1595               2791
    16K read       603               1777
    1K write        32                456
    4K write        35                843
    16K write        2                548
    With pretty common writes at 4K block size the A1 SanDisk shows 843 vs. 35 IOPS (IO operations per second) and with 16K writes it's 548 vs. 2 IOPS. So that's over 20 or even 250 times faster (I don't know the reason but so far all average SD cards I tested with up to 8 GB capacity show this same weird 16KB random write bottleneck -- even the normal SanDisk Ultra with just 8GB). This might be one of the reasons why 'common knowledge' amongst SBC users seems to be to avoid writing to the SD card at all: the majority doesn't take care which SD cards they use, tests them wrongly (looking at irrelevant sequential transfer speeds instead of random IO and IOPS) and therefore chooses pretty crappy ones.
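     
    Testing a card the 'right' way means looking at random IO at small block sizes. An iozone run on a mounted partition of the card reports exactly those numbers; something like the following (a sketch; -e/-I ensure flushing and direct IO, the record sizes cover both small random and large sequential accesses):
        cd /path/to/mounted/sdcard
        iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2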
     
    BTW: the smallest A1 rated cards available start at 16GB capacity. But for obvious reasons I would rather buy those with 32GB or even 64GB: the price/performance ratio is much better and it should be common knowledge that buying larger cards 'than needed' makes SD cards wear out later.
     
  21. Like
    tkaiser got a reaction from joaofl in Android on H6 Boards   
    I asked ayufan in the meantime and he pointed me to https://github.com/ayufan-pine64/boot-tools/blob/with-drm/Makefile#L113-L116 for this. No idea about requirements/tools needed and where they can be found. Just had a quick look into https://github.com/Allwinner-Homlet/H6-BSP4.9-tools but immediately ran away after clicking around. It's the usual AW mess...
  22. Like
    tkaiser got a reaction from joaofl in Android on H6 Boards   
    Nope, that's the wrong one (only suitable for Allwinner's old 3.4 kernel). You need to modify the .dtb file or use the Allwinner BSP/SDK to start with sys_config.fex. All Ethernet related entries for the Orange board have to be taken from Xunlong otherwise it won't work.
     
    When dealing with this stuff 2 years ago I was a bit late to the game. At that time longsleep had already done an amazing amount of work to clean up the Allwinner BSP mess and we could deal with a build system that was at least somewhat usable. Back then he switched from sys_config.fex directly to .dts, but in the meantime ayufan reverted to sys_config.fex with his builds (I searched for the relevant commits but did not find them. Maybe @Xalius has a clue how the initial conversion works).
     
    So you probably find some information here but I've no idea how filesystem structure looks like on those Android builds.
  23. Like
    tkaiser got a reaction from joaofl in Android on H6 Boards   
    Just a quick note: this sys_config.fex stuff is how Allwinner has been doing hardware description for years, and still does these days, even though the kernels they use now rely on something different called 'device tree'. So starting with their kernel 3.10, sys_config.fex is processed by an Allwinner tool to create a .dts and .dtb file (do a web search for 'A64 dev tree&sysconfig使用文档.pdf' to get a 'nice' PDF describing the process)
     
    In case you want to adjust pin configuration (e.g. which pins are used to attach the external Gigabit Ethernet PHY -- I hope you know that the real Ethernet controller is part of the SoC) you need to either edit sys_config.fex and then pipe the file through Allwinner's converter, or you search for the already created .dtb file, convert it back into .dts, adjust things and convert back to .dtb (requires a Linux installation and the dtc tool which in Debian/Ubuntu is part of the device-tree-compiler package)
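     
    That decompile/edit/recompile round trip looks roughly like this (a sketch; file names are placeholders):
        apt install device-tree-compiler
        dtc -I dtb -O dts -o board.dts board.dtb    # decompile to editable source
        # ... edit board.dts (e.g. the GMAC/PHY pin and delay entries) ...
        dtc -I dts -O dtb -o board.dtb board.dts    # compile back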
     
    It would be great if a moderator could split off all this 'Using Android on H6 devices' stuff my post included into an own thread below 'peer to peer support' since it really doesn't fit well here in this thread.
     
    Wrt getting Ethernet to work you need
    • to edit the right file where pin mappings are defined
    • sometimes to deal with proprietary hacks (at least this was the case with Pine64)
    • to take care of something called tx/rx delays that are board specific
    You'll find a lot of the related info when searching around these issues with Pine64 two years ago. While I was partially part of the process back then, for obvious reasons I consider this just a waste of time, like everyone else who already went through this (dealing with this Allwinner BSP stuff in general). So from a Linux / Armbian point of view we need the relevant stuff to be mainlined (community work) before we could consider starting to support any H6 device. Based on Allwinner's BSP this for sure will not happen.
     
  24. Like
    tkaiser got a reaction from Tido in board support - general discussion / project aims   
    Ahem, that's how open source projects usually work (see Linux kernel or u-boot for example -- @zador.blood.stained already commented on the real issue here)
     
    We have no testing branch, we have constant upgrade troubles breaking user installations, and interpreting your answer this won't change anytime soon or at all. I cannot use Armbian any longer for my own needs without preventing Armbian packages from being updated. That's just bizarre and makes me sad.
     
    It's not about this specific case with this specific board. I'm talking about how we can prevent this horrible update experience from happening again and again and again. Suppose the patch hadn't just been broken but had really messed up sunxi u-boot... what would've happened? Why does this have to happen?
     
    Why can't we agree on policies every developer is bound to? Before we add a new device, a 'Board bring up' thread will be started where pros/cons and problems are discussed. I have had the ugly hack they 'needed' to apply on my list for months but waited for someone to start a Board Bring Up thread (since I was instructed to stop starting such threads when the device should NOT be supported by Armbian).
     
    If I understand correctly @Icenowy addressed the problem with changed card detect GPIO between pre-production and production boards already back in November last year (see mmc0 definition in her v2 patch series) and then SinoVoip as usual ignored everything and came up three weeks later with their ugly hack instead of relying on her work.
     
     
    Well, maybe that's why policies are useful or even mandatory. IMO there's no need to suck in patches sitting in a board vendor's repo (a vendor known for never having contributed anything upstream, so 'port and forget' code style has to be expected), especially while Armbian still does not provide any means of a sane testing/beta environment. It's all done in the master branch, which is IMO terrible.
  25. Like
    tkaiser got a reaction from chwe in Learning from DietPi!   
    Please be careful since this is an entirely different workload compared to 'using SD cards for the rootfs': storing images and video is more or less only sequential IO coming with a write amplification close to the optimum of 1. This means that the amount of data written at the filesystem layer, the block device layer and the flash layer is almost identical.
     
    With our use cases, when running a Linux installation on the SD card, this looks totally different since the majority of writes are very small and write amplification with such small chunks of data is way higher, which can result in 1 byte changed at the filesystem or block device layer generating a whole 8K write at the flash layer (so we have a worst case 1:8192 ratio of 'data that has changed' vs. 'real writes to flash cells')
     
    Please see here for the full details why this is important, how it matters and what it affects: https://en.wikipedia.org/wiki/Write_amplification
     
    Our most basic take on that in Armbian happens at the filesystem layer due to mount options since we use 'noatime,nodiratime,commit=600' by default. What do they do?
    • noatime prevents the filesystem from generating writes when data is only accessed (the default is that access times are logged in filesystem metadata, which leads to updated filesystem structures and therefore unnecessary writes every time filesystem objects are merely read)
    • nodiratime is the same for directories (not that relevant though)
    • commit=600 is the most important one since this tells the filesystem to flush changes back to disk/card only every 600 seconds (10 min)
    Increasing the commit interval from the default 5 to 600 seconds results in the majority of writes waiting in DRAM to be flushed to disk only every 10 minutes. Those changes sit in the Page Cache (see here for the basics) and add up as so-called 'dirty pages'. So the amount of dirty pages increases and is set back to 0 every 10 minutes after flushing the changes to disk. This can be watched nicely with monitoring tools or something as simple as:
        watch -n 5 grep Dirty /proc/meminfo
    While @Fourdee tries to explain that 'dirty pages' would be something bad or even an indication of degraded performance, it's exactly the opposite: this is just how Linux basics work, with a tunable set for a specific use case (rootfs on SD card). To elaborate on the effect: let's think about small changes affecting only 20 bytes every minute. With filesystem defaults (commit interval = 5 seconds) this will result in 80KB written within 10 minutes (each write affects at least a whole flash page and that's AFAIK 8K or 16K on most cards, so at least 8K * 10), while with a 10 minute commit interval only 8KB will be written. Ten times less wear.
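     
    For reference, the noatime/nodiratime/commit=600 options discussed above end up in /etc/fstab roughly like this (a sketch; the UUID is a placeholder and the remaining options are assumptions, check an actual Armbian fstab for the authoritative line):
        # /etc/fstab (excerpt)
        UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx / ext4 defaults,noatime,nodiratime,commit=600,errors=remount-ro 0 1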
     
    But unfortunately it's even worse with installations where users run off low capacity SD cards. To my knowledge in Linux we still have no TRIM functionality with MMC storage (SD cards, eMMC), so once the total amount of data written to the card exceeds its native capacity the card controller has no clue how to distinguish between free and occupied space and therefore has to start deleting (there's no overwrite with flash storage, see for example this explanation). So all new writes now might even affect not just pages but whole so-called 'Erase Blocks' that can be much larger (4MB or 16MB for example on all the cards I use). This is for example explained here.
     
    In such a case (amount of writes exceed card's native capacity) we're now talking about writes affecting Erase Blocks that might be 4MB in size. With the above example of changing 20 bytes every minute with the default commit interval of 5 seconds at the flash layer now even 40 MB would be written while with a 10 min commit interval it's 4MB (all with just 200 bytes having changed in reality).
     
    So if you really care about the longevity of your card you buy good cards with capacities much 'larger than needed', clone them from time to time to another card and then perform a TRIM operation manually by using your PC or Mac and the SD Association's 'SD Formatter' to do a quick erase there. This will send ERASE (CMD38) for all flash pages to the card's controller, which now treats all pages as really empty, so new writes to the card from now on do NOT generate handling of whole Erase Blocks but happen at the page size level again (until the card's capacity is fully used, then you would need to repeat the process).
     
    There's a downside with an increased commit interval as usual and that affects unsafe power-offs / crashes. Everything that sits in the Page Cache and is not already flushed back to disk/card is lost in case a power loss occurs or something similar. On the bright side this higher commit interval makes it less likely that you run into filesystem corruption since filesystem structures are updated on disk also only every 10 minutes.
     
    Besides that we try to cache other stuff in RAM as much as possible (e.g. browser caches and user profiles using 'profile-sync-daemon') and the same goes for log files, which are among the candidates showing the worst write amplification possible when logs are allowed to be updated on 'disk' every few seconds (unfortunately we can't just throw logs away as for example DietPi does by default, so we have to fiddle around with stuff like our log2ram implementation, which still shows lots of room for improvement) 