Everything posted by tkaiser

  1. Thank you. So to focus on the relevant stuff (device tree definitions suitable for Allwinner's boring 3.10 kernel): PineH64.dts: https://pastebin.com/JzG8EBg7 OrangePiH6.dts: https://pastebin.com/ewNGZcS0 diff: https://pastebin.com/0Bq49EXq BTW: while Allwinner is using the suffix .fex for a lot of BSP stuff we shouldn't call those BLOBs fex files since that's misleading. Their fex format is what sys_config.fex looks like (and it's a bit sad that they still internally use this to describe the hardware and then convert to .dts later) while all those various concatenations of BLOBs are for whatever reasons named .fex too but are just random BLOBs containing Allwinner proprietary stuff.
  2. Since @joaofl wrote 'Then to repack them simply type cat *.dtb > boot_package.fex' no tool should be needed anyway. It's just a bunch of concatenated .dtb files, so all you need to know is the magic number to split the file into the individual '.dtb' files: https://github.com/file/file/blob/daaf6d44768677aca17af780bba0a451fbb69ac8/magic/Magdir/linux#L403 Edit: Not true since it's only one real .dtb file plus some Allwinner proprietary BLOBs with random suffixes
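The splitting mentioned above can be sketched in a few lines. This is an illustrative example only (the input filename boot_package.fex and output naming are hypothetical); it relies on two documented properties of the flattened device tree format: the big-endian magic number 0xd00dfeed and the totalsize field stored as a big-endian u32 right after the magic.

```python
# Sketch: find flattened device trees inside a concatenated BLOB by
# scanning for the DTB magic number 0xd00dfeed. Each DTB header stores
# its total size as a big-endian u32 right after the magic, so the
# embedded .dtb can be carved out once the magic is found.
import struct

DTB_MAGIC = b"\xd0\x0d\xfe\xed"

def find_dtbs(blob: bytes):
    """Return a list of (offset, size) for every DTB header in the blob."""
    found = []
    pos = blob.find(DTB_MAGIC)
    while pos != -1:
        # totalsize is the big-endian u32 right after the magic
        (size,) = struct.unpack_from(">I", blob, pos + 4)
        found.append((pos, size))
        pos = blob.find(DTB_MAGIC, pos + 4)
    return found

# Quick self-check on a synthetic blob: 4 junk bytes followed by a fake
# 16-byte 'DTB' (magic + totalsize + padding).
demo = b"junk" + DTB_MAGIC + struct.pack(">I", 16) + b"\x00" * 8
print(find_dtbs(demo))  # -> [(4, 16)]
```

To actually carve the real .dtb out of a boot_package.fex you would read the file, iterate over find_dtbs() and write blob[off:off + size] to a separate file; everything between and around the DTBs is the Allwinner proprietary stuff mentioned above.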
  3. Search for 'boot_package.fex' on the first page of this thread.
  4. I don't care about the 'quality' of their image since replacing this with an Armbian 64-bit userland is quite easy. I just don't want to waste too much time (as with A64 2 years ago), so starting with something that at least is known to boot (u-boot + BSP kernel) is the best way. All I need is the .dts file extracted from Pine's Android image which @joaofl most probably already has lying around somewhere...
  5. Early 2018 update Time for another small update. It's 2018 now and since it seems Armbian will support a couple of RK3399 devices later this year let's have a closer look at their storage performance. RK3399 provides 2 individual USB3 ports which seem to share a bandwidth limitation (you get close to 400MB/s with a fast SSD on each USB3 port but with two SSDs accessed in parallel total bandwidth won't exceed 400MB/s). RK3399 also features a PCIe 2.1 implementation with 4 lanes that should operate at Gen2 link speeds (that's 5GT/s, so in total we would talk about 20GT/s if the SoC is able to cope with this high bandwidth). Rockchip changed their latest RK3399 TRM (technical reference manual) last year and downgraded specs from Gen2 to Gen1 (2.5GT/s or 10GT/s with 4 lanes). So there was some doubt whether there's an internal overall bandwidth limitation or something like that (see speculation and links here). Fortunately a Theobroma engineer did a test recently using Theobroma System's RK3399-Q7 with a pretty fast Samsung EVO 960 NVMe SSD: https://irclog.whitequark.org/linux-rockchip/2018-03-14 -- it seems RK3399 is able to deal with storage access at up to 1.6GB/s (yes, GB/s and not MB/s). This is not only important if you want ultra fast NVMe storage (directly attached via PCIe and using a protocol way more modern and efficient than the ancient AHCI used by SATA) but also if RK3399 device vendors want to put PCIe attached SATA controllers on their boards. The ODROID guys chose to go with an ASM1061 (single lane) on their upcoming N1 since they feared switching to a x2 (dual lane) chip would only increase costs while providing no benefits. But Theobroma's test results are an indication that even x4 attached controllers using all PCIe lanes could make reasonable use of the full PCIe bandwidth of 20GT/s. Below we'll now have a look at USB3/UAS performance and PCIe attached SATA using an ASM1061 (both done on an ODROID N1 developer sample some weeks ago). 
Those tests still use my usual EVO840 SATA SSD so results are comparable. You see two ASM1061 numbers since one was made with active PCIe link state power management and the other without (more or less only affecting access patterns with small block sizes). Then of course beeble's NVMe SSD tests are listed (here fio and there iozone) -- numbers should also be valid for the other RK3399 devices where you can access all 4 PCIe lanes via M.2 key M or a normal PCIe slot: Rock960, NanoPC-T4 or RockPro64 (an M.2 adapter is needed then of course -- ayufan tested and got even better iozone numbers than beeble). And maybe later I'll add SATA and USB3 results from EspressoBin with latest bootloader/kernel. (for an explanation which boards represent which SoC and why please see my last post above)

                             Random IO in IOPS   Sequential IO in MB/sec
                             4K read/write       1M read/write
RPi 2 under-volted               2033/2009           29 / 29
RPi 2                            2525/2667           30 / 30
Pine64 (USB2/UAS)                2836/2913           42 / 41
Banana Pi Pro (SATA)             3943/3478          122 / 37
Wandboard Quad (SATA)            4141/5073          100 / 100
ODROID-XU4 (USB3/UAS)            4637/5126          262 / 282
ROCK64 (USB3/UAS)                4619/5972          311 / 297
EspressoBin (SATA)               8493/16202         361 / 402
Clearfog Pro (SATA)             10148/19184         507 / 448
RK3399 (USB3/UAS)                5994/6308          330 / 340
ASM1061 powersave                6010/7900          320 / 325
ASM1061 performance              9820/16230         330 / 330
RK3399-Q7 (NVMe)                11640/36900        1070 / 1150

As we can see RK3399 USB3 performance slightly improved compared to RK3328 (Rock64). It should also be obvious that 'USB SATA' as in this case, using USB3/SuperSpeed combined with a great UAS capable USB-to-SATA bridge (JMicron JMS567 or JMS578, ASMedia ASM1153 or ASM1351), is not really that much worse compared to either PCIe attached SATA or 'native SATA'. If it's about sequential performance then USB3 even slightly outperforms PCIe attached SATA. The 2 USB3 ports RK3399 provides, when combined with great UAS capable bridges, are really worth a look to attach storage to. NVMe obviously outperforms all SATA variants. 
And while combining an ultra fast and somewhat expensive NVMe SSD with a dev board is usually nothing that happens in the wild, at least it's important to know what the limitations look like. As we've seen from the RK3399-Q7 tests with fio and larger blocksizes we get close to 1600 MB/s at the application layer which is kinda impressive for devices of this type. Another interesting thing is how NVMe helps with keeping performance up: This is /proc/interrupts after an iozone run bound to the 2 big cores (taskset -c 4-5): https://gist.github.com/kgoger/768e987eca09fdb9c02a85819e704122 -- the IRQ processing happens on the same cores automagically, no more IRQ affinity issues with all interrupts ending up on cpu0. Edit 1: Replaced Pine64 numbers made with EVO750 from last year with fresh ones done with a more recent mainline kernel and my usual EVO840 Edit 2: Added Raspberry Pi 2 results from here. Edit 3: Added EspressoBin numbers from here.
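The PCIe link-speed figures above can be sanity-checked with a bit of arithmetic. This is just an illustrative sketch: it assumes PCIe Gen1/Gen2 8b/10b line coding (8 payload bits per 10 transferred bits) and ignores packet/protocol overhead, so real-world throughput sits somewhat below these theoretical ceilings.

```python
# Back-of-the-envelope check of the PCIe numbers above. Gen1 and Gen2
# use 8b/10b line coding, so only 8 of every 10 transferred bits carry
# payload data.
def pcie_payload_mb_s(gt_per_s: float, lanes: int) -> float:
    """Theoretical payload bandwidth in MB/s (before protocol overhead)."""
    payload_bits_per_s = gt_per_s * 1e9 * 8 / 10   # 8b/10b encoding
    return payload_bits_per_s / 8 / 1e6 * lanes    # bits -> bytes -> MB

gen1_x4 = pcie_payload_mb_s(2.5, 4)   # Gen1 x4: 1000.0 MB/s
gen2_x4 = pcie_payload_mb_s(5.0, 4)   # Gen2 x4: 2000.0 MB/s
print(gen1_x4, gen2_x4)
```

The ~1.6 GB/s measured on the RK3399-Q7 is only plausible with a Gen2 x4 link: it clearly exceeds the Gen1 x4 ceiling of ~1 GB/s and sits comfortably below the Gen2 x4 ceiling of ~2 GB/s.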
  6. Can you please provide the correct .dtb for PineH64 from the Android image? Yesterday this guy arrived and I thought I'd play the game the other way around (using Xunlong's Ubuntu 'server' image and tweaking it to be usable with PineH64): BTW: This is the DT for Orange Pi One Plus: https://pastebin.com/ewNGZcS0
  7. Look into the USB3-A receptacle please. The huge contacts are for Hi-Speed (480 Mbps), the few tiny ones are for SuperSpeed (5 Gbps). Makes absolutely no sense at all (you want the connection using the way higher link rate to use the reliable contacts) but that's how USB3-A has been designed. Maybe you're just running into the usual USB3-A connector crappiness issue and re-inserting the cable with some force already fixes it? BTW: providing dmesg output (or armbianmonitor -u) would be useful.
  8. http://share.loverpi.com/board/libre-computer-project/libre-computer-board-roc-rk3328-cc/image/ubuntu/README.txt
  9. Maybe. There's one change wrt UAS between 4.9 and 4.14 but shouldn't apply here. Can you provide debug output please? sudo armbianmonitor -u
  10. In case you can measure/monitor the voltage available to HC1/SSD you should do this. Most probably it's just the usual 'powered with 5V' sh*t show almost all SBCs are affected by. Just in case you want to waste your time reading a rant: https://forum.odroid.com/viewtopic.php?f=149&t=30125&p=217733#p217733
  11. With Hardkernel's bootloader blobs it should be possible to let the S905 run with 1.7GHz and then numbers slightly improve. But the results were already known before since SoCs matter and not boards (see ODROID-C2 numbers, as expected: same S905, same performance at same clockspeed with same memory configuration -- yeah, the latter is where boards might make a small difference )
  12. I think I don't understand... As far as I know NM isn't touching any interface that is defined in /etc/network/interfaces, right? By default no wireless interface is referenced on any Armbian image in /etc/network/interfaces so nmtui simply will always work? We ship with a bunch of different interface files, the default one containing commented wlan0 entries. But at least on some Wi-Fi equipped boards (not all for whatever reasons) this interfaces.default file will be replaced at first boot with the entirely empty interfaces.network-manager file so people can even read inside that they should stop fiddling around and use nmcli/nmtui instead. So while I have to admit that our handling of /etc/network/interfaces* is somewhat brain-dead I still don't get your point... care to elaborate?
  13. In case you want to give the BPi Zero a 2nd chance it might be an option to try to poke the vendor as @chwe already suggested. If they would start to support their own hardware properly (not using ugly hacks but the sane fix they got for free from community member Icenowy) and submit a sane pull request against Armbian's build system, adding this board as CSC configuration would end up with users being able to build their own images (though @Sombohan got it working as he outlined in the other thread -- but we cannot add an ugly hack to the build system that breaks every other Allwinner board just to add one board most probably none of the developers have in their hands)
  14. Most probably not. This board has been added untested for whatever reasons to Armbian's build system with patches that can't work. Since one of those patches was rather dangerous it has now been removed and someone needs to fix this crap or the remaining 4 files should be removed too: https://github.com/armbian/build/commit/075259c3456c6ef016460be18d3eeddc66e4fed1 The way this board is handled in the build system right now is just an insane mess leading to our users wasting their time for nothing!
  15. This has been the default for almost 1.5 years now since NM is the only reasonable choice for inexperienced users to get Wi-Fi up and running in no time. It's not planned to remove nmtui and IMO there's also no need to document anything and especially NOT how to get rid of network-manager since this will be the first thing inexperienced users will do then, just to struggle with all those outdated 'set up Wi-Fi on your Raspberry Pi by fiddling around in some text files' tutorials the Internet is flooded with. If someone for whatever reasons wants to 'disable network manager' simply typing in these three words in either of the two search variants this site provides should be sufficient: the internal forum search in the upper right corner: https://forum.armbian.com/search/?&q=disable network manager (as usual not that great results) or the 'Google site search' next to it: https://www.google.com/search?rls=en&q=disable+network+manager+site%3Aforum.armbian.com&oq=disable+network+manager+site%3Aforum.armbian.com (the first hit explains everything)
  16. If you're running with at least kernel 4.4 I would let btrfs do the job and tell logrotate to stop compressing logfiles. Which is BTW exactly what Armbian is doing when running off a compressed btrfs (which is our default when using btrfs): https://github.com/armbian/build/blob/7927a540da0698593f6786ceb699f0844ecf6981/packages/bsp/common/etc/init.d/armhwinfo#L74-L80 But let's focus in this thread on swap usage (or zram!) and maybe better open up a new thread discussing log compression/rotation strategies...
  17. You might want to read carefully through this:
  18. I don't know why anyone is trying to use swap at all. Swapping to HDD or SD card is awfully slow and the majority of boards lacks a fast interface to use an SSD or fast eMMC for swap. So why not stop using swap and use zram instead? Or a board with an appropriate amount of DRAM? With a recent Armbian installation all you need to do to use zram is to uncomment this line in /etc/init.d/armhwinfo, then adjust vm.swappiness in /etc/sysctl.conf (change it from 0 to 60) and reboot. On most recent Ubuntu Xenial images zram is already active by default (check the free output) so nothing needs to be done anyway...
  19. If it's an Allwinner device better look into linux-sunxi wiki first (and if someone finds stuff somewhere else please upload the documents where they belong to): http://linux-sunxi.org/Orange_Pi_One_Plus#See_also
  20. Just to save you some time: https://www.google.com/search?rls=en&q=webgl+site%3Aforum.odroid.com&oq=webgl+site%3Aforum.odroid.com vs. https://www.google.com/search?rls=en&q=webgl+site%3Aforum.armbian.com&oq=webgl+site%3Aforum.armbian.com 4 times more hits at forum.odroid.com while they only support a fraction of the boards Armbian does. I would believe none of the devs ever had a look into WebGL or similar GPU centric stuff (at least I never connected any of my ODROIDs to a display ever)
  21. Well, usually it's the other way around. First developers discuss what they think a problem is, then they try to find a solution for this problem involving an appropriate strategy, weighing pros and cons (e.g. discussing how to handle bug fixes and hot fixes first, then implementing something that suits these needs). IMO our problem is lack of communication and coordination (see the 5.38 update). The main area we should focus on IMO is 'upgrade experience' since while it's quite OK if new images don't work or are malfunctioning, users get it that there's something wrong rather quickly -- they might only lose some time. But with updates ending up with bricked boards it's a real mess since we destroy productive (server) installations. Currently Armbian is an example of how not to do Debian: combining the disadvantages of stable/oldstable (everything outdated as hell) with the disadvantages of testing (do an update and we will brick your board). Addressing this would require getting an idea how to deal with upstream updates (since those constantly introduce an awful lot of problems). Do we need to update u-boot every two months? Can we afford doing this given that 'Team Testers' is obviously an illusion while testing all possible upgrade scenarios would be very time consuming? Do we need to always force a bleeding edge mainline kernel? There's a reason Debian and Ubuntu use their own kernels and backport fixes. That's not a solution for us of course since we are both lacking manpower and trying the impossible: supporting an insanely high amount of different platforms. IMO these are the problems that need solutions or at least a strategy since obviously the way we do it now does not work at all (not at this scale with way too many boards and platforms and that amount of people willing to spend time on the project). What were the results of trying to solve these problems so far? Many hours/days spent on discussions and then... 
nothing (some actionism and the usual lack of communication and coordination again).
  22. This is not that much about Armbian. This is just about flash storage basics! But no one wants to read through the above or such complex and boring stuff like this: https://forum.armbian.com/topic/6444-varlog-file-fills-up-to-100-using-pihole/?do=findComment&comment=50833 -- no one wants to deal with the differences between the affected layers (filesystem, block device, flash memory). Users usually prefer simple answers (even if wrong) over complex ones. That's why a distro should choose sane defaults (a trade-off as usual since a high commit interval also affects the amount of data lost after a crash) and that's why educating users is needed. But it seems Armbian is one of the few places where that happens (or maybe it's even just my essays here in the forum?). The problem with SD cards and SBCs is the huge amount of ignorance and stupidity we all have to deal with. People are even today told to focus only on BS performance metrics (e.g. sequential read and write performance instead of the only thing that's really important: random IO performance, here especially writes and especially with small to medium blocksizes). People are even today encouraged to use the most shitty SD cards possible instead of being told the truth: better buy a new and large enough card with great random IO performance, since an awful lot of tasks depend on sufficient (random) IO performance and you simply won't be happy with a crappy SD card unless you use it read-only.
  23. But that (monitoring in almost real time and not looking at cumulative values) is the only way it works. You're not interested in the amount written at the filesystem or block device layer but at the flash layer below. Since that's the only layer that matters here and due to the nature of flash storage a lot is different here compared to spinning rust (HDDs). We're back at 'write amplification' again: https://en.wikipedia.org/wiki/Write_amplification

Overwriting one byte physically on a flash medium every second ends up in 3600 bytes changed after an hour at the filesystem layer. At the block device layer below, since filesystem structures are also affected, this will already be a lot more since each time you flush filesystem contents to disk metadata like 'last access time' will also be updated (it depends on the filesystem in question how efficiently this is done).

At the flash layer below the amount of data written is magnitudes higher since flash cannot overwrite data in place. If you want to change something, flash storage devices have to reorganize the contents of 'pages'. As long as there are reserve or unused pages the data from the page to be modified gets read from the original page, this page then will be added to the pool of pages that are soon to be erased (organized in much larger Erase Blocks) and the contents of the original page plus the changes get written to another new page. Common page sizes are 8K or even 16K these days so 1 byte changed at the filesystem layer is 8192 or 16384 bytes at the flash layer. The amount of wear is roughly 8,000 or even 16,000 times higher when changes at the filesystem layer are always flushed immediately to SD card. And that's what you have to look for with the above mentioned tools: how often and in which intervals data really gets written at the block device layer since this will end up with the flash translation layer below writing at least one full new page. 
Using the example above (overwriting exactly one byte every second for a whole hour) and now looking at filesystem commit intervals:
- Always directly flushing all contents to 'disk': for each byte written at the filesystem layer at least one whole page at the flash layer will be completely written. Let's use 8K page size and we end up with 3600 bytes changed at the filesystem layer becoming 3600 * 8192 bytes --> 28 MByte at the flash layer
- Using filesystem default commit intervals, e.g. ext4 which uses 5 seconds: 3600 bytes at the filesystem layer result in (3600 / 5) * 8192 --> 5.6 MByte at the flash layer
- Now let's take Armbian's commit interval of 600 seconds: the same 3600 bytes at the filesystem layer result in (3600 / 600) * 8192 --> 48 KByte at the flash layer

Appending something to a file (e.g. one line in a logfile) is 'changing a block' at the filesystem layer. Filesystems are usually organized in small entities called blocks or chunks and when you create the filesystem you also define this block or chunk size. If you do not define this on your own the filesystem utilities choose defaults (usually depending on the size of the filesystem: larger fs --> larger block sizes). With ext4 you would check for this with tune2fs -l /dev/mmcblk0p1 | grep '^Block size'. The number returned here (with Armbian and average size SD cards usually 1K) is what will later be changed at the block device layer when a sync call is issued. So a 50 byte new log entry when committed to 'disk' will end up as 1024 bytes at the filesystem layer and, depending on how many of these filesystem blocks have to be flushed to disk at the same time, as at least one page at the flash layer. So with an application that always tries to flush new log contents immediately to disk you end up with changes happening every second and 28 MB written at the flash layer. When the application doesn't try to flush the contents to disk but lets the filesystem do its job, then the commit interval gets important. 
With the default 5 seconds we're now at 5.6 MB while with Armbian's 600 seconds it's a hundred times less: just a few KB. That's the most important part when dealing with flash storage: understanding the effects of unnecessarily high write amplification and how to reduce it. Unfortunately this is both boring and complex at the same time and users usually prefer simple or even oversimplified BS answers to the question 'how can I prevent my SD card wearing out too fast?' So it's great that your test confirmed that pi-hole/dnsmasq do not try to flush their log contents immediately to disk and affected users can simply switch log2ram off until we might come up with a better replacement for log2ram. With our 600 second commit interval the amount of flash wear is pretty minimal. BTW: The above numbers rely on the rather optimistic assumption that only 8K per page have to be written. If the total amount of data written to the SD card at the flash layer already exceeded the card's native capacity (writing 8.1 GB to an 8GB SD card for example) then it might get much much worse since we can't make use of TRIM with SD cards, so no more free pages are available and the flash translation layer now always has to erase stuff first (affecting much larger Erase Blocks, their size is usually several MB compared to the small 8K/16K pages). More recent quality cards do a pretty good job here. They use an overprovisioning area with a sane amount of reserve pages and do data reorganization in the background (wear leveling and garbage collection). The problem is with crappy, old and slow cards with small capacity. Those really do not deal well with this situation, which they also run into much faster due to their small capacity. That's why we try to educate our users constantly to throw those old and crappy SD cards away and better invest a few bucks in a new and great performing SD card like those mentioned here:
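The commit interval arithmetic from the example above can be sketched in a few lines. This is a deliberately simplified model assuming an 8 KiB flash page size and that every flush rewrites exactly one whole page:

```python
# Write amplification arithmetic from the example above: one byte
# overwritten every second for an hour, 8 KiB flash pages, and three
# different flush strategies. Result is flash-layer bytes written.
PAGE = 8192      # assumed flash page size in bytes
SECONDS = 3600   # one hour, one 1-byte change per second

def flash_bytes_written(commit_interval_s: int) -> int:
    """Whole pages written when changes are flushed every commit_interval_s."""
    flushes = SECONDS // commit_interval_s
    return flushes * PAGE

print(flash_bytes_written(1))    # flush every write:    29491200 (~28 MiB)
print(flash_bytes_written(5))    # ext4 default commit:   5898240 (~5.6 MiB)
print(flash_bytes_written(600))  # Armbian's interval:      49152 (48 KiB)
```

The three printed values reproduce the 28 MByte / 5.6 MByte / 48 KByte figures above and make the roughly 600x difference between 'flush immediately' and Armbian's 600 second commit interval obvious.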
  24. That's surprising since Hardkernel AFAIK chose to run their OS images at full speed all the time (while Armbian and OMV choose to provide full performance when needed but let the CPU rest when there's nothing to do. We use the ondemand cpufreq governor for a reason and we implement some tweaks needed for IO workloads. Maybe their Ubuntu image suffers from an Ubuntu bug on ARM I explained already here and there?). Anyway: performance can suck for a variety of reasons and often it's not related to the latest thing one did (e.g. trying out another distro or stuff like that). Only active benchmarking can give answers to such questions but unfortunately that's always very expensive in terms of time 'wasted' to get a clue what's really going on
  25. Honestly: what I would prefer doesn't really matter. Just a few notes:
- With the latest 4.14 kernel HW accelerated crypto is possible with Exynos 5422: https://wiki.odroid.com/odroid-xu4/software/disk_encryption -- no idea how easy it is to use though and you should keep in mind that Armbian's next branch still uses kernel 4.9 (the last try to switch to 4.14 ended up in bricked boards but once this is resolved this functionality should be available with Armbian too)
- RK3328 implements the ARMv8 Crypto Extensions so at least software using AES will benefit from this automagically. @chwe already pointed to threads with performance numbers, you could also have a look in the ODROID N1 thread to compare cryptsetup benchmark numbers (though made with RK3399 so you have to do the math on your own using RK3328/RK3399 openssl numbers to compare HC1 and Rock64 performance here)
- Since you mentioned 'German customs fees': buying at Pollin should ease warranty handling and stuff: https://www.pollin.de/search?query=odroid hc1&channel=pollin-de
- Since you compare two boards running different architectures and since you seem to be keen on 'more DRAM' you should keep in mind that memory usage with a 64-bit userland can be much higher compared to running a 32-bit userland. See especially the links I reference here: https://www.raspberrypi.org/forums/viewtopic.php?f=91&t=192321#p1281326 In case the tasks you want to run are memory hungry, running with an arm64 userland on Rock64 can result in lower performance compared to HC1 using the same amount of memory. So choosing an armhf userland instead might be an option. Not possible with Armbian right now but since the excellent ayufan images exist a good basis could be his armhf OMV release: https://github.com/ayufan-rock64/linux-build/releases (grab the latest image marked as 'release' and avoid 'pre-release' if you're not an expert). AFAIR Nextcloud is then best installed in a Docker container...