Everything posted by tkaiser

  1. NM works great in CLI-only mode, and distros other than 'everything outdated as hell' (AKA Debian) have been using it for some time, especially in server environments. If you don't need a desktop environment, switching to Ubuntu Bionic might also be an idea (depends on what you're doing -- more recent package versions built with a more recent compiler sometimes result in stuff being done faster)
  2. Search for 'eth0: Link is Up' in there (it doesn't appear for every boot since logging in Armbian is currently broken but nobody cares). The first occurrence shows only a 100 MBits/sec negotiation without flow control, and then no DHCP lease gets assigned. I would suspect cabling/contact problems on first boot that 'magically' resolved later. There's no need to manually assign an IP address, and if you really want to do so, please keep the anachronistic /etc/network/interfaces file blank and use nmtui instead. These are the relevant log lines:

[   13.398258] rk_gmac-dwmac ff540000.ethernet eth0: Link is Up - 100Mbps/Full - flow control off
[    7.441246] rk_gmac-dwmac ff540000.ethernet eth0: Link is Up - 1Gbps/Full - flow control rx/tx
[    7.207272] rk_gmac-dwmac ff540000.ethernet eth0: Link is Up - 1Gbps/Full - flow control rx/tx
[   15.023224] rk_gmac-dwmac ff540000.ethernet eth0: Link is Up - 1Gbps/Full - flow control rx/tx
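To verify what actually got negotiated after boot, the link parameters can also be queried directly. A quick sketch (assuming the interface is called eth0 and ethtool is installed):

```shell
# show negotiated speed and duplex for eth0
ethtool eth0 | grep -E 'Speed|Duplex'
# show pause frame (flow control) parameters
ethtool -a eth0
```

Non-matching values across reboots would point at the same cabling/contact issue visible in the kernel log.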
  3. Do you use this board to boil water? 75°C SoC temperature reported at boot? I don't know how the thresholds for 'emergency shutdown' are defined, but in this situation some more CPU activity alone can result in Armbian shutting down. If you want to diagnose a problem you need to diagnose it. Some problems arise over time; 'worked fine for months and now unstable' happens for various reasons. Human beings then usually blame the last change they remember (like upgrading the OS) instead of looking for the actual culprit. Hopefully it still works, but in your situation I would immediately install RPi-Monitor using

armbianmonitor -r

You then get a monitor instance running on your board and can look at what happens and happened.

Hahaha! I run a bunch of Ubuntu and Debian servers with lowered DRAM, huge memory overcommitment (300% and not just the laughable 50% of the Armbian defaults) and of course vm.swappiness set to 100. No problem whatsoever. If you think swapping is the culprit you need to monitor swap usage! At the very least, your output shows memory and swap usage that is not critical at all. If you want to find out whether this is related to swap, you need to run something like 'iostat 600 >/root/iostat.log' and this in the background:

while true ; do
    echo -e "\n$(date)" >>/root/free.log
    free -m >>/root/free.log
    sleep 600
done

Then check these logs to see whether swapping happened. An alternative would be adjusting the RPi-Monitor templates, but while this is quite easy nobody will do it, of course. Zram is a kernel thing and not related to any userland stuff at all.

In other words: you have the same set of problems on a MicroServer and an ARM SBC running different software stacks? Are SBC and MicroServer connected to the same power outlet? Unfortunately logging in Armbian is currently broken (shutdown logging and ramlog, reported by @dmeey and @eejeel) but nobody cares (though @lanefu self-assigned the Github issue). Great timing to adjust some memory related behavior and at the same time accept that the relevant logging portions at shutdown, which would allow us to see what's happening 'in the field', do not work any more.
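Once such a free.log exists, swap usage over time can be summarized with a one-liner (a sketch; the path and format follow the logging loop above, which appends `free -m` output every 600 seconds):

```shell
# print the 'used' column of every Swap: line that free -m wrote to the log
awk '/^Swap:/ { n++; print "sample " n ": " $3 " MiB swap used" }' /root/free.log
```

A steadily growing number here would confirm swapping is actually happening; a flat zero rules it out.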
  4. Ok, I give up. Do whatever the f*ck you want with zram from now on in Armbian. I really can't stand it any more. Let's do it like grandma already did! That's the only way things work! Why monitor stuff? Why analyze problems? Why do some research? Assumptions, assumptions, assumptions! Nothing but assumptions!
  5. This needs to be measured since 'feeling a difference' is always BS. I already referenced a perfect example: an Android app allowing silly things. The author of at least the text clearly has not the slightest idea how VM (virtual memory) works in Linux; the whole thing is based on a bunch of totally wrong assumptions ('swap is bad', 'cleaning' memory, vm.swappiness being some sort of percentage setting, Android developers being total morons for shipping their devices defaulting to 'very slow'). The icon showing a HDD already tells the whole sad story, and the totally useless recommendation to reboot for this setting to take effect demonstrates what a bunch of BS this all is. And you'll find a lot of people telling you their phone got faster after they switched from 'Very slow' to 'Fast'. It's simple psychology.

Same with server migrations over the weekend that didn't happen for whatever reason (missing prerequisites): on Monday users will report that the server got faster ('Really good job, guys!') and at the same time they will report that a printer in their department doesn't work any more (the one that was already replaced by another one two weeks ago). Been there, experienced this so many times that in the meantime we do not announce server migrations any more. If we're talking about expectations and efforts made, human beings literally feel that something improved (it must have improved, since otherwise no efforts would have been taken, true?).

Same with 'change'. Change is evil, so better let's stay with a somewhat undocumented default setting from 15 years ago that almost everybody fails to understand, for the simple reason that it looked like a reasonable default to one or even a few kernel developers back at that time, in combination with all the other VM tunables, when swap was HORRIBLY EXPENSIVE since it happened on ultra slow HDDs.
Talking about 'the other VM tunables': why is no one talking about these guys but only about vm.swappiness?

root@nanopim4:~# ls -la /proc/sys/vm/
total 0
dr-xr-xr-x 1 root root 0 Sep 19 15:47 .
dr-xr-xr-x 1 root root 0 Jan 18  2013 ..
-rw-r--r-- 1 root root 0 Sep 25 11:55 admin_reserve_kbytes
-rw-r--r-- 1 root root 0 Sep 25 11:55 block_dump
--w------- 1 root root 0 Sep 25 11:55 compact_memory
-rw-r--r-- 1 root root 0 Sep 25 11:55 compact_unevictable_allowed
-rw-r--r-- 1 root root 0 Sep 25 11:55 dirty_background_bytes
-rw-r--r-- 1 root root 0 Sep 25 11:55 dirty_background_ratio
-rw-r--r-- 1 root root 0 Sep 25 11:55 dirty_bytes
-rw-r--r-- 1 root root 0 Sep 25 11:55 dirty_expire_centisecs
-rw-r--r-- 1 root root 0 Sep 25 11:55 dirty_ratio
-rw-r--r-- 1 root root 0 Sep 25 11:55 dirtytime_expire_seconds
-rw-r--r-- 1 root root 0 Sep 25 11:55 dirty_writeback_centisecs
-rw-r--r-- 1 root root 0 Sep 25 11:55 drop_caches
-rw-r--r-- 1 root root 0 Sep 25 11:55 extfrag_threshold
-rw-r--r-- 1 root root 0 Sep 25 11:55 extra_free_kbytes
-rw-r--r-- 1 root root 0 Sep 25 11:55 hugepages_treat_as_movable
-rw-r--r-- 1 root root 0 Sep 25 11:55 hugetlb_shm_group
-rw-r--r-- 1 root root 0 Sep 25 11:55 laptop_mode
-rw-r--r-- 1 root root 0 Sep 25 11:55 legacy_va_layout
-rw-r--r-- 1 root root 0 Sep 25 11:55 lowmem_reserve_ratio
-rw-r--r-- 1 root root 0 Sep 25 11:55 max_map_count
-rw-r--r-- 1 root root 0 Sep 25 11:55 min_free_kbytes
-rw-r--r-- 1 root root 0 Sep 25 11:55 mmap_min_addr
-rw------- 1 root root 0 Sep 25 11:55 mmap_rnd_bits
-rw------- 1 root root 0 Sep 25 11:55 mmap_rnd_compat_bits
-rw-r--r-- 1 root root 0 Sep 25 11:55 nr_hugepages
-rw-r--r-- 1 root root 0 Sep 25 11:55 nr_overcommit_hugepages
-r--r--r-- 1 root root 0 Sep 25 11:55 nr_pdflush_threads
-rw-r--r-- 1 root root 0 Sep 25 11:55 oom_dump_tasks
-rw-r--r-- 1 root root 0 Sep 25 11:55 oom_kill_allocating_task
-rw-r--r-- 1 root root 0 Sep 25 11:55 overcommit_kbytes
-rw-r--r-- 1 root root 0 Sep 19 15:47 overcommit_memory
-rw-r--r-- 1 root root 0 Sep 25 11:55 overcommit_ratio
-rw-r--r-- 1 root root 0 Sep 25 08:15 page-cluster
-rw-r--r-- 1 root root 0 Sep 25 11:55 panic_on_oom
-rw-r--r-- 1 root root 0 Sep 25 11:55 percpu_pagelist_fraction
-rw-r--r-- 1 root root 0 Sep 25 11:55 stat_interval
-rw-r--r-- 1 root root 0 Sep 25 06:31 swappiness
-rw-r--r-- 1 root root 0 Sep 25 11:55 user_reserve_kbytes
-rw-r--r-- 1 root root 0 Sep 24 15:58 vfs_cache_pressure
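For reference, all of these tunables can be inspected and changed like any other sysctl. A minimal sketch (the file name 99-vm.conf is an arbitrary example; writing needs root):

```shell
# read the current values
sysctl vm.swappiness vm.vfs_cache_pressure
# change at runtime (root required, takes effect immediately -- no reboot)
sysctl -w vm.swappiness=100
# persist across reboots
echo 'vm.swappiness=100' > /etc/sysctl.d/99-vm.conf
```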
  6. Powered via Micro USB from a USB port of the router next to it (a somewhat recent Fritzbox). Storage was a 128 GB SD card, no peripherals; cpufreq settings limited the SoC to 1.1V (fixed 912 MHz), so maximum consumption was predictable anyway (way below 3W). I could've allowed the SoC to clock up to 1.2 GHz at 1.3V, but performance with this specific use case (torrent server) was slightly better with fixed 912 MHz compared to letting the SoC switch between 240 MHz and 1200 MHz as usual. I used one of my 20AWG rated Micro USB cables, but honestly, when it's known that the board won't exceed 3W anyway, every Micro USB cable is sufficient.
  7. BTW: Once we can collect some experience with the new settings I have two more tunables on my TODO list: one for adjusting from HDD behavior to zram, and vm.vfs_cache_pressure (playing with this based on use case). But since we support more and more 64-bit boards, the best method to better deal with available DRAM in low-memory conditions would be using a 32-bit userland. But that's something literally no one is interested in or aware of.
  8. Ok, this is what you were explaining about vm.swappiness: this is what the majority of 'people on the Internet' will also tell us (see here for a nice example which couldn't be more wrong), while someone else, linking to the swap_tendency = mapped_ratio/2 + distress + vm_swappiness formula, put it this way: So given that vm.swappiness is mostly understood wrongly -- in particular it is not what it seems to be (a percentage of 'free memory', as you were explaining) -- and that the only reason to recommend vm.swappiness=60 seems to be the usual 'hey, let's do it this way since... grandma already did it the same way', I think there is not much reason to continue this discussion on this basis, especially given that there is zero productive contribution to such issues as 'making better use of available DRAM'.

We are still not talking about swapping out pages to ultra slow HDD storage but about sending them to a compressed block device in RAM. There is NO disk access involved, so why should we care about experiences from the past involving spinning rust and 'bad real-world experiences' with high application latency? Everyone I know experimenting with zram in low memory situations ended up setting vm.swappiness to the maximum. Unless someone does some tests and is able to report differences between vm.swappiness=60 and 100 wrt 'application performance' there's no reason to change this. We want swapping to happen since in our case it is not ultra slow storage access but compressing memory (making better use of the amount of physical DRAM).

Of course not. We still want to make better use of RAM, so swapping (compressing memory) is essentially what we want. If the value is really too high we'll know soon and can then adjust it to whatever is more appropriate. But on the basis that vm.swappiness is simply misunderstood, that the default dates back to the days we swapped to spinning rust, and that today we want swapping to happen to free up RAM by compressing portions of it, I see no further need to discuss this now. If users report soon, we'll see... I had some hope you would come up with scenarios for how to reasonably test this stuff (you mentioned MariaDB)...
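The referenced formula can be played with directly. In the old 2.6-era reclaim code, mapped pages only became eligible for swap-out once swap_tendency reached 100; a sketch with assumed sample values (mapped_ratio 50, distress 0 are made-up inputs for illustration):

```shell
# swap_tendency = mapped_ratio/2 + distress + vm_swappiness (old 2.6 kernels)
awk -v mapped_ratio=50 -v distress=0 -v swappiness=60 'BEGIN {
    st = mapped_ratio / 2 + distress + swappiness
    verdict = (st >= 100) ? "swap out mapped application pages" : "reclaim page cache only"
    printf "swap_tendency = %d -> %s\n", st, verdict
}'
```

With these inputs the result is 85, i.e. below the threshold: so even a 'high looking' vm.swappiness=60 does not by itself mean application memory gets swapped -- which is exactly why treating it as a percentage is wrong.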
  9. My understanding of vm.swappiness based on this and that is quite the opposite. It is not a percentage as explained here. My understanding is that the whole relevance of vm.swappiness is the trade-off between swapping out application memory vs. dropping filesystem caches (influenced by vm.vfs_cache_pressure, which defaults to 100). 'Everyone on the Internet' is telling you that high vm.swappiness values favor high IO performance over app performance/latency. Every test I've made so far tells a different story as long as 'zram only' is considered. The tests I already did were with heavy memory overcommitment (running jobs that require 2.6 GB physical memory with just 1 GB RAM available). All the tests I did with lower swap activity also told me to increase vm.swappiness to the maximum possible (since apps finish faster).

Another 4 hours spent on the following test: again compiling the ARM Compute Library, this time on a NanoPi M4 with 4 GB DRAM and kernel 4.4. First test with an almost empty filesystem cache (no swapping happened at all, even with vm.swappiness=100 -- just as expected):

real 31m54.478s
user 184m16.189s
sys 5m57.955s

Next test after filling up filesystem caches/buffers to 2261 MB, with vm.swappiness=100:

real 31m58.972s
user 184m40.198s
sys 5m58.623s

Now the same with vm.swappiness=60, again filling up caches/buffers to 2262 MB:

real 32m1.119s
user 184m28.312s
sys 5m59.243s

In other words: light zram based swapping due to filesystem caches already being somewhat filled increased benchmark execution time with vm.swappiness=100 by 4.5 seconds or ~0.2%. By switching to vm.swappiness=60 execution time increased by another 2.2 seconds (or ~0.1%). Rule of thumb when benchmarking: differences lower than 5% should be considered identical numbers.

So I still need a testing methodology that could convince me that a Linux kernel default made 15 years ago -- when we had neither intelligent use of DRAM (zram/zswap) nor fast storage (SSDs), but only horribly low performing spinning rust that is magnitudes slower than accessing DRAM -- makes any sense any more. Background monitoring for vm.swappiness=100 and vm.swappiness=60. @Tido since I don't want to waste additional hours on this I hope you choose an appropriate testing methodology and monitoring when you provide insights into why you feel vm.swappiness=100 is the wrong number.

Just for the record: while zram is a nice improvement I still consider the situation with Linux (or Android, where zram originates from) horrible compared to other operating systems that make way better use of physical RAM. When I run any Mac with Linux I need twice the amount of RAM compared to running the same hardware with macOS, for example (most probably applies to Android vs. iOS as well, prior to the latest Android changes to finally make better use of RAM and allow the majority of Android devices to run somewhat smoothly with as little as 1 GB)
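For anyone who wants to run such an A/B comparison themselves, a simple harness might look like this. A sketch only: the workload argument is a placeholder for a real job (e.g. a compile), the run count of three is arbitrary, and writing the tunable needs root:

```shell
#!/bin/bash
# Hypothetical A/B harness: run the same workload under two vm.swappiness
# values and print wall-clock times for comparison.
WORKLOAD="${1:-sleep 1}"   # replace with a real job, e.g. your compile command
for sw in 60 100; do
    # setting the tunable needs root; warn and continue otherwise
    { echo "$sw" > /proc/sys/vm/swappiness; } 2>/dev/null || \
        echo "could not set vm.swappiness=$sw (not root?)" >&2
    for run in 1 2 3; do
        sync                          # start every run from a comparable state
        start=$(date +%s.%N)
        $WORKLOAD >/dev/null 2>&1
        end=$(date +%s.%N)
        awk -v s="$start" -v e="$end" -v sw="$sw" -v r="$run" \
            'BEGIN { printf "swappiness=%s run=%s elapsed=%.2fs\n", sw, r, e - s }'
    done
done
```

Run the usual background monitoring (iostat, free) alongside; per the rule of thumb above, differences below 5% between the two value sets should be treated as noise.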
  10. Well, but that's what the extensive monitoring is for: to be able to throw results away quickly (no fire&forget benchmarking that only produces numbers without meaning). The kernel reported high %sys and %io activity starting at the end of the single threaded 7-zip benchmark, and that's the only reason your 7-zip numbers are lower than mine made on the PineH64. Same H6, same type of memory, same u-boot, same kernel, same settings --> same performance. IMO no more tests with improved cooling are needed. The only remaining question is how good the heat dissipation of both boards would be under otherwise identical environmental conditions (PineH64's PCB is rather huge, and on both boards the copper ground plane is used as a 'bottom heatsink' dissipating the heat away; most probably the PineH64 performs way better here with an appropriate heatsink due to the larger PCB). But such a test is also pretty useless since the results are somewhat predictable (larger PCB wins) and the type of heatsink and whether there's some airflow around or not will be the more important factors. If heat dissipation problems are solved, both boards will perform absolutely identically.
  11. Exactly the same numbers as PineH64, which is not much of a surprise given the same type of memory is used with the same settings. Your 7-zip numbers seem to be lower but that's just some background activity trashing the benchmark, as the monitoring reveals. If you see %sys and especially %iowait percentages in the monitoring output, you know you need to repeat the benchmark and stop as many active processes as possible prior to benchmark execution:

System health while running 7-zip multi core benchmark:

Time        CPU     load  %cpu  %sys  %usr  %nice  %io  %irq  Temp
16:15:10: 1800MHz   5.63   23%    1%   18%    0%    3%   0%  25.0°C
16:15:31: 1800MHz   5.05   84%    1%   83%    0%    0%   0%  48.2°C
16:15:51: 1800MHz   4.77   86%    1%   84%    0%    0%   0%  43.1°C
16:16:31: 1800MHz   5.15   88%   15%   53%    0%   19%   0%  39.6°C
16:16:51: 1800MHz   4.94   80%    1%   78%    0%    0%   0%  42.7°C
16:17:11: 1800MHz   4.82   92%    1%   89%    0%    0%   0%  45.0°C
16:17:31: 1800MHz   4.64   87%    1%   85%    0%    0%   0%  41.9°C
16:17:52: 1800MHz   4.74   94%   16%   72%    0%    5%   0%  43.8°C
16:18:13: 1800MHz   4.69   81%    1%   80%    0%    0%   0%  48.6°C
16:18:33: 1800MHz   4.56   86%    1%   84%    0%    0%   0%  39.5°C
16:19:28: 1800MHz   6.93   84%   12%   38%    0%   34%   0%  31.2°C

I bet unattended-upgrades was running in the background (you could check /var/log/dpkg.log)
  12. Since it started here you should read the whole thread first:
  13. I started this 'zram on SBC' journey more than 2 years ago: testing with GUI use cases on PineBook, searching for other use cases that require huge amounts of memory, testing with old as well as brand new kernel versions, and ending up with huge compile jobs as an example where heavy DRAM overcommitment is possible and zram shows its strengths. Days of work, zero help/contributions from others until recently (see @botfap's contribution in the other thread). Now that, as a result of this work, a new default is set, additional time is needed to discuss feelings and beliefs? Really impressive...

Care to elaborate what I did wrong when always running exactly the same set of 'monitoring' with each test (using a pretty lightweight 'iostat 1800' call which simply queries the kernel's counters and displays some numbers every 30 minutes)? Why should opinions matter if there's no reasoning provided? I'm happy to learn how and what I could test/modify again, since when starting this zram journey with GUI apps I had no way to measure different settings -- everything is just 'feeling' (with zram and massive overcommitment you can open 10 more browser tabs without the system becoming unresponsive, which is not news anyway but simply as expected). So I ended up with one huge compile job as a worst case test scenario.

I'm happy to learn in which situations with zram only a vm.swappiness value higher than 60 results in lower performance or problems. We're talking about Armbian's new defaults: that's zram only, without any other swap file mechanism on physical storage active. If users want to add additional swap space they're responsible for tuning their system on their own (and hopefully know about zswap, which seems to me the way better alternative in such scenarios), so now it's really just about 'zram only'. I'm not interested in 'everyone will tell you' stories or 'in theory this should happen' but real experiences. See the reason why we switched back to lzo as the default for zram, even if everyone on the Internet tells you that would be stupid and lz4 always the better option.
  14. I don't get your conclusion. The kernel has no idea what's going on. Look at your own output: you suffer from the max real cpufreq being limited to 1200 MHz once the SoC temperature exceeds 60°C (you can 'fix' this by adding 'temp_soft_limit=70' to /boot/config.txt and then rebooting to let ThreadX switch back to the old behavior), and as soon as 80°C is exceeded, fine grained throttling starts further decreasing the real ARM clockspeeds while the kernel still reports running at 1400 MHz, since the mailbox interface between kernel and ThreadX returns requested and not real values.
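On the RPi this discrepancy is easy to demonstrate, since the firmware can be asked for the real values. A sketch (assumes the vcgencmd tool shipped with Raspbian):

```shell
# what the kernel *thinks* it runs at (the requested value, in kHz)
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
# what ThreadX actually clocks the ARM cores at (in Hz)
vcgencmd measure_clock arm
# throttle status bit field: bit 0x2 set = ARM frequency currently capped
vcgencmd get_throttled
```

Under load with the SoC above 60°C the two frequency readings diverge, which is exactly the effect described above.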
  15. Really looking forward to this HAT BTW: I've not the slightest idea how much effort and initial development cost is needed for such a HAT. But in case the Marvell based 4-port SATA solution turns out somewhat expensive, maybe designing another one with an ASMedia ASM1062 (PCIe x2 and 2 x SATA) and just one Molex for 2 drives might be an idea. Could be one design made for the NEO4 that works on the M4 too with a larger (or dual-purpose) HAT PCB?
  16. With Edge+Captain I would agree. But then Khadas realized that they need something like Edge-V in Vim/Vim2 form factor for end users. Still the necessary accessories to reliably power the board (fully USB PD compliant USB-C charger) and to provide proper heat dissipation are IMO way too expensive. I don't want to spend ~60 bucks for PSU + fansink and then pay an additional 110 bucks for an RK3399 design with just 2 GB RAM.
  17. Depends. IMO the vast majority of problems with suddenly dying flash media (SD cards or USB pendrives) is related to fake flash: flash memory products that fake their capacity, so all of a sudden they stop working once the total amount of data written exceeds the drive's real capacity (see here for a tool to check for this). If you manage to buy genuine flash products (it's not the brand that matters but where you buy -- with eBay, Aliexpress & Co. the chances of getting fake flash are most probably well above 50%), then there are still huge differences. Pine's cheap FORESEE eMMC modules with low capacity are way slower than the Samsung or SanDisk modules Pine and other SBC vendors use for their higher capacity modules. But no idea about reliability, since AFAIK all you can do here is trust and believe; without extensive testing it's impossible to predict the longevity of SD cards, eMMC and USB flash storage. My personal take on this:
- trying to minimize writes to flash storage ('Write Amplification' matters here -- keeping stuff like logs in RAM and only writing them once per hour, and so on)
- when using low-end flash storage, preferring media that supports TRIM (that's SD cards and most probably also eMMC supporting ERASE CMD38 -- you can then TRIM them manually from time to time, and maybe we get filesystem drivers in Linux that support TRIM on (e)MMC too in the future)
- weighing eMMC against A1 rated SD cards
- if huge amounts of important data need to be written to the media, always using SSDs that can be queried with SMART (the better ones provide a SMART attribute that indicates how much the storage is already worn out)
Some more info:
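Whether a given device advertises discard (TRIM) support can be checked quickly; a sketch (the device name is an example, adjust to your SD card or eMMC):

```shell
# non-zero DISC-GRAN / DISC-MAX columns mean the device supports discard (TRIM)
lsblk --discard /dev/mmcblk0
# if supported, trim a mounted filesystem manually from time to time:
fstrim -v /
```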
  18. Huh? This script is not 'tuned' whatsoever. It basically sets up some zram devices in an outdated way (recent kernels do not need one zram device per CPU core; this could even have negative effects on big.LITTLE designs, and that's why we made all of this configurable in Armbian via /etc/default/armbian-zram-config). vm.swappiness... the 'default' is from 100 years ago when we had neither fast flash storage nor compressed zram block devices. Back then swapping happened on spinning rust! With zram any value lower than 100 makes no sense at all.
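For comparison, a modern minimal zram swap setup needs just a single device. A sketch (disksize and swap priority are arbitrary example values, everything needs root, and attributes must be written before disksize initializes the device):

```shell
# one zram device with multiple compression streams
# instead of the outdated one-device-per-core scheme
modprobe zram num_devices=1
echo lzo > /sys/block/zram0/comp_algorithm      # or lz4/zstd, kernel depending
echo "$(nproc)" > /sys/block/zram0/max_comp_streams
echo 512M > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon -p 100 /dev/zram0                        # prefer zram over any disk swap
```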
  19. Retail prices have been disclosed so we can close this chapter now: USB-C compliant powering means you need a real USB PD compliant charger, and since heat dissipation is an issue with RK3399, you also need to spend an additional +20 bucks on their fansink to keep the board cool.
  20. Sorry, I don't find the time to answer all of this in detail. My opinion on Pinebook and A64 in general: EOS for more than one reason. Also you seem to miss that I used the A64 legacy kernel as an example where at least one developer took the time to rebase the vendor BSP mess on top of an outdated mainline kernel version, so at least it was possible to get an idea of what Allwinner changed where. A64 devices are toys; the majority of users who play with them don't care about security anyway. The same could be said about H3, and I really wonder why you don't get how things changed over time. At the end of 2015 Armbian supported just a few boards, and H3 devices looked somewhat promising based on features and costs. Back then we took another Allwinner BSP and simply added the missing 3.4.x patches on top (and at the same time a few of us already started to use H3 with mainline kernel). To be clear: THIS WAS A MISTAKE (as could be seen just a few months later --> rootmydevice). Why should we repeat mistakes? Because we're totally stupid? Or just too dumb to learn from mistakes? Seriously?

Are you able to spot the difference between Rockchip's 4.4 BSP kernel, which is based on a clean Linux mainline version (see the +609,000 commits), and RealTek's code drop that is now available without any history? The difference between Rockchip as one of the few ARM vendors who learned some lessons pretty fast and became open source friendly, while there's nothing (especially zero experience) so far with RealTek? And you talk about standards and 'double standards'? As in 'now we need to support this new platform since the board vendor did what he had to do anyway'. Yeah! Sure, let's add more SoC families to Armbian. That's what is truly needed. More boards, less quality. But hey, since this project suffers from a total lack of agreed project goals, such useless babbling will happen over and over again.

I have no idea why we currently support way more boards than we can handle, why constantly new stuff that requires enormous efforts should be added, and why we are talking about stuff like this anyway.
  21. Exactly. So no Armbian support for RTD1295 or RTD1296. My last actions around this SoC were pushing the BPi folks to stop behaving stupidly (switching from 'let's wait a bit' blabla to opening sources and hopefully in general moving to a 'release early, release often' cycle) and informing Andreas about the repo. No more interest since the initial efforts are way too high. Let's have another look in 2020. In related news, FriendlyELEC confirmed they're working on a 4 port NAS HAT for the NanoPi M4 using the Marvell 88SE9235 controller.
  22. The card is here:

### quick iozone test:
    4    1175    2216    8181    8187    8813     186

That's just 186/4 --> ~46 random write IOPS with 4k blocksize. Seriously way too low for good rootfs performance. The Samsung EVO/EVO+ recommendation hasn't been valid since 2017; A1 rated cards are the only things to buy. What's also interesting: 'Armbian ramlog partition with lz4 compression' turned into lz4hc later. I need to check this again. On the other hand, once the kernels are upgraded to 4.18 or above we'll have zstd anyway, so most probably not worth the effort to analyze...
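The conversion from iozone's throughput numbers to IOPS is straightforward: iozone reports KB/s, so at a 4 KB record size you divide by 4.

```shell
# 186 KB/s random write at 4 KB record size -> IOPS = KB/s / 4
awk -v kbs=186 'BEGIN { printf "%.1f random write IOPS\n", kbs / 4 }'
```

Good A1 rated cards manage several hundred random write IOPS at this blocksize, which is why this result disqualifies the card for rootfs use.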
  23. 'armbianmonitor -u' or it doesn't exist
  24. Sorry, I was confused all the time. We only added the 1.4 GHz OPP back then: And even more confusion since Igor added 1.5 GHz later: Does it work now after updating to 5.60?
  25. Anyone interested in the RTD1295/RTD1296 platform would need to do something like this first:
- check out the upstream mainline kernel at version 4.9.119
- import the changes from RealTek's kernel above (you then end up with one giant commit like this -- context)
- now spend some days or weeks looking at what's different, where security holes exist (remember Allwinner's 'rootmydevice'?) and what's not properly licensed
If this step is skipped there is no reason to trust RealTek's 4.9 at all. Especially if this step is skipped it's impossible to start with Armbian support for RTD1295/RTD1296 devices, since it would harm the project to repeat mistakes like with Allwinner's H3 BSP kernel (where something like rootmydevice was only discovered by accident). We can't trust Android vendors' BSP kernels, since their employees, forced to hack driver support into something they're not interested in (the Linux kernel and its concepts), never worked with mechanisms like peer review or code quality control. If those employees had started to submit patches upstream, where all this stuff happens (the Linux kernel community), then this would be something entirely different. But RealTek, just like Allwinner and the majority of ARM SoC vendors, has no interest in properly upstreaming their code, so this code can't be trusted. If their stuff is not properly licensed this will most likely prevent developing mainline drivers for the affected hardware (but I doubt anyone will do this anyway, since so far only Suse's Andreas Färber has worked on the platform -- I tried to inform him via CNX but have no idea whether he's aware that some RealTek kernel sources are now provided). Having the sources now does not mean everything's fine. The next step is looking at the mess contained within (if anyone wants to spend weeks of their life on something like this).