Everything posted by tkaiser

  1. And another nice example where you can look at pictures or inside the 'technical documentation' and not get the slightest idea of what's possible: The Banana folks let someone design another board for them, this time based on RTD1296 (which should be a way better choice for storage/NAS use cases than RK3399 since it's 'made for the job'). Three M.2 slots on this PCB. On the top side there are two M.2 E key slots (the key allows for various protocols including PCIe 2.x x2), one labeled 'PCIe 2.0' (or, on the detail page, 'PCIe 2.2', which doesn't exist... but hey, it's SinoVoip, who simply don't give a shit about providing correct information) and one labeled PCIe 1.1/SDIO. What about lanes used? Is it PCIe 2.0 x2 or x1 (hey, just the difference between 500 MB/s and 1000 MB/s)? On the bottom side there's what looks like a B key slot, so theoretically capable of 'PCIe x2, SATA, USB 2.0 and 3.0, audio, UIM, HSIC, SSIC, I2C and SMBus'. It's labeled 'M.2', which is plain stupid since that tells NOTHING, but at least the SIM slot nearby and the marking suggest this is USB2 only, meant for WWAN modems (which usually appear as a few devices on the USB bus)
  2. And it took him only one year to get from r6p0 to r6p2. If things keep progressing that nicely I assume we'll have 64-bit blobs for Mali450 already in 2019! What about video decoding?!?!! Just kidding, though that's what 99% of people now associate with this 'news'. And if I wanted to make use of Mali acceleration for the use cases above, why would I rely on these horribly slow Mali4x0 thingies anyway? Mali-T6xx seems to me like the (s)lowest hardware to go with...
  3. Maybe just a single use case that could benefit from what happened now, only 5 years too late (not talking about this stupid benchmark with rotating horses). And who knows whether the type of license 'allows' these blobs to be used with Mali450 in case they are 64-bit ones (AFAIK it's either/or)? https://www.cnx-software.com/2017/09/26/allwinner-socs-with-mali-gpu-get-mainline-linux-opengl-es-support/#comment-546293
  4. Congratulations: http://archive.is/osCuS Cortex-A5 (!), 'it can easily run with the game it support...' (WTF?), on the 'hardware spec' page you're talking about '1 pcie 2.0 interface' and '1* M.2 interface' (which is absolutely meaningless), '2x USB 2.0 OTG' (while it's host), somewhere else PCIe 2.0 becomes 2.2, which does not exist... and so on. Regardless of whether your hardware is good or not, by providing such an insane collection of BS and still allowing your copy&paste monkey to spit out random words without meaning, you prevent anyone in their right mind from supporting your products (and it's also very hard to believe this is hardware you designed). Why don't you hire someone able to write technical documentation?
  5. https://www.armbian.com/orange-pi-prime/ Where do you read 'beta' or anything else that isn't hidden in a wall of text and tells users that if they're not developers they should go away (or use at their own risk)? 'Experimental' and 'nightly' are words without meaning. Experimental: people download, burn, start -- experiment succeeded. Nightly? WTF, don't care, it's day now. Developers think differently (they know that a nightly build is untested) but users don't, so these labels fail to reduce the time wasted on useless support. I don't care that much whether we use 'unstable' or 'beta' as long as we get rid of 'nightly', which is meaningless. And as we've already seen, motd stuff that appears only once is useless, same with warnings on pages, intros above every forum and so on. We fail to be more drastic with all this time-wasting stuff. Why not leave the motd stuff on, why not change first login so that prior to user creation on those 'not for end users' images they have to walk through an annoying dialog asking them three simple questions to check whether they understood what they're about to do?
  6. Why should developer speak or Debian release jargon affect users in this case? In my experience users understand and hate the meaning of 'beta' ('Ha, those lazy developers want me to do their work! Wasting my time as beta tester!'), and they also learned to fear 'x.0' versions -- which is why in the meantime some (or maybe even many?) commercial software vendors release an early beta as x.0 and the final betas as x.1, to get some valuable feedback from their more adventurous customers out there. If we don't switch perspective no improvement will be possible.
  7. Interesting, thanks. But that doesn't change much wrt the 'marketing BS' spread by their Kickstarter campaign, since everything connector-related there is just misleading (and to make use of x4 PCIe you would need an M.2 M key anyway; B key is limited to x2). So we have two PCIe lanes wasted on the Firefly, most probably 4 on the VideoStrong, and no idea about the Xunlong board (an ASM106x sits there with just one SATA port connected, so it would be somewhat strange to waste 2 lanes on an ASM1062, but who knows? Xunlong also provides an mPCIe slot, and since it sits far away from the ASM106x I would believe that's mPCIe with another PCIe lane used), and all those RK3399 TV boxes will ignore PCIe for sure. Maybe the only RK3399 device making full use of all 4 PCIe lanes is Theobroma's RK3399-Q7 (combined with the carrier board that even provides a real PCIe slot). I still doubt the performance/price ratio justifies looking at RK3399 for storage/NAS use cases (at least as long as 'USB3-to-SATA' is not considered an option).
  8. In the meantime I'm thinking about modifying the firstrun script in the following way: check SD card info, and when metadata is questionable, capacity is below 8 GB or random 16K write IOPS are below 25, immediately enter 'self destruction' mode, preventing Armbian from being installed. If the SD card passes the check, then enter a 120 second cpuburn stage to crash the installation on systems with insufficient power supplies. But that's obviously just me, nothing will change, and leaving the forum is the better alternative
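The gating idea above could be sketched as a small shell helper (hypothetical code; only the two thresholds named in the post are used, and the cpuburn stage is left out):

```shell
#!/bin/sh
# Hypothetical firstrun gate: refuse installation when the SD card is too small
# or its random 16K write IOPS are below the threshold from the post.
sd_card_ok() {
    capacity_gb=$1      # card capacity in GB
    write_iops_16k=$2   # random write IOPS measured with 16K blocks (e.g. via iozone)
    if [ "$capacity_gb" -lt 8 ] || [ "$write_iops_16k" -lt 25 ]; then
        echo "self-destruct"   # card fails the check, abort installation
    else
        echo "proceed"         # card passed; next stage would be the cpuburn test
    fi
}
```

A real firstrun hook would of course parse the IOPS number out of an actual iozone run instead of taking it as a parameter.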
  9. Huh? I really don't know what you're talking about (PCIe 1.0? Why? 400Gb/s? What?). If you refer to the Firefly Kickstarter page, that's just an insane collection of BS. I only know that they use an M.2 B key slot which is able to transport a couple of totally different protocols (see here). M.2 B key can be used for either PCIe 2.x x2 or SATA or USB3 (or maybe it's not 'or' but 'and' instead, and the high-speed data lines are different ones, allowing more than one such protocol at the same time). Mentioning this theoretical SATA capability is of course misleading Firefly marketing, since RK3399 has no SATA capabilities, and for SATA to be usable on the M.2 slot something is missing: a PCIe SATA controller eating another PCIe lane would have to sit somewhere on the Firefly (as usual, Kickstarter marketing was successful in fooling a lot of users into trying out M.2 SATA, which simply can not work on this board). Then there's a slot that looks like mPCIe, but the problem with this thing is that it can be used with 4 totally different types of cards:
mSATA: a directly connected SATA device (not on the Firefly, of course)
mPCIe: a single PCIe 2.x lane exposed
USB2: only power and pins 36/38 of the mPCIe connector are used and the device is USB only (but will often be marketed as a 'PCIe device' -- see almost all such 'WWAN modems')
PCIe+USB2: since PCIe data lines and USB2 data lines are separate (unlike mSATA vs. mPCIe, where they are shared), cards could use both PCIe and USB2 at the same time
What's that thing on the Firefly doing? At least I neither know nor care (but if only 'LTE' is mentioned it could be USB2 only!). RK3399 is a great SoC for many use cases (attach displays, attach cameras, attach USB peripherals and use it for what it's designed for: consuming media), but unless someone provides real performance numbers when combining the SoC with reasonable storage controllers I think it's better to ignore this potential use case here.
And 'providing real performance numbers' is IMO not that easy, since it needs both the necessary equipment and skills (experts in the embedded area are usually missing)
  10. Of course. And every occurrence of the highly misleading string 'nightly' should be replaced with 'untested_beta' everywhere. In filenames, on download pages, everywhere. And in case a desktop image is provided, or the image allows installing such crap, we need a new background picture in red with huge black letters on it yelling 'You're using an UNTESTED BETA that only exists to provide constructive feedback. Things are expected to break anytime.'
  11. https://forum.armbian.com/index.php?/topic/838-best-budget-device-as-torrent-box/&do=findComment&comment=6427 TL;DR: a transmission-daemon set up to write to storage that is insanely slow wrt RANDOM IO will slow the system down so much that it seems to hang. Whether that's the problem with your setup iozone will show. How/why? Do a web search for 'SD4GB manfid oemid' for example. Everything with only zeroes in manfid/oemid is not genuine but counterfeit/broken or cheap crap in the first place (and if a 'genuine' SD card starts to 'forget' who the manufacturer was, I would assume it soon also starts to forget the data written to it). C'mon, I asked you to TEST your card for performance and whether it's OK (and of course to provide the results). I gave you a link where it's explained how easy this is, since it's really just doing this:
cd $HOME ; iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
armbianmonitor -c $HOME
We need these test results to get an idea whether we can ALWAYS assume that if your card shows horribly low random IO performance (which I assume, especially with 16K block size) then it's able to slow your system down so much that it seems to 'hang'. We need a way to avoid wasting our time with the same issues over and over again (SD card crappiness and power supply issues).
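The two steps above, wrapped into a guarded script (a sketch; note that 'armbianmonitor -c' is a separate command, not an argument to the iozone invocation):

```shell
#!/bin/sh
# Run the SD card checks from the linked thread, guarding against missing tools.
run_sd_tests() {
    cd "$1" || return 1
    if command -v iozone >/dev/null 2>&1; then
        # sequential + random read/write over 4k/16k/512k/1024k/16384k blocks
        iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
    else
        echo "iozone not installed"
    fi
    # separate step: armbianmonitor's own card check (ships with Armbian)
    if command -v armbianmonitor >/dev/null 2>&1; then
        armbianmonitor -c "$1"
    fi
}
```

Called as `run_sd_tests "$HOME"`; the 16K random write line of the iozone output is the one that matters for the 'seems to hang' symptom.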
  12. No, please check https://forum.armbian.com/index.php?/topic/1925-some-storage-benchmarks-on-sbcs/&do=findComment&comment=34192 USB3 is there on more recent ARM SoCs since they can license USB3 IP blocks now and no longer have to pay too much for them. Same with PCIe: it's there, you can connect stuff to it, it allows the IDH who do the job to be creative and add peripherals they couldn't in the past due to not enough high-speed buses, but that's it. A TV box SoC like RK3328 surprisingly shows insanely great USB3 performance (and RK3399 will do the same, see [1]), but why should this apply to PCIe on a tablet SoC like the RK3399? Look at the link above and at i.MX6:
Native SATA II: a native transfer rate of 3.0 Gbit/s that, after accounting for the 8b/10b encoding scheme, equals a maximum uncoded transfer rate of 2.4 Gbit/s (300 MB/s). In reality we get a third of that. But SATA II is an improvement over SATA I (limited to 150 MB/s) since it defined NCQ, so workloads with many small disk accesses could benefit greatly from this feature (never tested in detail since i.MX6 is boringly slow for my use cases)
Native Gigabit Ethernet: the standard talks about a 1 Gbit/s signal rate, and all we get with i.MX6 when using the internal GbE implementation is ~400 Mbits/sec at the IP layer instead of the ~940 Mbits/sec it should be. Only by using the single PCIe 2.0 lane and a good PCIe GbE adapter is exceeding 900 Mbits/sec possible (never tested in detail since i.MX6 is boringly slow for my use cases)
Native PCIe 2.x: the standard talks about 5 GT/s per lane, again with 8b/10b encoding applied, so maximum bandwidth would be 500 MB/s, not counting any protocol overhead and funnily leaving out all performance issues related to (high) latency. The result: lowest performance again (check the Hummingboard numbers).
There's a SoC made for a specific market, there are some low-speed and some high-speed interfaces connected to it, and there are some specifications talking about signal or line rates. None of this is related to real-world performance. An EspressoBin with a dual-core SoC running at just 800 MHz will outperform any tablet or phone SoC when it's about network and/or storage. Since it's 'made for the job'. I'm not that stupid (any more) to expect from an RK3399 with its PCIe 2.x x4 interface anything near the theoretical 2000 MB/s (and focusing on bandwidth only is already stupid like hell anyway). [1] Talking to an RK engineer a few months ago about USB3 issues with ROCK64: 'still confused about this issue, because I test the USB3 drive on my RK3399 SDK (use the same xHCI host controller as RK3328 but different USB3 PHY), and I am surprised to find that it works well on RK3399.'
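The 8b/10b arithmetic used above can be captured in a tiny helper (every payload byte costs 10 bits on the wire, so the line rate in Gbit/s maps directly to hundreds of MB/s):

```shell
#!/bin/sh
# 8b/10b encoding: 10 line bits per payload byte, so
# usable MB/s = line rate in Gbit/s (or GT/s) * 1000 / 10.
line_rate_to_mb_s() {
    awk -v rate="$1" 'BEGIN { printf "%d", rate * 1000 / 10 }'
}
# SATA II at 3.0 Gbit/s  -> 300 MB/s
# PCIe 2.0 at 5 GT/s/lane -> 500 MB/s
# SATA I at 1.5 Gbit/s   -> 150 MB/s
```

These are the theoretical upper bounds before any protocol overhead, matching the 300/500/150 MB/s figures in the post; real-world numbers sit well below them.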
  13. It seems it's really an ASM1061 (using just a single PCIe 2.x lane) since my ASM1062 is listed as that:
root@clearfogpro:~# lspci
00:02.0 PCI bridge: Marvell Technology Group Ltd. Device 6828 (rev 04)
00:03.0 PCI bridge: Marvell Technology Group Ltd. Device 6828 (rev 04)
01:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 01)
02:00.0 SATA controller: Marvell Technology Group Ltd. Device 9215 (rev 11)
So we have a SoC with PCIe 2.1 x4 in the spec sheet, one manufacturer selling an x1 SATA card, and zero performance numbers for this setup. Since I've been fooled by specifications way too often already (best example: the shitty 'native SATA' performance of Allwinner SATA equipped boards like the Banana Pis), and since I doubt a tablet SoC like RK3399 has been optimized for PCIe performance, at least I'll stay with the devices that were designed for these use cases and where we know that performance is superior.
  14. I fear I still don't understand your goal (the 'building images' part, since if it's about 'Armbian images' then the only possible platform is x64). Anyway: if it's about compiling stuff highly parallel then these resources might be useful: http://wiki.ant-computing.com/Choosing_a_processor_for_a_build_farm#Analysis_on_2016.2F02.2F07
  15. Sorry, I totally fail to understand what this thread is about. The thread title talks about 'No Wifi' but the initial post confirms Wi-Fi working and talks about a potentially missing module (Bluetooth and not Wi-Fi related). Care to provide 'armbianmonitor -u' output?
  16. Since Armbian only runs on ARM devices while it can only be built on x64, that requirement is impossible: https://docs.armbian.com/Developer-Guide_Build-Preparation/#what-do-i-need Many cores/threads help with compilation, fast storage [1] helps with everything else, and the more DRAM you have the more you benefit from your host's filesystem buffers (IO not hitting the SSD) [1] both bandwidth and especially latency/IOPS -- forget about HDDs here and always use fast SSDs
  17. Just some links / thoughts, since I don't feel too comfortable jumping into a thread where ZFS on toys is starting to be discussed: https://forum.armbian.com/index.php?/topic/3946-rk3399-orange/ https://forum.armbian.com/index.php?/topic/3953-preview-generate-omv-images-for-sbc-with-armbian/#comment-32340
When thinking about wasting disks for redundancy, at least I would also start to care about data integrity (and buy a HP N54L on eBay instead of playing with large USB attached storage pools -- I don't like bit rot or single points of failure that much)
When playing with ZFS on the Clearfog Pro in the meantime: after multiplying /sys/module/zfs/parameters/zfs_dirty_data_max by 4 I get +100MB/s write and +250MB/s read with a RAIDZ consisting of 4 SSDs. Accessing this zpool while using Ethernet in parallel freezes the board; too lazy to look into that yet. But at least based on memory requirements, settling on at least ZoL 0.7 seems the way to go: https://github.com/zfsonlinux/zfs/releases/tag/zfs-0.7.0
If I wanted to attach more than a few disks in a reliable way to a somewhat affordable ARM board my choice would be the Helios4 (2GB ECC DRAM, 4 native SATA ports which can all be paired with quality Marvell FBS -- FIS-based switching -- SATA port multipliers)
But if performance is also a concern I would wait a few weeks/months and grab one of the new Denverton mainboards that support QAT (QuickAssist Technology), to be paired with ZoL 0.7 or above
BTW: I think the probability is high that at least the VideoStrong thingie does not expose PCIe at all and the mPCIe slot there is just usable with those WWAN LTE modems (mPCIe form factor but only utilizing power and the USB data lines on pins 36/38 of the mPCIe connector)
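The zfs_dirty_data_max tweak mentioned above could look like this (a sketch: it needs root and a loaded zfs module, and the multiply-by-4 helper is just arithmetic):

```shell
#!/bin/sh
# Sketch of the tuning step from the post: multiply the current
# zfs_dirty_data_max value by 4. Only effective with the zfs module loaded
# and root privileges; the helper itself is pure arithmetic.
quadruple() { echo $(( $1 * 4 )); }

PARAM=/sys/module/zfs/parameters/zfs_dirty_data_max
if [ -w "$PARAM" ]; then
    quadruple "$(cat "$PARAM")" > "$PARAM"
fi
```

Note this does not persist across reboots; a permanent setting would go into a modprobe options file instead.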
  18. You were the one running transmission? Since I've seen no connected USB storage I would assume the SD card is the culprit, since on a counterfeit/crap card this can't work well (the name suggests SanDisk while oemid/manfid show 'noname'):
### mmc0:0001 info:
cid: 0000005344344742000001a46c00f137
csd: 400e00325b5900001dbf7f800a400085
scr: 02b5800000000000
date: 01/2015
name: SD4GB
type: SD
preferred_erase_size: 4194304
fwrev: 0x0
hwrev: 0x0
oemid: 0x0000
manfid: 0x000000
serial: 0x0001a46c
uevent: DRIVER=mmcblk MMC_TYPE=SD MMC_NAME=SD4GB MODALIAS=mmc:block
erase_size: 512
I would check performance as per https://forum.armbian.com/index.php?/topic/954-sd-card-performance/ and also run 'armbianmonitor -c $HOME' (as a normal user, so not as root / without sudo)
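The all-zeroes manfid/oemid test from the readout above can be sketched as a pure shell helper (hypothetical function name; the sysfs path in the comment varies per board and kernel):

```shell
#!/bin/sh
# Flag SD cards whose manfid/oemid are all zeroes, as in the readout above.
# Kept as a pure function so it can be fed values read from sysfs.
ids_look_genuine() {
    # $1 = manfid (e.g. 0x000003), $2 = oemid (e.g. 0x5344)
    case "${1#0x}${2#0x}" in
        *[!0]*) echo "yes" ;;   # at least one non-zero hex digit somewhere
        *)      echo "no"  ;;   # all zeroes: counterfeit/noname suspect
    esac
}
# Reading the real values on a running board might look like:
# ids_look_genuine "$(cat /sys/block/mmcblk0/device/manfid)" \
#                  "$(cat /sys/block/mmcblk0/device/oemid)"
```

With the values quoted in the post (manfid 0x000000, oemid 0x0000) this prints "no".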
  19. Not necessary, since it's contained in your link: http://sprunge.us/eMED The dvfs table is wrong since it doesn't allow clocking below 648 MHz while /etc/defaults/cpufrequtils tries to idle at 240 MHz. We did not change anything in these settings within the last year: https://github.com/armbian/build/commits/master/config/fex/orangepizero.fex so the 'I tried running the fix thermal issues script that I found on this site' is responsible for the ARISC errors. I adjusted http://kaiser-edv.de/tmp/H9rWPf/fix-thermal-problems.sh just now since it's a historical piece of code suited to fix the other OS images users had to use prior to Armbian. Please try this to recover from the situation:
ln -sf /boot/bin/orangepizero.bin /boot/script.bin
then make your h3consumption adjustments, reboot and check/report. If this does NOT work, check for /boot/script.bin.bak and replace script.bin with it, or even download the latest version of orangepizero.fex from Github, convert it using 'fex2bin /path/to/orangepizero.fex /boot/script.bin' and then follow the above steps.
  20. Answered by someone else in the meantime: https://www.cnx-software.com/2017/09/25/checking-out-debian-and-linux-sdk-for-videostrong-vs-rd-rk3399-board/#comment-546250
  21. As already said: you use wrong settings and don't say which or why. If you had provided 'armbianmonitor -u' output from your image we would already know and could check how you currently operate your board. If you get ARISC errors you most probably chose the wrong settings for a board equipped with an I2C voltage regulator, and those are AFAIK clocked at 1008 MHz by u-boot. So a difference might be that you're currently running all the time at 1008 MHz, while with an image using correct settings cpufreq would be switching between 240 and 1200 MHz. But without logs this all is just an absurd waste of time
  22. Of course, the type of antenna starts to matter once distances get larger or 'line of sight' is not given. Just as an extreme example: almost half a km with an ESP8266 and appropriate antennas. I did some testing since yesterday and got a nice 'performance to stability' correlation, but it's not a causation, and I really think we should not talk about performance when we're looking for stability. I have 2 router/APs at home (Fritzboxes from the provider, one since I 'needed' ISDN and the other since the ISDN capable one was too slow to saturate a 100 Mbits/sec connection as a router -- so the first Fritzbox terminates DSL, does telephony and acts as DSL modem for the 2nd Fritzbox). I remembered hearing some rumours about the one I use just as a modem being unstable and decided to test. With 'iperf -s' running on the box I had no trouble letting an 'iperf -c -t 3000' run from my MacBook either from my neighbour's or my own flat (difference: due to distance, throughput is ~3MB/s at my neighbour's location and +10MB/s in my flat). When starting to continually download archives from the Internet this worked quite well at my neighbour's location, but I got kicked out of Wi-Fi after less than 10 minutes with the MacBook near the AP. The whole box somehow locked up, and based on touching the surface I would assume it's an overheating issue if the AP also has to play the 'Internet access router' role with NAT and some very basic firewall stuff. If the 2nd Fritzbox played AP/router, utilizing the first one only as DSL modem, I could download stuff at ~10 MB/s for hours (stopped after 2 hours). In other words: with a 'good performance' setup (wireless client close to an AP also acting as a router) I can trigger an instability issue, but the problem lives inside the AP/router box and not the client (and it's not an issue as long as the traffic just bridges LAN/WLAN, but the router/NAT engine in this thingie locks up once the Internet is accessed at high throughput speeds).
IMO we should differentiate between performance, stability/reliability and distance when we explore Wi-Fi further (taking also 'environment' into account and also the role both sides of the connection play)
  23. If you get ARISC errors then you use wrong settings anyway (most probably you created/chose an image for the wrong board, one with an I2C accessible voltage regulator). I don't really remember, but most probably in such a situation with wrong voltage regulator settings the board does not do cpufreq scaling but remains at the clockspeed set by u-boot (that's what monitoring is for: use either RPi-Monitor or 'armbianmonitor -m' to check). 3 affected people and not a single log. Regardless of which thread I visit here in the forums there's always a huge 'Before reporting problems with your board running Armbian, check the following:' intro at the top asking for 'armbianmonitor -u' output.
  24. Situation with 'thermal performance' explained:
  25. TL;DR: The small H2+/H3 boards, unlike their bigger siblings, are all prone to overheating due to smaller PCB size (on the larger boards the PCB's groundplane acts somewhat like a large heatsink dissipating heat away from the SoC). Due to mainline kernel settings not being optimized, currently all these boards are slower under constant load compared to the legacy kernel. This should change but won't unless someone looks into it and spends some time on it. Two areas that deal with this overheating tendency or are somewhat related:
thermal protection / throttling: use the thermal sensor(s) inside the SoC to downclock various engines if specific thresholds are exceeded
DVFS (dynamic voltage frequency scaling): all the small boards have either no voltage regulation (NanoPi NEO2) or a primitive one only switching between 1.1V and 1.3V
With the sun8i legacy kernel, Armbian and linux-sunxi community members spent a lot of time and effort on improving thermal/throttling performance. Read through the following as a reference please: https://github.com/armbian/build/issues/298 The result of our optimizations was a lot better performance compared to Allwinner's defaults (which targeted only Android and preferred higher single thread performance over overall better performance; with Allwinner settings on an overheating system you could end up with just one or two active CPU cores pretty easily). Now with mainline kernel the situation for the larger H3 boards is ok-ish (those boards have an I2C adjustable voltage regulator, voltage switching works fine-grained, overheating isn't much of an issue anyway and performance is almost as good as with the legacy kernel). But the situation with the smaller boards needs some attention.
If we run the cheapest boards currently with mainline kernel then we're talking about these settings: max cpufreq 1008 MHz (at 1.3V), next lower cpufreq 816 MHz at 1.1V, then 624/480/312/240/120 MHz; 4 thermal trip points defined, starting at 65°C with throttling, then 75°C and 90°C, and shutting the board down when 105°C is reached. With Armbian and the legacy kernel it's the following instead:
max cpufreq is 1200 MHz, then 1008 MHz still at 1.3V, at 912 MHz we switch to 1.1V, and there are a few other cpufreqs available between 816 MHz and 1344 MHz
Armbian's legacy kernel provides cpufreq steps every 48 MHz (allowing for fine-grained throttling)
On the small boards we use twice as many thermal trip points as the mainline settings, and our strategy is to switch to 912MHz@1.1V pretty early once throttling occurs
These differences result in both lower 'normal' performance (since mainline kernel limits even single-threaded tasks to 1GHz instead of 1.2GHz) and lower 'full load' performance, since DVFS/THS/throttling settings are not optimal and once the board reaches the first thermal trip point throttling is not as efficient as with legacy. It's easy to test: grab an OPi Zero, NanoPi Duo or any of the other H2+/H3 boards with primitive voltage regulation, then grab an Armbian OS image with legacy kernel (3.4.113 using fex settings) and one with mainline kernel. Execute on both:
sudo rpimonitor -r (installs RPi-Monitor so you can enjoy nice graphs when connecting with a web browser to port 8888 of your machine)
sudo rpimonitor -p (installs cpuminer, which is a great tool to heat up your board and also to measure 'thermal performance' since it spits out khash/s values in benchmark mode)
minerd --benchmark (this is the actual benchmark run)
With mainline kernel performance is lower. Expected result: same performance. What to do? Improve mainline settings. BTW: mainline settings currently are as they are since these were the values megi started with last year.
Once numbers exist they just get copied and pasted from then on.
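To compare the DVFS/THS settings discussed above between a legacy and a mainline image, the standard cpufreq and thermal sysfs nodes can be dumped (a sketch; which nodes exist depends on kernel and board, so everything is guarded):

```shell
#!/bin/sh
# Dump available cpufreq steps and thermal trip points via standard sysfs
# paths; prints nothing on systems where the nodes are absent.
show_dvfs_ths() {
    f=/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies
    [ -r "$f" ] && { echo "cpufreq steps (kHz):"; cat "$f"; }
    for tp in /sys/class/thermal/thermal_zone0/trip_point_*_temp; do
        # trip point temperatures are reported in millidegrees Celsius
        [ -r "$tp" ] && printf '%s: %s\n' "${tp##*/}" "$(cat "$tp")"
    done
    return 0
}
show_dvfs_ths
```

Run on both a legacy and a mainline image, this makes the differences in frequency steps and trip points described above directly visible.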