Everything posted by tkaiser

  1. Those people implemented this already. Just call armbianmonitor without arguments to understand -c (check your SD card for counterfeit/fraud issues) and -v (verify installation -- the latter only available on new images now). BTW: Since you successfully verified that Armbian 'just works' (on another SD card than the one of your broken installation) now you might also understand why Bananian and Raspbian 'just worked'?
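For reference, the invocations mentioned above look roughly like this (a sketch only; the authoritative switch list is armbianmonitor's own help output, so run it without arguments first -- the path given to -c is where the test data gets written):

```shell
armbianmonitor                 # no arguments: prints the full help/usage text
sudo armbianmonitor -c $HOME   # check the SD card for counterfeit/fraud issues
sudo armbianmonitor -v         # verify the installation (new images only)
```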
  2. How? With one of those 'PSP charger cables' which often have too tiny wires? Anyway, this thread should be moved to https://forum.armbian.com/forum/31-sd-card-and-power-supply/
  3. A minute would've been sufficient if the armbianmonitor output contained both the idle and the stress situation (yours shows only the latter). This would've allowed comparing input voltages, but in your installation powering looks sufficient (still at 5.2V while stress is running). So your instabilities seem not to be related to two of the three major problems that occur with Micro USB powered Bananas. Maybe your installation on the other card is already broken, maybe the SD card there has a problem. It would be interesting to see logs (armbianmonitor -u) from your original installation and also whether you're able to clone the card (on another system, using ddrescue for example) and then check it with either F3 or H2testw (if you're running off a counterfeit card with faked capacity your symptoms would also match)
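The clone-and-check procedure could look like this (a hedged sketch; /dev/sdX is an example device name, and on Debian/Ubuntu the tools come from the 'gddrescue' and 'f3' packages):

```shell
# On another Linux system with the suspect card in a reader (here /dev/sdX):
sudo ddrescue -d /dev/sdX sdcard.img sdcard.map   # clone, retrying bad areas
# Then test the physical card for faked capacity. f3probe is the quick,
# destructive variant (the card's contents are already saved in sdcard.img):
sudo f3probe --destructive /dev/sdX
```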
  4. If you want to focus on sequential performance (which is most probably NOT what you should do), then keep in mind that on H3 boards the sequential maximum is ~23 MB/s. So you're talking about '23/20 or 23/10' anyway and it's pretty easy to decide what to prefer, though the numbers above look somewhat bogus anyway. Some background info (and why looking at random IO is most probably way better, which needs a test on the device itself): https://forum.armbian.com/topic/954-sd-card-performance/
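Testing random IO on the device itself could be done with iozone, in the spirit of the linked thread (a sketch; the record sizes are examples -- the important part is -I, which bypasses the page cache so the card itself gets measured):

```shell
cd /path/on/the/sd-card   # run the test on the card, not in RAM or on eMMC
# -i 0/1/2 = sequential write/read plus random IO, at 4k and 16k record sizes
iozone -e -I -a -s 100M -r 4k -r 16k -i 0 -i 1 -i 2
```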
  5. Just try it out and report back. Currently you're not even aware of the problem (like almost all users -- Micro USB being such a shit show) since you talk about a USB cable between your board and disks while it's about the USB cable between your PSU and your board (this is where the voltage drop happens). http://linux-sunxi.org/Powering_the_boards_and_accessories#Micro_USB http://forum.lemaker.org/forum.php?mod=viewthread&tid=8312&extra=page%3D1
  6. Which might imply that your whole installation on the SD card is already corrupted as a result of countless 'hard reboots'. It's useless to diagnose at this stage and I doubt anyone is interested in diagnosing it since boards with Micro USB for DC-IN are a support nightmare anyway. You can try to repeat your tests with a fresh Armbian install but unless you fix your underpowering problems it won't help. The easiest way to diagnose underpowering problems is to download the latest nightly build from here, burn it to a separate SD card, boot the system with the usual stuff connected and then simply run the following in one terminal, a few seconds later start 'stress -c 2 -m 1' in another terminal and watch what happens:
     tk@lime2:~$ sudo armbianmonitor -m
     Stop monitoring using [ctrl]-[c]
     Time       CPU    load %cpu %sys %usr %nice %io %irq   PMIC   DC-IN
     17:24:47: 720MHz  0.11   3%   1%   1%   0%   0%   0%  40.8°C  4.77V
     17:24:52: 528MHz  0.10   3%   1%   1%   0%   0%   0%  40.2°C  4.74V
     17:24:57: 528MHz  0.09   3%   1%   1%   0%   0%   0%  40.2°C  4.75V
     17:25:03: 528MHz  0.08   3%   1%   1%   0%   0%   0%  40.4°C  4.76V^C
     (we added voltage monitoring to armbianmonitor recently since boards with Micro USB for DC-IN are such a support nightmare but this will only work on the most recent OS images)
  7. Hmm... but "we" (Armbian devs) can not test for instabilities (since sample size way too small) and users will not report 'DRAM reliability' issues but problems that sound exactly like our favourite issues which we can't influence (powering problems and SD card crappiness). Just as a reference (not meant to encourage further discussion but for commit message) http://linux-sunxi.org/Orange_Pi_PC#DRAM_clock_speed_limit. 24 devices participating, 5 not working reliably at 672 MHz but only at 648 MHz max. That's more than 20% failure rate and justifies switching to the next lower possible value since performance differences are negligible while affected reliability should be the real concern.
  8. Why are we trying to use 672 or 648 MHz in the first place? Because there's somewhere a value in a 3.10 DT or sys_config.fex that gets ignored by the BSP anyway? Hmm... https://irclog.whitequark.org/linux-sunxi/2017-11-17#20573274; -- I can only see that 672 is bad, while 576 and 624 are good. But most probably I missed something... I don't want to force any fixes upstream since IMO that's just a waste of time. The submitted numbers there are the result of copy&paste gone wrong and no one cares. One and a half years ago we wasted an insane amount of time testing through these issues and it should be common sense that 672 MHz is not reliable. This testing doesn't happen 'upstream' (the only time it happened the results were easy to interpret: stay away from 672 MHz since it will cause issues on some boards) and submitters prefer 'performance' over reliability.
  9. Ok, that was most probably not the best idea. I set up a monitoring script running in a screen/SSH session (parsing memtester's nohup.out for 'fail' and reporting SoC temperature every 3 minutes) and that ended like this after 3 hours of testing below a pillow:
     ok 91.5°C
     ok 90.3°C
     ok 91.4°C
     ok 92.4°C
     ok 92.4°C
     ok 90.6°C
     ok 92.3°C
     ok 90.9°C
     ok 92.2°C
     ok 91.8°C
     ok 92.6°C
     ok 91.6°C
     ok 93.0°C
     ok 93.4°C
     Then reporting stopped, I burned my fingers (since I forgot that I cooked the whole setup, including the NAS Expansion board I power the Zero Plus through), and then I had to realize that the SD card died. Anyway: this ran since yesterday at 672 MHz without memtester reporting memory corruption, any objections to using 624 MHz now for OPi Zero Plus and also NEO2 and NEO Plus2? @zador.blood.stained: you set DRAM clockspeed for OPi PC 2 and Prime to 648 MHz. @Icenowy IIRC reported yesterday in linux-sunxi IRC that she had problems at this clockspeed and needed to go a little lower. Should we use 624 MHz here too?
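The monitoring script itself wasn't posted; a hedged reconstruction of the loop described above could look like this (the thermal zone path is an assumption that fits mainline sunxi kernels, and file names are examples):

```shell
#!/bin/bash
# Every 3 minutes check memtester's nohup.out for reported failures and
# print the current SoC temperature (sysfs reports millidegrees Celsius).
NOHUP_OUT=nohup.out
TEMP_NODE=/sys/devices/virtual/thermal/thermal_zone0/temp
while true ; do
    if grep -qi fail "${NOHUP_OUT}" ; then
        echo "FAILURE reported by memtester, stopping"
        break
    fi
    echo "ok $(awk '{printf "%.1f°C", $1/1000}' <"${TEMP_NODE}")"
    sleep 180
done
```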
  10. And the kernel he used back then is horribly outdated, contains vulnerabilities like 'rootmydevice' and 'dirty COW' but no driver for the onboard Wi-Fi. Also the thermal settings will fry your device. You won't have fun with this even if anyone explains to you how to fiddle around with script.bin (which every search engine will tell you anyway), so better think twice about what you want to do. I would start with security basics first instead of looking for 'KALI-TOOLS' but this is not the focus of this forum anyway.
  11. That's not a SATA port but a broken USB-to-SATA bridge behind an internal USB hub (both need power). So basically the USB config is different: usbc2 is usually deactivated on OPi Plus (2) since there the USB receptacles are behind the internal USB hub, while it's needed on OPi Plus 2E to be able to use all USB ports. Just check the fex entry for usbc2 at https://github.com/armbian/build/tree/master/config/fex (which is also a suitable way to adapt outdated H3 board OS images for other H3 boards --> exchanging script.bin, though I would really not recommend this, especially with stuff like 'Kali', since its focus on security is simply a bizarre joke with the exploit-ridden, laughable Android 3.4.39 kernel the available images rely on)
  12. As it is on almost all systems with a working network connection, since on Armbian NTP is active by default. Please update the first post with this important piece of information for others trying to solve the same problem -- which these days only affects stand-alone systems without any network connection
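A quick way to confirm that network time sync is active on such a systemd-based image might be (sketch; field names differ between systemd versions, hence the loose pattern):

```shell
# Show only the NTP/synchronization lines of the time status
timedatectl status | grep -iE 'ntp|synchron'
```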
  13. Care to report the instabilities you will now run into (before others find this thread and think this would be a great idea)? You could start with https://github.com/ehoutsma/StabilityTester but would need to adjust both MAXFREQUENCY and the REGULATOR_* variables first.
  14. Doesn't matter, that's what the '### Installed packages' entry is for. Since I don't understand what's happening on your installation I don't think so. You're running off eMMC, there's no SD card present, the boot environment doesn't look right and there's an older nand-sata-install.log lying around. I would need to understand this first...
  15. Strange. The output neither reflects 'DEFAULT_OVERLAYS="usbhost2 usbhost3"' set in the meantime, nor DRAM being clocked down to 624 MHz, and there's also 'nand-sata-install.log: Thu Oct 12 10:03:09 UTC 2017: Start nand-sata-install.' included. Oh, wait. I just tried it again on my OPi Zero Plus now that cpufreq scaling is back (it went away yesterday after some updates and a reboot, and now after new updates + reboot everything is back again) and tinymembench now shows with 672 MHz:
     standard memcpy : 925.3 MB/s (0.6%)
     standard memset : 2564.2 MB/s
     Ok, looking at tinymembench numbers is useless if we don't also know the cpufreq settings when the test was running...
  16. Can you please provide output from 'armbianmonitor -u' too? Just to be sure since it looks like your DRAM is clocked with 672 MHz and we added a slight downclock 2.5 weeks ago (most probably only part of the beta repository yet).
  17. My bad. I totally overlooked that the above u-boot version requires an OS image with the DT for Orange Pi Zero Plus already present (and that's only the case on self-built images or latest nightlies built yesterday or later). Sorry for the hassles! @willmore In case you want to give it a try then please only try it out based on this nightly. The test scenario with 672 MHz DRAM clock is to run tinymembench and memtester endlessly in parallel (it has been running here for 12 hours without problems, I'll later add a pillow to the setup just as I did yesterday when running the same stuff with BSP to ensure SoC and DRAM temperatures are 95°C or above).
  18. Sure. Since I assume you're also running a 'nightly' here are the official u-boot debs (using 408 MHz): https://beta.armbian.com/pool/main/l/linux-u-boot-nanopineo2-next/ And here at the bottom are variants with 624 MHz and 672 MHz but testing is only useful with the latter since some safety headroom is needed (so if 672 MHz works then choosing 624 MHz for productive usage would be an idea). But please be aware that this is reliability testing and crashes/freezes are to be expected if we're talking about worst case here. I now have two scripts running in parallel, the one for tinymembench looking like this:
     root@orangepizeroplus:~/tinymembench# cat tinymembench.sh
     #!/bin/bash
     while true ; do
         ./tinymembench
     done
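The second of the two scripts isn't shown in the post; its memtester counterpart could look like this (a sketch -- the 256M test size is an example, leave headroom for the OS):

```shell
#!/bin/bash
# Endlessly run memtester on a 256 MiB chunk of RAM (one pass per loop
# iteration), appending all output to a log that can be grepped for
# 'FAILURE' afterwards.
while true ; do
    memtester 256M 1 >>memtester.log 2>&1
done
```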
  19. Thank you. Same numbers as with OPi Zero Plus at 408 MHz. I just prepared a u-boot variant that clocks DRAM at 672 MHz to let the OPi Zero run memtester and tinymembench in parallel overnight (I checked it at 624 MHz before with both mainline and BSP for at least half an hour and no memory corruption was reported). Maybe you can give it a try too on a spare SD card? It's just installing with 'dpkg -i' and rebooting.
  20. Ok, switching from 408 MHz DRAM clock to 624 MHz on OPi Zero Plus (still cpufreq limited since DVFS is not working; I'm running at 1008 MHz but can consider myself lucky since we've seen H5 boards that get unstable at 1008 MHz with 1.1V VDD_CPUX for sure):
     Test                 BSP       Armbian old   Armbian new
     std memcpy MB/s:     887.9     634.8         875.1
     std memset MB/s:     2037.9    1553.0        2252.7
     7z comp 22:          1288      1234          1515
     7z comp 23:          1344      1279          1637
     7z decomp 22:        3296      3329          3832
     7z decomp 23:        3215      3317          3768
     sysbench 648 (s):    14.4798   14.1447       14.1378
     sysbench 816 (s):    11.4151   11.2191       11.1106
     sysbench 1008 (s):   9.2395    9.0787        8.9818
     openssl speed aes:   almost identical
     So once DVFS is working there will be another slight performance increase in a few weeks.
  21. Will never work: https://forum.armbian.com/topic/4635-network-manager-fails-on-upgrade-from-debian-jessie-to-stretch/?do=findComment&comment=35096
  22. Another update since I think we're partially running in the wrong direction. Last year we discovered that the small H3 boards with single bank DRAM configuration like NanoPi NEO or OPi Zero showed pretty low memory bandwidth and a tendency to overheat when DRAM was clocked higher (especially there was a huge consumption/temperature jump when switching from 408 MHz DRAM clock to 432 MHz), and overall idle consumption on a NanoPi NEO varied by 470 mW depending on DRAM frequency alone (comparing 132 MHz with 624 MHz back then). That's why we set DRAM clockspeed to 408 MHz by default on the small Allwinner boards. Since I can't remember that we verified this with an H5 board yet I simply gave it a try and compared idle temperature of the OPi Zero Plus with 408 MHz DRAM clock vs. 624 MHz: less than 2°C difference. That's nothing. So I tested again with 624 MHz DRAM clock (but with Debian Stretch so forget about comparing the sysbench results with anything above since different compiler --> different results): https://pastebin.com/4iM6DwJM Memory bandwidth is a lot better of course and if we compare with the H3 boards below we see that even single bank DRAM configs with H5's memory controller outperform the H3 numbers (the inner numbers are 32-bit memory configs and the outer 16-bit; it's the 'standard memcpy' value shown):
     DRAM clock   NanoPi NEO   OPi Lite     OPi Plus 2E   OPi Zero Plus
     408 MHz:     433.5 MB/s   594.8 MB/s   580.5 MB/s    634.8 MB/s
     624 MHz:     484.8 MB/s   892.9 MB/s   905.5 MB/s    875.1 MB/s
     Can users with access to other H5 devices please provide tinymembench numbers? Especially interested in OPi PC2 or Prime and NanoPi NEO2 or Plus2. It's as easy as
     git clone https://github.com/ssvb/tinymembench
     cd tinymembench/
     make
     ./tinymembench
  23. Nah, but let's try another 'joke'. Can you imagine how the lower PCB side of the new small Xunlong H6 boards might look when checking the size of an mPCIe Marvell SATA controller and OPi One/Lite: (disclaimer: I've really no idea what Xunlong is planning, just a thought since H6 has PCIe so why not try to expose the interface? Maybe just routing/preparing the 52 pins and not soldering the connector by default?)
  24. After 7 months of work on this let's now close the chapter. Today I generated the last OMV image based on Armbian for a nice little guy who arrived yesterday (since the board followed the usual conservative Xunlong hardware development style it was easy to add support within 24 hours: half an hour on generating the wiki page, some basic benchmarks to get behaviour/performance with BSP, editing the wiki page a bit. Now it's added to the build system and the only remaining issues are the Wi-Fi driver -- which I'm not interested in at all -- and DVFS / voltage regulation to kick this little guy up to 1.2 GHz cpufreq). I waited for this test over 9 months now (Orange Pi NAS Expansion board arriving and yesterday finally the GbE enabled companion). Prerequisites:
  • Xunlong's NAS Expansion board needs firmware updates first for top performance (though still USB2 only so we're talking about slightly above 40MB/s as maximum here!)
  • Since the application in question needs some fault tolerance a redundant RAID mode has been set as mandatory. While I really hate the combination of RAID-1 with an incapable filesystem I chose RAID-10 with just 2 USB disks and far layout with a btrfs on top (btrfs then provides data integrity, snapshots and somehow also backup functionality)
  • Since the average data on this appliance will benefit from transparent file compression (1:1.4) btrfs is used with 'compress=lzo' for the below benchmarks (which is kinda stupid since benchmarks using highly compressible test data will perform totally differently than real-world data. But I was looking for worst case scenario and thermal issues now)
So I let the build system generate an OMV image for Orange Pi Zero Plus automatically, booted it and adjusted only one value: max cpufreq has been lowered to 912 MHz to ensure operation at the lowest consumption/thermal level (keeping VDD_CPUX at 1.1V all the time). Such a RAID10 across 2 USB2 disks is limited to ~40MB/s write and 80MB/s read with cheap Allwinner boards if they can run a mainline kernel to make use of UAS (USB Attached SCSI). Network performance of an H5 board at 912 MHz is limited to around 750/940 MBits/sec and then there's also some compression related CPU utilization when making use of transparent file compression. This is a standard LanTest on this construct (with real-world data the write performance would be a lot lower but for my use cases this doesn't matter that much): Now I pulled the plug of one of the 2 disks to turn the RAID10 into a degraded RAID10 without any redundancy any more: As we can see performance is not affected at all but that's just the result of using a benchmarking setup that's not focused on highest storage performance but on thermal behaviour (with real-world data read performance would've dropped significantly). The H5 on the OPi Zero Plus wears a 16x16mm heatsink with sufficient airflow around it. Ambient temperature is at ~24°C, H5 idles at 40°C. With the above benchmark/settings the maximum SoC temperature reported was 62°C with H5 clocked at 912 MHz the whole time. Now let's look at a RAID resync. I attached another disk appearing as /dev/sdc now, added it to the RAID10 and immediately the resync started to regain full redundancy: (with real-world data write performance would've also dropped dramatically but I was interested in thermal results, and the whole time during the rebuild while benchmarks were running H5 reported 60°C max).
Thanks to the ARMv8 AES crypto extensions H5 should be able to serve as a VPN endpoint too without sacrificing the above performance, so it really looks promising for those small boards being used in such an appliance (replacing some +40TB NAS devices again ) Some technical details: the RAID10 has been created as in the link above, the btrfs has been created with defaults, the only change being 'compress=lzo' added to the mount options. After removing one disk, adding the new one and starting the resync with the benchmark running in parallel the array looked like this:
     root@orangepizeroplus:/srv# mdadm --manage /dev/md127 -a /dev/sdc
     mdadm: added /dev/sdc
     root@orangepizeroplus:/srv# cat /proc/mdstat
     Personalities : [raid10]
     md127 : active raid10 sdc[2] sda[0] sdb[1](F)
           117155288 blocks super 1.2 4K chunks 2 far-copies [2/1] [U_]
           [>....................]  recovery =  0.1% (191092/117155288) finish=51.0min speed=38218K/sec
           bitmap: 1/1 pages [4KB], 65536KB chunk
     unused devices: <none>
     root@orangepizeroplus:/srv# mdadm -D /dev/md127
     /dev/md127:
             Version : 1.2
       Creation Time : Thu Nov 16 17:43:32 2017
          Raid Level : raid10
          Array Size : 117155288 (111.73 GiB 119.97 GB)
       Used Dev Size : 117155288 (111.73 GiB 119.97 GB)
        Raid Devices : 2
       Total Devices : 3
         Persistence : Superblock is persistent
       Intent Bitmap : Internal
         Update Time : Thu Nov 16 21:16:51 2017
               State : clean, degraded, recovering
      Active Devices : 1
     Working Devices : 2
      Failed Devices : 1
       Spare Devices : 1
              Layout : far=2
          Chunk Size : 4K
      Rebuild Status : 0% complete
                Name : orangepizeroplus:127  (local to host orangepizeroplus)
                UUID : 1eff01f2:c42fc407:09b73154:397985d4
              Events : 1087
         Number   Major   Minor   RaidDevice State
            0       8        0        0      active sync   /dev/sda
            2       8       32        1      spare rebuilding   /dev/sdc
            1       8       16        -      faulty
     root@orangepizeroplus:/srv# cat /proc/mdstat
     Personalities : [raid10]
     md127 : active raid10 sdc[2] sda[0] sdb[1](F)
           117155288 blocks super 1.2 4K chunks 2 far-copies [2/1] [U_]
           [=>...................]  recovery =  7.6% (8905052/117155288) finish=106.7min speed=16898K/sec
           bitmap: 1/1 pages [4KB], 65536KB chunk
     unused devices: <none>
Without heavy disk activity in parallel the resync performs between 30 and 39 MB/s.
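The creation link didn't survive the archive, so here is a hedged sketch of how such a 2-disk far-layout RAID-10 with btrfs on top could be set up (device names, the 4K chunk size and the far=2 layout are taken from the mdadm output above; the mountpoint is an example, and these are not necessarily the author's exact commands):

```shell
# RAID-10 across 2 devices with 2 far-copies ('f2') and 4K chunks
mdadm --create /dev/md127 --level=10 --layout=f2 --chunk=4 \
      --raid-devices=2 /dev/sda /dev/sdb
mkfs.btrfs /dev/md127                     # btrfs with defaults
mount -o compress=lzo /dev/md127 /srv     # transparent LZO compression
```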
  25. Since I'm preparing a little NAS test with Orange Pi Zero Plus today I upgraded the firmware of the other JMS578 on the NAS Expansion board. Side effect: with the more recent firmware the USB-to-SATA bridge also provides more real information from the connected disk instead of itself. Prior to the firmware upgrade only /dev/sda showed nice information: After I upgraded the 2nd JMS578 (needs a full shutdown and removing power after the upgrade!) now /dev/sdb also reveals more information (though the drive's serial is still bogus):
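Querying what the bridge reveals about the attached drive can be done with smartmontools (a sketch; '-d sat' forces SAT pass-through, which such USB-to-SATA bridges support with suitable firmware -- /dev/sdb matches the device above):

```shell
sudo smartctl -d sat -i /dev/sdb   # drive identity: model, firmware, serial
sudo smartctl -d sat -A /dev/sdb   # SMART attributes of the disk behind the bridge
```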