manuti reacted to tkaiser in Orange Pi Zero went to the market
Sweet! I hope the pins on the TRRS jack are still the same (and the AV cable they sell is still wrong in the same way as before )
The next such HAT should add 2 JMS568 USB-to-SATA bridges and provide one normal SATA jack and one mSATA: The Zero NAS
manuti got a reaction from lanefu in Orange Pi Zero went to the market
The Orange guys are very busy nowadays: a new $2 add-on board for the Orange Pi Zero: https://es.aliexpress.com/item/New-Orange-Pi-Zreo-Expansion-board-Interface-board-Development-board-beyond-Raspberry-Pi/32770665186.html?detailNewVersion=&categoryId=200004017
manuti reacted to tkaiser in Btrfs as root filesystem?
One minor remark: Based on some testing it seems like a great idea to use "-o compress=zlib" when transferring the files to the freshly created btrfs filesystem. Saves a lot of space and should even slightly improve filesystem performance on dual core or better equipped boards. Some details: https://github.com/igorpecovnik/lib/commit/b14da27a4181e8e232bd8f526e71d2a931a8252f
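For reference, the same compression option can be made permanent with an fstab entry (the device node and mount point below are placeholders, not from the post):

```
# /etc/fstab -- sketch; /dev/sdX1 and /mnt/btrfs are placeholders
/dev/sdX1  /mnt/btrfs  btrfs  defaults,compress=zlib  0  0
```

For a one-off transfer, passing "-o compress=zlib" to mount as described above is enough; only files written while the option is active get compressed.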
manuti reacted to zador.blood.stained in Which image for bananapi pro : ubuntu xenial or debian jessie ?
For video decoding acceleration you need to install only libvdpau-sunxi1, other packages will be resolved as dependencies.
You don't need to rebuild mpv on Jessie if I remember correctly, unless you want to use OSD and subtitles.
xserver-xorg-video-fbturbo is not related to video decoding acceleration, but installing it (together with libmali-sunxi-r3p0 or without it) should enable additional Xorg-related acceleration.
manuti reacted to tkaiser in Clean the filters in Download page
IMO we should switch back to 'by date' as default. With current sorting new users accessing this page get the impression that this here is all about boring Bananas while the average potential new Armbian user searches for 'hot stuff' (like OPi Zero in this case).
manuti reacted to tkaiser in Orange Pi Zero went to the market
Some success regarding Wi-Fi:
So currently it's necessary to blacklist the dhd module (if not, things get weird when trying to load xradio_wlan) and the firmware files need a proper location. Since I'm not that much interested in Wi-Fi I'll stop here.
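A minimal sketch of the blacklisting step mentioned above (the file name is my choice, and the sketch writes into a temp directory so it is harmless to run; on a real board the target directory would be /etc/modprobe.d):

```shell
# Sketch only: on a real board use MODPROBE_DIR=/etc/modprobe.d
MODPROBE_DIR="$(mktemp -d)"

# keep the dhd module from loading so xradio_wlan can claim the device
printf 'blacklist dhd\n' > "$MODPROBE_DIR/blacklist-dhd.conf"

cat "$MODPROBE_DIR/blacklist-dhd.conf"
```

After placing the real file under /etc/modprobe.d a reboot (or rmmod of an already loaded dhd) is needed before xradio_wlan can be loaded cleanly.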
manuti reacted to Igor in Clean the filters in Download page
We actually added a test vanilla build which was soon removed. Reason: people failed to read that it's a preview image, that many things do not work and that we don't provide any end user support. People also failed to read that I don't provide support via email or other private channels. To lower the related frustration, we simply removed those images from the download page; now they are accessible elsewhere, where they are a little harder to find. Technically those boards have (limited at this stage) mainline support.
I expect we'll put those images back soon, so I won't change this flag. Or perhaps I should?
manuti reacted to tomsaul in Lost desktop
FWIW - in case others have a similar issue, the problem was quite simple - the system 'disk' was full. Even though there was enough space to do other tasks, and start the UI manually, it apparently didn't have enough to start at boot. After space was cleared, it resumed working as expected.
manuti reacted to killwill in Set static IP in jessie server on orange pi one
Edit /etc/network/interfaces:

# start ###
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
# allow-hotplug eth0
# iface eth0 inet dhcp

# Static IP address
auto eth0 # or auto enp0s7 (see ifconfig)
iface eth0 inet static
address 192.168.25.3
netmask 255.255.255.0
gateway 192.168.25.1
# end ###

and change /etc/resolv.conf:

domain domain.name
search domain.name
nameserver 18.104.22.168
nameserver 22.214.171.124
# for your preference
manuti reacted to killwill in After update for linux-image-sun8i (5.20) over (5.14) - My opione not work
SOLVED!!! After dist-upgrade and before rebooting:
sudo apt-get install -f
Be happy. Thanks to Igor and tkaiser
manuti reacted to Igor in Wayland on ARM SBCs
We (most of the work was done by Zador) just spent months reworking the desktop; it now has more features, is properly packaged so it can be installed on top of a CLI image, and upgrades, hopefully, work fine. We are releasing it within one week.
Armbian tends to focus on base problems and server/IoT functionality, since most of those boards have problems at the ground level, and it's pointless to put a "smooth, high quality desktop" on top of a rotten base. This is what some board manufacturers might do to impress their potential buyers.
We are happy with the current level of smoothness and quality of the desktop, but the project is open and we will support any initiative from outside.
manuti got a reaction from slinde in Most suitable Web Browser
For me, the best H3 web browsing experience is Firefox on a Beelink X2 running Armbian from the internal NAND. The difference between the Beelink X2 (1 GB RAM + Armbian on NAND) and the Orange Pi One (512 MB RAM + Samsung EVO) is very significant. It's not like my Core i5 with 4 GB of RAM, but it's more or less decent.
manuti reacted to slinde in Beelink X2 with armbian possible?
Had a bit of time today to do testing.
Downloaded a fresh copy of the Beelink X2 Jessie server image file Armbian_5.14_Beelinkx2_Debian_jessie_126.96.36.199z, unpacked it and burned it to a micro-SD card. Put the card in my Beelink X2 and switched the power on. The only things connected to the Beelink X2 are power and a network cable.
Of course it booted just as expected and performed the dual boot it should do on first boot. So there is no fault in the ARMbian image file.
Here are my observations of the LED behaviour. By LAN led I mean the led on the switch port my Beelink X2 is connected to. I find observing the led on the switch to be the best way to know what is happening.
1st boot:
solid purple
solid blue, LAN led on
flashing blue/purple, LAN led off
flashing red/blue

2nd boot:
solid purple
solid blue, LAN led on
flashing blue
solid blue, ready to log in via ssh
manuti reacted to Igor in wicd gets erroneous BAD PASSWORD
Adding those two files solves the problem:

~/.local/share/keyring/default:
Default_keyring

~/.local/share/keyring/Default_keyring:
[keyring]
display-name=Default keyring
ctime=1473189692
mtime=0
lock-on-idle=false
lock-after=false

Wifi passwords are then stored as plain text and we are not asked for anything regarding this.
manuti reacted to tkaiser in Some storage benchmarks on SBCs
Now a real world example showing where you end up with the usual 'benchmarking gone wrong' approach. Imagine you need to set up a web server for static contents only. 100 GB of pure text files (not that realistic but just to show you how important it is to look closer). The server will be behind a leased line limited to 100 Mbits/sec. Which SBC to choose?
The usual benchmark approaches tell you to measure sequential transfer speeds and nothing else (which is OK for streaming use cases where DVD images several GB in size are served, but which is absolutely useless when we're talking about accessing 100 GB of small files in a random fashion -- the 'web server use case'). Then the usual benchmark tells you to measure throughput with iperf (web serving small files is about latency, quite the opposite) and to do some silly moronic stuff measuring how fast your web server responds over the loopback interface, with server and test tool on the same machine, not testing the network at all (how does that translate to any real world web server usage? Exactly: not at all).
If we rely on the passive benchmarking numbers and have in mind that we have to serve 100 GB at a reasonable cost we end up thinking about an externally connected HDD and a board with GbE (since iperf numbers look many times faster than Fast Ethernet) and a board that shows the highest page request numbers testing on the local machine (when the whole 'benchmark' turns into a multi-threaded CPU test and has nothing to do with web serving at all). Please don't laugh, but that's how usual SBC comparisons deal with this.
So from the list above you choose an external 500 GB HDD as storage since USB performance looks ok-ish on all boards (+30 MB/s), and a NanoPi M3 since its iperf numbers look nice (GbE) and, most importantly, it will perform best on the loopback interface since it has the most and the fastest CPU cores.
This way you end up with a really slow implementation since accessing the files is more or less only random IO. The usual 2.5" notebook HDD on the USB port achieves less than 100 IOPS (see the above result for a USB HDD on Banana Pro with a UASP incapable enclosure). By looking at iperf performance on the GbE interface you also overlooked that your web server is bottlenecked by the leased line to 100 Mbits/sec anyway.
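To put that IOPS gap in perspective, a rough back-of-the-envelope calculation (the file count is made up for illustration; the IOPS figures are the ones quoted in this thread):

```shell
# time to randomly read 10,000 small files, one IO each (illustrative)
FILES=10000
echo "USB HDD (~100 IOPS): $((FILES / 100)) s"
echo "EVO SD  (~875 IOPS): $((FILES / 875)) s"
```

Roughly 100 seconds versus 11 seconds for the same random workload, before any network traffic is even considered.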
What to do? Use HTTP transport stream compression, since text documents show a compression ratio of more than 1:3, many even 1:10 (every modern web server and most browsers support this). With this activated the NanoPi now reads the text documents from disk, compresses them on the fly, and based on a 1:3 compression ratio we can stream 300 Mbits/sec worth of content through our 100 Mbits/sec line. Initially accessing files is still slow as hell (the lowest possible random IO performance, by choosing a USB HDD) but at least once a file has been read from disk it can saturate the leased line.
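The arithmetic behind that claim, spelled out with the post's own figures:

```shell
LINE_MBITS=100   # leased line capacity
RATIO=3          # conservative 1:3 text compression ratio
# compressed bytes on the wire expand back at the client, so the line
# effectively carries RATIO times more uncompressed content
echo "effective: $((LINE_MBITS * RATIO)) Mbits/sec of uncompressed text"
```

With the more optimistic 1:10 ratio mentioned above the same line would effectively carry 1000 Mbits/sec of text.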
So relying on passive benchmarking we chose a combination of devices (NanoPi M3 + 500 GB HDD) that costs +100$ considering also shipping/taxes and is slow as hell for the use case in question.
If we stop relying on passive benchmarking, really look at our use case and switch on our brain, we can not only save a lot of money but also improve performance by magnitudes. With an active benchmarking approach we identify the bottlenecks first:
- Leased line with 100 Mbits/sec only: we need to use HTTP content-stream compression to overcome this limitation
- Random access to many files: we need to care about random IO more than sequential transfer speeds
- We need to tune our network settings to make the most out of the situation; being able to use the most recent kernel version is important!
- We're on an SBC and have to take care of CPU resources: so we use a web server with minimal resource needs and should find a way to avoid reading uncompressed contents from disk just to compress them on the fly, since this wastes CPU resources

So let's take an approach that would look horribly slow in the usual benchmarks but improves performance a lot: an Orange Pi One together with a Samsung EVO 64 GB as hardware, and mainline kernel + btrfs + nginx + gzip_static as configuration. Why and how does this work?
- Orange Pi One has only Fast Ethernet and not GbE. Does this matter? Nope, since our leased line is limited to 100 Mbits/sec anyway
- We know that the cheap EVO/EVO+ with 32/64 GB perform excellently when it's about random reads. At 4K we get 875 IOPS (3500 KB/s, see comparison of results); that's 8 times faster than an external USB HDD
- We use pre-compressed contents: a cron job compresses each and every one of our static files and creates a compressed version with .gz suffix. If nginx communicates with browsers capable of that, it delivers the already compressed contents directly (no CPU cycles wasted; if we configure nginx with the sendfile option, not even time in userspace is wasted since the kernel shoves the file directly to the network interface!). Combine the sequential read limitation of SD cards on most boards (~23 MB/s) with a 1:3 compression ratio and you end up at ~70 MB/s with this trick. Twice as fast as uncompressed contents on a USB disk
- Unfortunately we would also need the uncompressed data on disk, since some browsers (behind proxies) do not support content compression. How to deal with that? Using mainline kernel, btrfs and btrfs' own transparent file compression. So the 'uncompressed' files are also compressed, but at a lower layer, and while we now have each and every file twice on disk (SD card in fact) we only need 50 GB storage capacity for 100 GB of original contents based on a 1:3 compression ratio. The increase in sequential read performance is still twice as fast since decompression happens on the fly.
- Not directly related to the filesystem, but by tweaking network settings for low latency and many concurrent connections we might also be able to improve requests per second by a factor of 2 when many clients access in parallel, compared to an old smelly Android 3.x kernel we still have to use on many SBCs (relationship with storage: if we tune network settings this way we need storage with high IOPS even more)

An Orange Pi One together with an EVO 64 GB costs a fraction of a NanoPi M3 + USB HDD and consumes nearly nothing while being magnitudes faster for the 'static files web server' use case if set up correctly. And the usual moronic benchmarks testing CPU horsepower, GbE throughput and sequential speeds would show exactly the opposite.
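The pre-compression cron job described above can be sketched like this (the paths are illustrative; the sketch works on a temp directory so it can be run anywhere, and it assumes gzip 1.6+ for the -k flag):

```shell
WEBROOT="$(mktemp -d)"                 # stand-in for /var/www
printf 'hello hello hello hello\n' > "$WEBROOT/index.html"

# -k keeps the original (for clients that can't accept gzip),
# -9 spends CPU now, in the cron job, instead of per request
find "$WEBROOT" -type f ! -name '*.gz' -exec gzip -k -9 -f {} +

ls "$WEBROOT"    # both index.html and index.html.gz are now present
```

With "gzip_static on;" in its config, nginx then sends index.html.gz directly whenever the client advertises gzip support, which is exactly the "no CPU cycles wasted" path described above.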
And you get this reduction in costs and this increase in performance just by no longer believing all these 'benchmarking gone wrong' numbers spread everywhere and by switching to active benchmarking: testing the stuff that really matters, checking how it correlates with reality (your use case and the average workload) and then setting things up the right way.
Final note: Of course an Orange Pi One is not the perfect web server due to low amount of DRAM. The best way to overcome slow storage is to avoid access to it. As soon as files are in Linux' filesystem cache the speed of the storage implementation doesn't matter any more.
So having our web server use case in mind: if we do further active benchmarking and identify a set of files that are accessed most frequently, we could add another Orange Pi One and a Pine64+ with 2 GB. The new OPi One acts as load balancer and SSL accelerator for the second OPi One; the Pine64+ does SSL encryption on its own and holds the most frequently accessed 1.7 GB in RAM ('grep -r foobar /var/www' at startup in the background -- please keep in mind that it's still +5 GB in reality if we're talking about a 1:3 compression ratio. Simply by switching on our brain we get 5 GB of contents cached in memory on a device that features only 2 GB physical RAM!). And the best part: both new boards do not even need local storage since they can be FEL booted from our first OPi One.
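The 'grep -r foobar' cache-warming trick, as a self-contained sketch (a temp directory stands in for /var/www; 'foobar' is just a pattern assumed never to match, the only point is that grep reads every byte into the Linux page cache):

```shell
WWW="$(mktemp -d)"                     # stand-in for /var/www
echo "some static content" > "$WWW/page.html"

# reading the files once pulls them into Linux' filesystem cache;
# '|| true' because grep exits non-zero when nothing matches
grep -r foobar "$WWW" > /dev/null || true

find "$WWW" -type f    # every file listed here is now cached in RAM
```

As long as memory pressure stays low the kernel keeps those pages cached, so later HTTP requests never touch the storage at all.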
manuti reacted to tkaiser in Mainline kernel and dvfs / throttling / thermal settings
We provided this week experimental Armbian images with mainline kernel for a few H3 boards. On the official download pages there are a few variants with 4.6.7 lurking around and here you find some with 4.7.2: http://kaiser-edv.de/tmp/w8JAAY/
Those for the NEO are also suitable for NanoPi M1 and OPi One/Lite; the one for OPi PC Plus can be used on any of the larger Oranges (no Ethernet on +/+2/+2E -- don't ask please, this is stuff for later) since they all use the same more sophisticated voltage regulator able to adjust VDD_CPUX in 20mV steps (VDD_CPUX is the voltage the CPU cores are fed with).

The procedure is called dynamic voltage frequency scaling (dvfs) and the idea behind it is to lower the voltage on the CPU cores when they're running at lower clockspeeds and vice versa. By now this works pretty well with the legacy kernel, but it required a lot of work to come up with optimal settings that are still reliable (undervolting the CPU causes stability problems and data corruption!) while also providing the best performance (lower VDD_CPUX voltage, less heat, later throttling). For details please read through the whole issue here: https://github.com/igorpecovnik/lib/issues/298

So what has changed with mainline kernel now? In Armbian we use megi's kernel branch containing Ethernet and dvfs/THS patches and a few others. What's still missing and what do you get now when trying out these mainline images? No HDMI, no audio, no sophisticated SBC stuff like I2C, SPI and so on (unless you know how to deal with device tree overlays). But USB, Ethernet on all Fast Ethernet equipped devices (no network on GbE models currently!), cpufreq scaling / dvfs, working WiFi, and with 4.7 also the opportunity to test USB OTG and the new schedutil cpufreq governor.

My 4.7.2 Armbian releases also contain a new armbianmonitor variant that can deal with mainline kernel on H3 boards (different templates needed and a different method to get VDD_CPUX values -- fex file vs. device tree) and can install cpuminer. Why cpuminer? To test the efficiency of throttling settings -- see below.
As usual RPi-Monitor can be installed using 'sudo armbianmonitor -r' and now cpuminer will be installed by 'sudo armbianmonitor -p' (p for performance measurements). To let cpuminer run in fully automated mode do a 'touch /root/.cpuminer'; then minerd in benchmark mode will start immediately after booting and results will be collected by RPi-Monitor (not on the status page but only on the statistics page -- actual values aren't interesting, only behaviour over time!)

Dvfs settings for Orange Pi PC and the other SY8106A equipped H3 devices already look good and work quite well (although I would tweak them here and there) while the settings for the H3 devices with the more primitive voltage regulation do not. I used a NanoPi NEO for the test but the results apply to all H3 boards that use only 2 different VDD_CPUX voltages: NanoPi M1/NEO/NEO-Air and OPi One/Lite. Unlike the Oranges, NanoPi M1 and NEO overheat more easily, the latter especially (maybe due to smaller PCB size, single bank DRAM configuration and LDO regulators near the SoC?). I tested on the only remaining NEO that does not wear a heatsink.

In the beginning I allowed 1200 MHz max cpufreq, but since I used neither heatsink nor fan, throttling had to jump in to prevent overheating. In this mode H3 started running cpuminer at 1200 MHz, clocked down to 1008 MHz pretty fast and from then on always switched between 624 MHz (1.1V VDD_CPUX) and 1008 MHz (1.3V VDD_CPUX). The possible 816 MHz (1.1V) in between was never used. Average consumption in this mode was 2550 mW and the average cpuminer score 1200 khash/s.

I then limited max cpufreq to 816 MHz through sysfs and let the test continue. In the beginning H3 switched between 624 and 816 MHz, but since SoC temperature further decreased H3 then stayed at 816 MHz all the time and below 75°C (the highest allowed cpufreq at the lower VDD_CPUX core voltage with megi's settings!). Average consumption in this mode was 2420 mW and the average cpuminer score 1350 khash/s.
This is how cpufreq and temperatures correlated over time:
So we got an increase in performance from 1200 to 1350 khash/s (+12.5%) while lowering consumption by 130 mW (2550 mW vs. 2420 mW); not only did performance increase but also the performance per watt ratio, simply by manually adjusting maximum cpufreq and forbidding everything above 816 MHz. Quite the opposite of what one would expect. At least it should be obvious that dvfs settings for the small H3 devices need some attention.

I monitored consumption through the AXP209 on a Banana Pro feeding the H3 device through its USB port. The high voltage fluctuations due to the NEO's voltage regulator constantly switching between 1.1V and 1.3V can be seen until 8:12; then the '30 minutes average value' stabilized at 8:42 and the real consumption difference could be read: 130 mW.

With legacy kernel we defined a lot more possible cpufreq/dvfs operating points, so let's give it a try. Same hardware setup (same NEO, same USB-to-Micro-USB cable from Banana Pro to NEO, same upright position for nearly identical thermal behaviour), same DRAM clockspeed (408 MHz) but different DVFS/THS settings of course. If we compare mainline kernel with max cpufreq limited to 816 MHz against legacy kernel we get:

4.7.2: 1350 khash/s, ~74°C, constant 816 MHz cpufreq, 2420 mW reported consumption
3.4.112: 1150 khash/s, ~80°C, 648-720 MHz cpufreq, 2610 mW reported consumption

Looking at these numbers it simply feels weird: lower clockspeed, lower performance but higher temperatures. That would be an indication of different thermal readouts between mainline and legacy kernel (we had the same issue half a year ago when switching from BSP u-boot to mainline: temperature readouts 10-15°C lower). But fortunately I also monitored consumption, and there it's 200 mW more. On the same hardware with the same hardware setup.
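The performance-per-watt improvement can be double-checked with the measured averages quoted above:

```shell
# khash/s divided by watts, from the measured averages above
eff_capped=$(awk 'BEGIN { printf "%.0f", 1350 / 2.42 }')   # 816 MHz cap
eff_full=$(awk 'BEGIN { printf "%.0f", 1200 / 2.55 }')     # 1200 MHz max
echo "816 MHz cap: ${eff_capped} khash/s per W"
echo "1200 MHz:    ${eff_full} khash/s per W"
```

Roughly 558 vs. 471 khash/s per watt: capping the clockspeed improved efficiency by almost a fifth on this board.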
So there is really something happening on the NEO that wastes more energy when running minerd with legacy kernel, and that might be responsible for the higher temperatures and the more aggressive throttling leading to lower performance (at least that's the only possible reason I can imagine). Since we already know that on the NEO adjusting DRAM clockspeed with legacy kernel makes a huge difference regarding consumption and temperatures (see post #13 here), maybe the whole problem is related to the different DRAM config on the NEO (single bank vs. dual bank on all other H3 devices) and something's wrong with mainline kernel here? I don't know, but I already decided to repeat the test with NanoPi M1 (dual bank DRAM config but also the primitive 1.1/1.3V voltage regulation)