Reputation Activity

  1. Like
    manuti reacted to tkaiser in The new Banana PI M2 Ultra   
    Why do we waste our time with this board (which will most likely be the same shit show as the M2+ was/is -- at least that's the device I lost the most time with, due to the vendor being too stupid to even provide a correct schematic)?
    Why not spend our time on improving things with other boards or Armbian in general (the Wi-Fi firmware situation, refactoring to deal better with more and more boards and installation variants, preparing Debian Stretch and so on)?
  2. Like
    manuti reacted to lanefu in Espressobin support development efforts   
    Ya, I'll try to tidy things up and send a PR.

    There are a few weird things, like the interface being literally called wan from the device tree perspective rather than eth0 -- perhaps because of the Topaz switch being in the middle of it. I have a few other questions I need to post. Should I start a more topic-appropriate thread elsewhere?

    Sent from my SM-G920V using Tapatalk

  3. Like
    manuti reacted to BigChris in [Solved] Shutdown Button for NanoPi Neo Air   
    I think I can answer myself.
    I'm too stupid to do it like I want, with Python. But shell scripts work everywhere and I understand them in most cases. So, here is a solution for my problem.
    I use PIN 18, which is Linux GPIO 201. I put a pull-up resistor (10 kOhm) to PIN 17 (3.3V).
    After this, I configure PIN 18:
    echo 201 > /sys/class/gpio/export
    echo "in" > /sys/class/gpio/gpio201/direction
    Now I can check the value:

    cat /sys/class/gpio/gpio201/value
    The answer should be 1.
    If PIN 18 is now shorted to PIN 20 (GND), the answer is 0.

    A little script does it for me:
    #!/bin/sh
    BUTTON=201 # shutdown button
    echo "$BUTTON" > /sys/class/gpio/export
    echo "in" > /sys/class/gpio/gpio$BUTTON/direction
    while true ; do
      data=`cat /sys/class/gpio/gpio$BUTTON/value`
      if [ "$data" -eq "0" ] ; then
        shutdown -h now
      else
        cnt=0
      fi
    done
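    One note on the loop above: without a sleep it keeps a CPU core at 100%. A minimal sketch of the same approach with a polling interval added (the helper name and the one-second interval are my own; the GPIO paths are those from the post):

```shell
#!/bin/sh
# Same idea as the script above, but with a sleep so the polling loop
# does not keep a CPU core at 100%. GPIO number as in the post; the
# one-second default interval is an assumption.
BUTTON=201
VALUE_FILE="/sys/class/gpio/gpio$BUTTON/value"

# Poll a GPIO value file; return once the button pulls the pin to GND (0).
wait_for_press() {
  while [ "$(cat "$1")" -ne 0 ]; do
    sleep "${2:-1}"
  done
}

# On the board (after the export/direction setup shown above):
#   wait_for_press "$VALUE_FILE" 1 && shutdown -h now
```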
  4. Like
    manuti reacted to araczkowski in OPI PC: Possible to directly upgrade from server to desktop version?   
    I'm writing this post from an Orange Pi Zero 2+ H3, so it works
    I downloaded the desktop image from "other download options and archive"
    In my opinion the Ubuntu desktop – legacy kernel image can be added to the "Orange Pi Zero 2+ H3" download page
    During the installation I had only one problem, with USB OTG -- I will describe it in detail in a separate thread.
    I'm trying to switch the device in my project from the OPi PC+ to the OPi Zero 2+ H3 (because of the Bluetooth and small size); I will share the experiences.
    Thanks again @Igor the Armbian is awesome!

  5. Like
    manuti reacted to BigChris in [Solved] Shutdown Button for NanoPi Neo Air   
    I want to use a shutdown button to make a secure shutdown with the NanoPi NEO Air. But I am completely lost as to how it works. I can't find out which GPIOs I can use for a script, and whether I need a pull-down or not.
    Can somebody help me?
  6. Like
    manuti reacted to zador.blood.stained in F2FS revisited   
    ... or not. Again, it's a benchmark. It doesn't necessarily hit the bottleneck in the system it was designed to measure, it doesn't tell you where the bottleneck responsible for the numbers you are getting is, and it doesn't take all the factors into account.
    Well, the results in this thread show some numbers related to I/O performance, nothing more.
    If you wanted to really compare filesystems and not the underlying storage, you would need to test a lot of other things:
    CPU utilization by the FS driver in different scenarios
    memory utilization by the FS driver in different scenarios
    storage space overhead for the internal FS data
    comparison of available FS tweaks that may affect the 3 previous points

    and for the "scenarios" you may want to test:
    creating a large number of files and directories
    removing a large number of files and directories
    renaming or changing metadata for a large number of files and directories
    appending or removing data in existing files
    traversing a large directory tree
    So in the end, if I/O performance numbers do not differ very much, it doesn't mean that the filesystems will show similar performance in different real-world scenarios not related to simply reading or writing files
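    One of the metadata-heavy scenarios listed above ('creating/removing a large number of files') can be timed with a few lines of shell. A sketch -- TARGET and N are assumptions; point TARGET at a mount of the filesystem under test:

```shell
#!/bin/sh
# Time creating and then removing many small files -- one of the
# metadata-heavy scenarios listed above. TARGET and N are assumptions.
TARGET="${TARGET:-/tmp/fs-scenario-test}"
N="${N:-1000}"

mkdir -p "$TARGET"
start=$(date +%s)
i=0
while [ "$i" -lt "$N" ]; do
  echo data > "$TARGET/f$i"
  i=$((i + 1))
done
sync
created=$(( $(date +%s) - start ))

start=$(date +%s)
rm -rf "$TARGET"
sync
removed=$(( $(date +%s) - start ))

echo "create: ${created}s  remove: ${removed}s"
```

    Run the same script on each filesystem (same card, same board) and compare the wall-clock times rather than raw iozone numbers.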
  7. Like
    manuti reacted to tkaiser in F2FS revisited   
    From left to right: Good, average and crap. The numbers combined below:
                                         Good           Average          Crap
                                      F2FS   EXT4    F2FS   EXT4    F2FS   EXT4
    iozone 4K random write IOPS        746    764     208    196      38     33
    iozone 4K random read IOPS        1616   1568    1806   1842    1454   1461
    iozone 1MB sequential write KB/s 20354  21988    7161  10318    8644   9506
    iozone 1MB sequential read KB/s  22474  22548   22429  22476   15638  14834
    ioping k iops                     1.57   1.55    1.42   1.31    1.06   1.19
    fio write iops                     311    399     128    132      31     28
    fio read iops                      934   1197     384    395      93     85
    OMV installation time in sec       862    791     886    943    3729   1480

    If I look at these numbers I immediately stop thinking about filesystems and start to focus on hardware instead. Obviously avoiding crappy cards and choosing good or at least average ones helps a lot with performance.
    Interestingly, a task like the OMV installation that does a lot of disk IO is somewhat affected by storage (random) performance, but not that much. Installation duration with the good and the average SD card is more or less the same. Then we should keep in mind, when looking at sequential speeds, that most if not all SBCs are currently limited to ~23MB/s here anyway (the ASUS Tinker Board being one of the rare exceptions). So it's useless to get an SD card rated for 90MB/s since it will be bottlenecked by the interface anyway.
    Still, random IO performance seems to be the most interesting number to focus on, and I would assume the cards showing the best performance/price ratio are still Samsung's EVO/EVO+ with 32GB or 64GB (I can't test right now since all the Samsungs I own are busy in installations).
    One final word regarding F2FS 'real-world' performance. This filesystem comes from an Android vendor, and maybe the situation on Android, where apps constantly issue fsync calls, is really different compared to ext4. But for our use cases this doesn't matter that much (especially keeping in mind that Armbian ships with ext4 defaults that try to minimize writes to the SD card -- for some details regarding this you might want to read through the article and especially the comments here:
  8. Like
    manuti reacted to tkaiser in F2FS revisited   
    Since I've been asked recently why Armbian doesn't ship with F2FS by default, I thought let's have a look again at what there is to gain from F2FS (a filesystem specially designed for use with flash media by Samsung a few years ago). For the history of the F2FS discussion please use the search function:
    Armbian has fully supported F2FS partitions for a long time now, but we don't provide OS images with F2FS for a couple of reasons:
    F2FS still doesn't support resizing, so our default installation variant (ship with a minimized rootfs that gets resized automagically on first boot to the size of the SD card / eMMC) wouldn't work
    F2FS requires a pretty recent kernel and is therefore not an option for default images (since most downloads use the legacy kernel)
    Unfortunately, the installations that would benefit the most from faster random IO (writes) are those using the most outdated kernels (Allwinner A20/H3, used by many people as a 'desktop linux' where performance heavily depends on fast random writes)
    To use F2FS with Armbian you need to choose a SoC family that is supported by the mainline kernel (or at least 4.4 LTS) and then build an image yourself using these options (choosing $FIXED_IMAGE_SIZE to be less than the capacity of the final installation media!):
    ROOTFS_TYPE=f2fs FIXED_IMAGE_SIZE=n

    In the past I tested this many times and my tests never showed performance improvements great enough to justify another installation variant, but... let's have a look again and focus on the claims first. F2FS was invented to solve certain problems with flash-based media and promises better performance and higher longevity. Measuring the latter is somewhat questionable, but performance can be tested. So I decided to pick 3 different SD cards that roughly represent the kinds of flash media people out there use:
    Crap: An Intenso 4GB Class 4 SD card
    Average: A random SanDisk 8GB card
    Superior: An expensive SanDisk Extreme Plus with 16GB
    For the tests I used an OrangePi One with kernel 4.10 and a Debian Jessie variant to be able to use a scripted OpenMediaVault installation as some sort of a real-world benchmark (the script is roughly based on the results of our OMV PoC). The other 3 benchmarks are our usual iozone3 call, then ioping and a mixed workload measured with fio. Test script is this:
    First results with an average SanDisk 8 GB card (debug output with F2FS and with EXT4):
                                      F2FS   EXT4
    iozone 4K random write IOPS        208    196
    iozone 4K random read IOPS        1806   1842
    iozone 1MB sequential write KB/s  7161  10318
    iozone 1MB sequential read KB/s  22429  22476
    ioping k iops                     1.42   1.31
    fio write iops                     128    132
    fio read iops                      384    395
    OMV installation time in sec       886    943

    I consider benchmark numbers that vary by less than 10% as identical, and then it's easy: ext4 outperforms F2FS, since all results are identical except sequential writes, which are ~40% faster with ext4. Test results in detail: F2FS and EXT4. I'm really not impressed by any differences -- only two things are interesting: faster sequential writes with ext4, but very low random IO write performance at 16K blocksize (that's something we noticed with a lot of SD cards already, see the first post in the 'SD card performance' thread).
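    The 'less than 10% counts as identical' rule is easy to apply mechanically; a small sketch (the pct_diff helper is my own invention) that prints the difference between two results as a percentage of their mean:

```shell
# Difference between two benchmark numbers as a percentage of their
# mean -- the informal "less than 10% counts as identical" rule above.
pct_diff() {
  awk -v a="$1" -v b="$2" \
    'BEGIN { d = (a > b ? a - b : b - a); printf "%.1f\n", 100 * d / ((a + b) / 2) }'
}

pct_diff 208 196      # 4K random write IOPS: prints 5.9 -> identical
pct_diff 7161 10318   # 1MB sequential write KB/s: prints 36.1 -> real difference
```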
    At the moment I'm not that impressed by the performance gains (but that might change later when the crappy 4GB card has finished) and just want to point out that there are other criteria too for choosing a filesystem for systems that run with a high potential for bit flips (due to users using crappy chargers, bad DRAM clock settings when not using Armbian, and so on). Just to give an idea, please read through the PDF linked here: (ext4 proved more than 1,000 times more reliable than F2FS when running the AFL test against them)
    BTW: mkfs.f2fs info at image creation time (no idea whether something could be 'tuned' here):
    Info: Debug level = 0
    Info: Label =
    Info: Segments per section = 1
    Info: Sections per zone = 1
    Info: Trim is enabled
    Info: sector size = 512
    Info: total sectors = 7649280 (3735 MB)
    Info: zone aligned segment0 blkaddr: 512
    Info: format version with "Linux version 4.4.0-72-generic (buildd@lcy01-17) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.4) ) #93-Ubuntu SMP Fri Mar 31 14:07:41 UTC 2017"
    Info: Discarding device
    Info: Discarded 7649280 sectors
    Info: Overprovision ratio = 3.300%
    Info: Overprovision segments = 126 (GC reserved = 68)
    Info: format successful
  9. Like
    manuti reacted to tkaiser in OPi One acpi not working?
  10. Like
    manuti reacted to martinayotte in OPi One acpi not working?   
    acpid needs to be configured to take action, such as triggering external scripts for shutdown.
    In the meantime, try it with "evtest /dev/input/event0"; I've done that, and events are coming from PL3, where the button is connected.
    EDIT: Be aware that you will only be able to shut down the kernel; since the OPi One doesn't have any PMIC, it won't shut down the power.
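    For reference, configuring acpid to act on the button comes down to an event/action file pair. A sketch, written against a scratch prefix so it can be dry-run (on the board the files go under /etc/acpi directly, as root, and the event pattern may need adjusting per board):

```shell
#!/bin/sh
# Minimal acpid handler sketch: an event file matching the power-button
# event plus the action to run. prefix is a scratch directory here; on a
# real board you would write to /etc/acpi/events as root.
prefix=$(mktemp -d)
mkdir -p "$prefix/etc/acpi/events"

cat > "$prefix/etc/acpi/events/powerbtn" <<'EOF'
event=button/power.*
action=/sbin/shutdown -h now
EOF
```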
  11. Like
    manuti reacted to tkaiser in [preview] Generate OMV images for SBC with Armbian   
    And another update wrt 'active benchmarking': not collecting meaningless numbers, but looking at benchmark numbers with the sole aim of understanding what's happening and improving those numbers.
    In the meantime I gave the OPi Zero with its NAS Expansion board and an el cheapo RTL8153 GbE dongle another try. As reference, the numbers from a few days ago: 21.6/26 MB/s (write/read) vs. 33/34 MB/s now. That's a nice improvement of +11 MB/s in the write direction and still 8 MB/s more in the read direction.


    What's different? The hardware is not (well, almost not).
    We're running with mainline kernel now, which means way better driver quality compared to the smelly Android 3.4.x kernel the legacy H3 images have to use
    This commit increased the CPU clockspeed with mainline from currently 1GHz to 1.2GHz (~1MB/s more NAS throughput in each direction)
    This commit resolved an IRQ collision problem (network and storage IRQs were both processed by cpu0, which caused a CPU bottleneck; this greatly improved write throughput)
    And the simple trick of using a USB enclosure with a JMS567 instead of the JMS578 on the NAS Expansion board is responsible for a few more MB/s since now we're using UASP (improved read performance)
    Regarding the latter, I also consider this a software tweak, since for whatever reason the JMS578 with mainline kernel currently only uses the slower 'Bulk-Only' transport instead of 'USB Attached SCSI'. I'm confident this can be resolved if interested users get active and discuss it on linux-usb. Some 'storage only' performance numbers can be found here as comparison.
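    For the IRQ collision item above: spreading IRQs across cores is done by writing CPU masks to /proc/irq/&lt;n&gt;/smp_affinity. A sketch of the mask math (the IRQ numbers in the comments are made up -- check /proc/interrupts on the actual board):

```shell
# Helper: hex affinity mask that pins an IRQ to a single CPU core
# (what you write to /proc/irq/<n>/smp_affinity).
cpu_mask() { printf '%x\n' $((1 << $1)); }

# On the board (as root), with real IRQ numbers from /proc/interrupts:
#   echo "$(cpu_mask 1)" > /proc/irq/40/smp_affinity   # network -> cpu1
#   echo "$(cpu_mask 2)" > /proc/irq/62/smp_affinity   # storage -> cpu2
```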
    Edit: 'lsusb -v' output for both JMS567 and JMS578 with mainline kernel:
  12. Like
    manuti reacted to MathiasRenner in Docker on armbian!   
    Since there has been official support for Docker on ARM for quite a while now, this thread is "resolved" to me.
    @Igor I suggest updating the docs here: I opened a PR:
  13. Like
    manuti reacted to malvcr in Orange Pi Zero NAS Expansion Board with SATA & mSATA   
    Just as a reference.
    I put the machine with the NAS in the bottom part and the Pi on top (it seems more reasonable this way), and increased the clearance on both sides by some millimeters. It reduced the temperature by around 2 degrees.
    Then I added a nice aluminum heatsink on top of the CPU ... and that gained another 3 degrees while the machine was exposed.
    And ... I remade my LEGO prototype box, with the SSD in the bottom and clear exposed areas at each side of the OPi ... and let the machine work (really doing nothing ... just rpimonitor on) for one and a half hours around 2:00 p.m. in a tropical country.

    .... the result: 63 degrees Celsius ... very near the safety temperature threshold (70).
    Looking around, I found a bigger (around 4 cm wide) 12V fan ... and connected it to the SATA power connector (it even has the right socket). As this is under-voltage, it runs slowly and makes almost no noise or vibration, and the fan area is bigger than that of the 5V one tested before.
    Now the temperature is 42 degrees Celsius ... around 20 degrees less than without the fan. As can be seen in the old data (before 8:00), this temperature is a little lower than the one with the small JET fan.
    What I see is that the OPi is, by itself, a hot machine. When you add the NAS extension and put everything inside an enclosed box, that doesn't help very much to keep the assembly cool.
    Another test would be to put a much bigger heatsink on the machine, maybe covering the whole OPi Zero square (this is not a strange solution; the Parallella 16 does that). But for now, it seems that I really need active cooling.
    Here is the LEGO (and clones) box with the bigger fan.

    I will check the VDD_CPUX to see how it changes the scenario.
  14. Like
    manuti reacted to angyalr in Orange Pi Lite doesn't connect to WiFi networks   
    In this topic 
    The MAC address changes after each reboot. You need to write a permanent MAC address.
    sudo nano /etc/modules-load.d/modules.conf
    Remove the # in front of the 8189fs line
    Ctrl+O, Enter, Ctrl+X
    sudo nano /etc/modprobe.d/8189fs.conf
    options 8189fs rtw_initmac=00:e0:4c:f5:16:d6
    Ctrl+O, Enter, Ctrl+X
    It works for me.
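    The two nano edits above can also be scripted; a sed sketch, shown against scratch copies so it can be dry-run (on the board the targets are the real files under /etc):

```shell
#!/bin/sh
# Scripted version of the two edits above, against scratch copies.
# On the board, point $modules / $modprobe at the real /etc paths.
modules=$(mktemp)    # stand-in for /etc/modules-load.d/modules.conf
printf '#8189fs\n' > "$modules"

# Uncomment the 8189fs line so the module loads at boot
sed -i 's/^#8189fs/8189fs/' "$modules"

modprobe=$(mktemp)   # stand-in for /etc/modprobe.d/8189fs.conf
printf 'options 8189fs rtw_initmac=00:e0:4c:f5:16:d6\n' > "$modprobe"
```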
  15. Like
    manuti reacted to Igor in More proper testing - better Armbian experience
    - with some JavaScript to make the Donate page a DIV change on download click
    - added a graph for support level, which we will define, for example, this way:
  16. Like
    manuti reacted to malvcr in Orange Pi Zero NAS Expansion Board with SATA & mSATA   
    Well ... I have a msata connected to the NAS device ... and these are the numbers:
    Now, with a real mSATA.
    root@orangepizero:~# hdparm -Tt /dev/sda1
    /dev/sda1:
     Timing cached reads:   684 MB in  2.00 seconds = 341.73 MB/sec
     Timing buffered disk reads:  92 MB in  3.02 seconds = 30.47 MB/sec
    As can be seen, it is a little slower than the WD Blue hard disk with my custom cable, but near enough to be a comparable speed.
    The disk is a V-NAND SSD 850 EVO 250GB mSATA from SAMSUNG (i.e. a good device).
    Now, the two main reasons this setup is a good one:
    1) No spin-up electricity consumption. I couldn't do very detailed checking (I don't have better measuring tools), but with a USB power tester the machine's maximum consumption was around 0.80 A from start-up, including the hdparm test.
    2) Look at it (picture below) ... no SATA cables, extremely compact.
    About my configuration:
    This setup also has a SanDisk Cruzer Fit USB 3.0 8GB disk (the short one, on the USB port that is not shared with the mSATA device). This is for swap and other temporary stuff, so I don't need to degrade the OS SD card by constantly rewriting there. And a 3A power supply.
    And it is important to add some screws to hold the mSATA storage card. When you insert it, the card remains at 45 degrees from the NAS device plane, so you must push it down and keep it there with "something" ... in this case, two nylon screws with a nut between the mSATA and the NAS, and a short standoff below.
    I still need to add a 5V fan and the case, and then my setup is ready to go into production (with my particular information system there).
  17. Like
    manuti reacted to Igor in How I boot my Orange Pi PC from usb?   
    That's another story, and it's possible. Plug in a USB drive and check the documentation.
  18. Like
    manuti got a reaction from cfaulkingham in RK3328 Kernel   
    Really impressive work for a "one man army". Much appreciate your effort.
  19. Like
    manuti reacted to tkaiser in Some storage benchmarks on SBCs   
    One final update regarding the Roseapple Pi (using the Actions Semi S500, just like the LeMaker Guitar or the announced Cubieboard 6). Since I booted the board one last time anyway, I thought let's also give USB3 there one last try. I connected a Samsung PM851 in a JMS567 enclosure (with its own power supply!) to the USB3 port and had a look with the most recent 3.10.105 kernel:
    root@roseapple:~# lsusb
    Bus 002 Device 002: ID 152d:3562 JMicron Technology Corp. / JMicron USA Technology Corp.
    Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    root@roseapple:~# lsusb -t
    /: Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci-hcd/1p, 5000M
        |__ Port 1: Dev 2, If 0, Class=Mass Storage, Driver=uas, 5000M
    /: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=xhci-hcd/1p, 480M
    That looks nice since UAS seems to be usable. Let's give it a try with the 2 iozone calls from the Clearfog measurements above:
    iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2

                                                  random    random
        kB  reclen    write  rewrite     read   reread      read     write
    102400       4    13525    16451    19141    24275     14287     16492
    102400      16    39343    48649    56409    63777     40203     45654
    102400     512    68873    75835    89871   102977     98620     94677
    102400    1024   115288   111747   170742   176837    172585    104936
    102400   16384   117025   105977   195316   196457    196582    117819

    iozone -a -g 4000m -s 4000m -i 0 -i 1 -r 4K -r 1024K

         kB  reclen    write  rewrite     read   reread
    4096000       4   124421   132386   134795   134760
    4096000    1024   127135   134943   127559   128026

    If you compare with the PM851 numbers made with the Clearfog above, it's obvious that the S500 numbers are not that great. And since the S500 features only Fast Ethernet, at least for NAS use cases sequential transfer speeds are irrelevant anyway. I then tried to use an external VIA812 USB3 hub with integrated RTL8153 Gigabit Ethernet, but this only led to error messages, /dev/sda1 disappearing, and the board failing to boot afterwards. Fortunately this Roseapple Pi (formerly, more correctly, called Lemon Pi) has never been sold. There exist just a few dev/review samples that were sent out around the globe.
    Maybe the above numbers help some future Cubieboard 6 owners who got tricked into believing the CB6 would have 'real SATA'. Funnily enough, USB-SATA on Cubieboard 6 will be much faster than on older Cubieboards (using the A20's 'real SATA' or the horrible GL830 USB-to-SATA bridge on Cubieboard 5), but for most use cases this won't help much since there's only Fast Ethernet on the board. So even when adding an RTL8153 Gigabit Ethernet dongle to one of the 2 USB2 ports, 'NAS performance' won't exceed that of Cubieboard 3 (the so-called Cubietruck)
  20. Like
    manuti reacted to Igor in Browsers with support HTML 5 in Armbian   
    There are many ways to watch accelerated YouTube videos on a stock Armbian legacy install. Just not within a browser.
    youtube-dl, mpv, ...
    You don't need any Windows-based download tool.

    It's to avoid Android on those boards since it's crappy and unsupported. Luckily we have a well maintained OpenELEC fork if you seek a multimedia OS.
  21. Like
    manuti reacted to Igor in More proper testing - better Armbian experience   
    Another option - as clean as possible.
  22. Like
    manuti reacted to tkaiser in h3disp: change display settings on H3 devices   
    Currently h3disp and h3consumption won't be updated automagically; you would have to update them manually:
    sudo su -
    wget -O - >/usr/local/bin/h3disp
    A long term solution would be to add both utilities to the board support packages.
  23. Like
    manuti reacted to tkaiser in [preview] Generate OMV images for SBC with Armbian   
    Another take on the 'cheap ARM thingie running OMV' approach:

    So what about taking 3 things for $7 each and combining them into a compact DIY NAS?
    Orange Pi Zero (with 256MB DRAM)
    NAS Expansion board
    An external USB3 Gigabit Ethernet dongle

    The above setup uses the 'Orange Pi Zero Plus 2', which doesn't even have Ethernet (only el cheapo AP6212 Wi-Fi); I used it just because I'm working on this board anyway at the moment (still waiting for Xunlong to release a Gigabit Ethernet equipped Zero variant just like the NanoPi NEO 2 -- then it gets interesting).
    With the above setup you can add an mSATA SSD (maybe not that smart behind a USB2 connection, which is the reason I used a passive mSATA-to-SATA converter to connect an external 3.5" HDD; they're as cheap as $1.5). The other alternative is to attach a 2.5" HDD/SSD (maybe using Xunlong's SATA+power cable), but then it's mandatory to power the whole setup through the 4.1/1.7mm barrel plug that can be seen on the right.
    Anyway, back to the software side of things: in Armbian's special OMV installation variant we support the above setup, so even boards without any Ethernet can simply be used together with one of the two USB3 Gigabit Ethernet dongles available (there's only ASIX AX88179 or RealTek RTL8153). We activate both drivers in /etc/modules and deactivate those not attached on first boot.
    So you could also run this OMV installation variant on H3 boards lacking Ethernet or with only Fast Ethernet, and get some extra speed by attaching such a USB3 GbE dongle on first boot and from then on just when needed (for example, you could use an Orange Pi PC this way for hourly incremental networked backups over Fast Ethernet, where 'NAS performance' is only needed for large restores or 'disaster recovery' -- then you attach the GbE dongle and performance is 2-3 times higher).
    Unfortunately I ran into 2 problems when trying out the only reasonable variant: OMV running with mainline kernel. First, for whatever reason Armbian's build system currently seems somewhat broken regarding H3 boards (should also affect nightly builds): cpufreq/DVFS doesn't seem to work, and currently H3 boards with recent mainline builds are slow as hell. The 2nd problem is that on the NAS Expansion board, disks attached to either of the two JMS578 USB-to-SATA bridges don't make use of UAS (needs investigation).
    Since I'm currently limited to USB's Mass Storage protocol anyway, I used the legacy kernel to create an OMV image. Performance is not really stellar, but ok-ish for a board without any Ethernet:

    I also tried one of my (older) JMS567 USB-to-SATA bridges (main difference: no TRIM support, unlike the JMS578 on the NAS Expansion board) and the numbers look slightly better (this also needs some investigation):

    I won't publish OS images for H3/H5 boards using the legacy kernel now, since patience is the better option. Once we're able to use mainline kernel on these boards, performance will get a slight boost and hopefully we can then also make use of UAS on the NAS Expansion board. Why H3/H5? Because they have up to 4 independent real USB host ports that do not have to share bandwidth (compare with the RPi 3 above -- no way to exceed poor Fast Ethernet performance there with a similar looking setup since the RPi's USB receptacles have to share bandwidth)
    Interesting detail: the two USB type A receptacles have higher priority than the 2 JMS578 chips on the board. If an OPi Zero is connected to the NAS board, then as soon as you connect a USB peripheral to one of the two USB ports the respective JMS578 disappears from the bus (the left receptacle interacts with the mSATA, the right one with the normal SATA connector).
  24. Like
    manuti reacted to martinayotte in Orange Pi Zero, Python GPIO Library   
    For all my Oranges, I'm using
    Be aware that on the Pi Zero the STATUS_LED is not on PA15 but on PA17, so you will need to tweak mapping.h to change that.
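    Assuming the library keeps its pin names in mapping.h as described, the PA15 -> PA17 tweak can be done with sed. A sketch against a scratch copy (the #define line format here is a guess; check the real header):

```shell
#!/bin/sh
# Repoint STATUS_LED from PA15 to PA17 in a scratch copy of mapping.h.
# The real header lives in the GPIO library's source tree, and its
# exact line format may differ from this guessed one.
header=$(mktemp)
printf '#define STATUS_LED PA15\n' > "$header"

sed -i 's/\(STATUS_LED[[:space:]]*\)PA15/\1PA17/' "$header"
```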
  25. Like
    manuti reacted to tkaiser in [preview] Generate OMV images for SBC with Armbian   
    In the meantime I added a lot of tweaks to Armbian's OMV installation routine and started to create OMV images from scratch for the more interesting boards (based on the most recent OMV 3.0.70 and Armbian 5.27 releases):
    I stay with the legacy kernel where appropriate (that's currently the Clearfogs, ODROID-C1/C2 and A64) and skip both H3 and H5 boards for now since there is too much WiP at the moment. Some of the tweaks lead to much better performance numbers even if we can't use 'USB Attached SCSI' with Pine64 for example:

    Sequential write performance looks way better compared to the Pine64 numbers in post #2 above (made with mainline kernel and UAS, but also with cpufreq limited to just 864 MHz back then and no IO scheduler tweaks).
    The nice thing with these OMV images now is that they use already the following repos:
    Official Debian Jessie repos (here you get updates and security fixes for normal packages)
    OMV (OMV and OMV Extras updates)
    Armbian (kernel/bootloader updates and also potential performance tweaks)

    In the screenshot it's obvious that read performance dropped to just 32MB/s in one run (check the orange triangles). I am currently investigating this, and if it's fixable with better settings applied through Armbian's prepare_board mechanism at startup, then the fix will magically find all installations in the future since it will be applied through the usual 'apt upgrade'.