neomanic

Members
  • Posts: 28
  • Joined
  • Last visited

Reputation Activity

  1. Like
    neomanic reacted to chradev in LIME2-eMMC application in Ground Penetrating Radar system   
    Hi to All,
     
    I am proud to announce that our effort to embed Olimex's Lime2-eMMC board in an Application Controller for Ground Penetrating Radar (GPR) systems has successfully reached its final phase.
    Our product EGPR (Easy Ground Penetrating Radar) is ready for demo and field tests.
     
    Everybody can take a look at our just-finished web site.
    There is a link to the real application on the Demo page.
     
    We would like to express our gratitude to the Armbian project, its whole team, and everybody here for the great effort and help.
     
    Best regards
     
    EGPR team
     


  2. Like
    neomanic reacted to tkaiser in Armbian running on Pine64 (and other A64/H5 devices)   
    For you to get the idea why you won't receive any more comments from me: I'll remain quiet from now on since it's a waste of time.
     
    We're talking here not about the hobbyist's point of view but about professional use cases. If I want to roll out a few hundred OPi Zero, a pre-populated SPI NOR flash with u-boot able to boot from the network makes a huge difference regarding TCO!
     
    Is it really that hard to understand why a device specific boot loader that's
     
    a) already there
    b) can boot from everywhere (SD card, USB stick, USB disk, SATA on A10/A20/R40 devices, network)
    c) can boot device-agnostic (universal!) OS images
    d) assists the inexperienced along the way
     
    is simply a great idea, as long as the vendor's increase in costs is marginal and doesn't hurt?
  3. Like
    neomanic reacted to chradev in Banana Pi USB OTG   
    Hi to All,
    It is proven to work on kernel 4.7.6 and can be tested without a kernel rebuild using the following procedure:
    cp /boot/dtb/sun7i-a20-olinuxino-lime2-emmc.dtb .
    dtc -I dtb -O dts -o sun7i-a20-olinuxino-lime2-emmc.dts sun7i-a20-olinuxino-lime2-emmc.dtb
    nano sun7i-a20-olinuxino-lime2-emmc.dts
    Change and remove to fit the following:
     
     
    where the original stuff is:
     
     
    and the difference is:
    root@egpr:~# diff sun7i-a20-olinuxino-lime2-emmc.dts.orig sun7i-a20-olinuxino-lime2-emmc.dts
    830c830
    < dr_mode = "otg";
    ---
    > dr_mode = "host";
    845,848d844
    < pinctrl-names = "default";
    < pinctrl-0 = <0x29 0x2a>;
    < usb0_id_det-gpio = <0x21 0x7 0x4 0x0>;
    < usb0_vbus_det-gpio = <0x21 0x7 0x5 0x0>;
    Re-compile the device tree, copy it back to /boot/dtb/ and reboot:
    dtc -I dts -O dtb -o sun7i-a20-olinuxino-lime2-emmc.dtb sun7i-a20-olinuxino-lime2-emmc.dts
    cp sun7i-a20-olinuxino-lime2-emmc.dtb /boot/dtb/
    reboot
    A USB device can then be connected to the OTG port via a USB OTG cable and will be registered in the system.
    You will see musb messages in 'dmesg' and a corresponding Bus/Device entry in the 'lsusb' printout.
     
    The procedure can be applied to other boards using the corresponding DT file.
    Some of the stuff may differ from the Lime2-eMMC one above.
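    As a quick sanity check once the reworked DT is in place, something along these lines should confirm the attached device was enumerated (a minimal sketch; the grep filter is just an example):
    # musb driver messages for the OTG controller and the newly attached device
    dmesg | grep -i musb
    # the device should also appear as a Bus/Device entry here
    lsusb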
     
    Best regards
    Chris
  4. Like
    neomanic reacted to zador.blood.stained in Using Docker to host Armbian builds   
    Building with Docker should not be much different from the standard setup, but we could use more testing, polishing and enhancements for the build process.
     
    This includes:
    - testing and providing logs for a full image build, to check whether the current workarounds for loop devices are working or need adjustments
    - enhancing the Dockerfile if there are any options that may make the default experience better
    - testing whether the Docker implementation on Windows works fine for full image builds
    A sketch of such a test run follows below.
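    For reference, a full image build inside Docker could be exercised roughly like this (a minimal sketch assuming the armbian-build image from the Dockerfile quoted further down this page; the board, branch and release values are only examples):
    # build the container image from the build-script checkout
    docker build -t armbian-build:clean lib/
    # loop devices require a privileged container; run a complete image build
    docker run -it --privileged armbian-build:clean \
        ./compile.sh BOARD=lime2 BRANCH=next RELEASE=xenial KERNEL_ONLY=no PROGRESS_DISPLAY=plain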
  5. Like
    neomanic reacted to chradev in Armbian customization   
    Hi to All,
    After discussion with Olliver Schinagl, the author of the patch that adds lime2-emmc as a new board to kernel 4.7, I have tested and found that the problem can be overcome by removing the stuff around mmc2_pwrseq and PC16 using the following patch:
    index 5ea4915..7e6b703 100644
    --- a/arch/arm/boot/dts/sun7i-a20-olinuxino-lime2-emmc.dts
    +++ b/arch/arm/boot/dts/sun7i-a20-olinuxino-lime2-emmc.dts
    @@ -46,22 +46,6 @@
     / {
     	model = "Olimex A20-OLinuXino-LIME2-eMMC";
     	compatible = "olimex,a20-olinuxino-lime2-emmc", "allwinner,sun7i-a20";
    -
    -	mmc2_pwrseq: pwrseq {
    -		pinctrl-0 = <&mmc2_pins_nrst>;
    -		pinctrl-names = "default";
    -		compatible = "mmc-pwrseq-emmc";
    -		reset-gpios = <&pio 2 16 GPIO_ACTIVE_LOW>;
    -	};
    -};
    -
    -&pio {
    -	mmc2_pins_nrst: mmc2@0 {
    -		allwinner,pins = "PC16";
    -		allwinner,function = "gpio_out";
    -		allwinner,drive = <SUN4I_PINCTRL_10_MA>;
    -		allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
    -	};
     };
     
     &mmc2 {
    @@ -71,7 +55,6 @@
     	vqmmc-supply = <&reg_vcc3v3>;
     	bus-width = <4>;
     	non-removable;
    -	mmc-pwrseq = <&mmc2_pwrseq>;
     	status = "okay";
     
     	emmc: emmc@0 {
    As tested for months (with my patches for older kernel versions and the 'lime2' config), the system works fine without the above stuff, probably because the eMMC chip performs an internal reset when powered.
     
    If the above stuff is present, the system performs a software reset using pin PC16, but the reset procedure takes too long and that is the reason for the observed delay and errors.

    If the system is booted from eMMC, the delay caused by the reset sequence makes the device busy at the moment when the rootfs has to be mounted. That is why the boot process stops.
     
    In my opinion, even with the observed delay the system should continue with rootfs mounting once the device becomes ready, but this is another issue or maybe a limitation.
     
    Unfortunately, Olliver has no time to work on this issue at the moment, but maybe somebody else experienced in that field can help.
     
    Best regards
    Chris
     
  6. Like
    neomanic got a reaction from tkaiser in Lime 2 goes to sleep/offline   
    Sorry, I missed this over the weekend when I was answering from home rather than work, and across both the forum and the pull request. Not sure if it's still of use to you, but here are the 384 MHz numbers:
     
     
     
  7. Like
    neomanic reacted to tkaiser in Some storage benchmarks on SBCs   
    Since I've seen some really weird disk/IO benchmarks made on SBCs over the last few days, and both a new SBC and a new SSD arrived in the meantime, I thought let's give it a try with a slightly better test setup.
    I tested with 4 different SoCs/SBCs: NanoPi M3 with an octa-core S5P6818 Samsung/Nexell SoC, ODROID-C2 featuring a quad-core Amlogic S905, Orange Pi PC with a quad-core Allwinner H3, and an old Banana Pi Pro with a dual-core A20. The device considered the slowest (dual-core A20 with just 960 MHz clockspeed) is in reality the fastest when it's about disk IO.

    Since most if not all storage 'benchmarks' for SBCs moronically focus on sequential transfer speeds only and completely forget that random IO is way more important on any SBC (it's not a digital camera or video recorder!), I tested this also. Since it's also somewhat moronic to choose a limited disk when you want to test the storage implementation of a computer, the main test device is a brand new Samsung SSD 750 EVO 120GB. I first tested on a PC whether the SSD is OK and to get a baseline of what to expect.

    Since NanoPi M3, ODROID-C2 and Orange Pi PC only feature USB 2.0, I tested with 2 different USB enclosures that are known to be USB Attached SCSI (UAS) capable. The nice thing with UAS is that while it's an optional USB feature that came together with USB 3.0, we can use it with more recent sunxi SoCs also when running mainline kernel (A20, H3, A64 -- all only USB 2.0 capable).

    When clicking on the link you can also see how different USB enclosures (to be more precise: the included USB-to-SATA bridges) perform. Keep that in mind when you see 'disk performance' numbers somewhere and people write SBC A would be 2MB/s faster than SBC B -- the variation in numbers is not only down to the SBC but for sure also influenced by both the disk used and the enclosure / USB-SATA bridge inside! The same applies to the kernel the SBC is running. So never trust any numbers you find on the internet that are the results of tests at different times, with different disks or different enclosures. Such numbers are just BS.

    The two enclosures I tested with are equipped with a JMicron JMS567 and an ASMedia ASM1153. With sunxi SBCs running mainline kernel UAS will be used; with other SoCs/SBCs or legacy kernels it will be USB Mass Storage instead. Banana Pi Pro is an exception since its SoC features true SATA (with limited sequential write speeds) which will outperform every USB implementation. And with this device I also used a rather fast SD card as well as a normal HDD connected to USB through a non-UASP-capable disk enclosure, to show how badly this affects the important performance factors (again: random IO!)

    I used iozone with 3 different runs (a sketch of the iozone invocation follows at the end of this post):
    - 1 MB test size with 1k, 2k and 4k record sizes
    - 100 MB test size with 4k, 16k, 512k, 1024k and 16384k (16 MB) record sizes
    - 4 GB test size with 4k and 1024k record sizes
    The variation in results is interesting. If 4K results between 1 and 100 MB test size differ, you know that your benchmark is not testing disk throughput but instead the (pretty small) disk cache. Using 4GB for sequential transfer speeds ensures that the whole amount of data exceeds DRAM size.

    The results:

    NanoPi M3 @ 1400 MHz / 3.4.39-s5p6818 / jessie / USB Mass Storage:
    Sequential transfer speeds with USB: 30MB/s with 1MB record size and just 7.5MB/s at 4K/100MB, lowest random IO numbers of all. All USB ports are behind a USB hub and it's already known that performance on the USB OTG port is higher. Unfortunately my SSD with both enclosures prevented negotiating a USB connection on the OTG port, since each time I connected the SSD the following happened: WARN::dwc_otg_hcd_hub_control:2544: Overcurrent change detected

    ODROID-C2 @ 1536 MHz / 3.14.74-odroidc2 / xenial / USB Mass Storage:
    Sequential transfer speeds with USB: ~39MB/s with 1MB record size and ~10.5MB/s at 4K/100MB. All USB ports are behind a USB hub and the performance numbers look like there's always some buffering involved (not a true disk test but kernel caches partially involved).

    Orange Pi PC @ 1296 MHz / 4.7.2-sun8i / xenial / UAS:
    Sequential transfer speeds with USB: ~40MB/s with 1MB record size and ~9MB/s at 4K/100MB, best random IO with very small files. All USB ports are independent (just like on Orange Pi Plus 2E, where identical results will be achieved since it's the same SoC and the same settings when running Armbian).

    Banana Pi Pro @ 960 MHz / 4.6.3-sunxi / xenial / SATA-SSD vs. USB-HDD:
    This test setup is totally different since the SSD is connected through SATA, and I use a normal HDD in a UAS-incapable disk enclosure to show how huge the performance differences are. SATA sequential transfer speeds are unbalanced for still unknown reasons: write/read ~40/170MB/s with 1MB record size, 16/44MB/s with 4K/100MB (that's huge compared to all the USB numbers above!). Best random IO numbers (magnitudes faster since no USB-to-SATA bottleneck as with every USB disk is present). The HDD test shows the worst numbers: just 29MB/s sequential speed at 1MB record size and only ~5MB/s with 4K/100MB. Also the huge difference between the tests with 1MB vs. 100MB data size at 4K record size shows clearly that with 1MB test size only the HDD's internal DRAM cache has been tested (no disk involved): this was not a disk test but a disk cache test only.

    Lessons to learn?
    - HDDs are slow. So slow that they are the bottleneck and invalidate every performance test when you want to test the performance of the host (the SBC in question).
    - With HDDs data size matters, since you get different results depending on whether the benchmark runs inside the HDD's internal caches or not. SSDs behave differently here since they do not contain ultra-slow rotating platters, but their different types of internal storage (DRAM cache and flash) do not perform that differently.
    - When you have both USB and SATA, not using the latter is almost always simply stupid (even if sequential write performance looks identical: sequential read speeds are way higher, and random IO will always be superior, which is more important).
    - It always depends on the use case in question. Imagine you want to set up a lightweight web server dealing with static content on an SBC that features only USB. Most of the accessed files are rather small, especially when you configure your web server to deliver all content already pre-compressed. So if you compare random reads with 4k and 16k record size and 100MB data size, you'll notice that a good SD card will perform magnitudes faster! For small files (4k) it's ~110 IOPS (447 KB/s) vs. 1950 IOPS (7812 KB/s), so the SD card is ~18 times faster; at 16k it's ~110 IOPS (1716 KB/s) vs. 830 IOPS (13329 KB/s), so the SD card is still 7.5 times faster than the USB disk. File size has to reach 512K before the USB disk performs as well as the SD card! Please note that I used a Samsung Pro 64GB for this test. The cheaper EVO/EVO+ with 32 and 64GB show identical sequential transfer speeds while being a lot faster when it's about random IO with small files. So you save money and get better performance by choosing the cards that look worse by specs!
    - Record size always matters. Most filesystem accesses on an SBC are not large data streams but small chunks of randomly read/written data. Therefore check random IO results with small record sizes since this is what's important, and have a look at the comparison of 1MB vs. 100 MB data size to get an idea of when you're only testing your disk's caches and when your disk in reality.
    - If you compare random IO numbers from crap SD cards (Kingston, noname, Verbatim, noname, PNY, noname, Intenso, noname and so on) with the results above, then even the slow HDD connected through USB can shine. But better SD cards exist, and so do some pretty fast eMMC implementations on some boards (ODROID-C2 being the best performer here). By comparing with the SSD results you get an idea of how to improve performance when your workload depends on it (desktop Linux, web server, database server). Even a simple 'apt-get upgrade', when done after months without upgrades, heavily depends on fast random IO (especially writes).

    So by relying on the usual bullshit benchmarks only showing sequential transfer speeds, an HDD (30 MB/s) and an SD card (23 MB/s) seem to perform nearly identically, while in reality the far more important random IO performance might differ a lot. And this solely depends on the SD card you bought and not on the SBC you use! For many server use cases where small file accesses happen, good SD cards or eMMC will be magnitudes faster than HDDs (again, it's mostly about random IO and not sequential transfer speeds).

    I personally used/tested SD cards that show only 37 KB/s when running the 16K random write test (some cheap Intenso crap). Compared to the same test when combining an A20 with a SATA SSD this is 'just' over 800 times slower (31000 KB/s). Compared to the best performer we currently know (EVO/EVO+ with 32/64GB) this is still 325 times slower (12000 KB/s). And this speed difference (again: random IO) will be responsible for an 'apt-get upgrade' with 200 packages taking hours on the Intenso card while finishing in less than a minute on the SATA disk, and in 2 minutes with the good Samsung cards, given your Internet connection is fast enough.
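    The 100 MB iozone run described above would be invoked roughly like this (a minimal sketch; the flags follow the iozone documentation, but the exact call used for these tests is an assumption):
    # -e: include flush/fsync in timing, -I: use O_DIRECT to bypass the page cache
    # -i 0/1/2: write+rewrite, read+reread, random read+write
    iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2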
  8. Like
    neomanic reacted to tkaiser in Lime 2 goes to sleep/offline   
    FYI:
     
    root@lime2:~#  dd if=/dev/mmcblk0 bs=48K count=1 | strings | grep -i "U-Boot"
    1+0 records in
    1+0 records out
    49152 bytes (49 kB, 48 KiB) copied, 0.00118619 s, 41.4 MB/s
    U-Boot
    U-Boot SPL 2016.05-armbian (Jul 14 2016 - 17:48:17)
    U-Boot 2016.05-armbian for sunxi
    root@lime2:~# uptime
     08:12:55 up 63 days, 13:43,  1 user,  load average: 0.34, 0.17, 0.11
    The date looks like it's a build I did myself; no idea whether I lowered DRAM clockspeeds or not. I did a 48-hour burn-in test back then and moved it to a hosting location. Up and running as usual. In other words: yes, there's something wrong with the current settings Armbian uses. Yes, it would be easy to adjust settings if we were able to get a clue what's going on. Yes, with Olimex's own image there seems to be no problem since they use different settings and don't ever update their images again (3.4.103 -- good luck folks!)
     
    Yes, everything will remain the same since no progress can be made.
  9. Like
    neomanic reacted to tkaiser in Lime 2 goes to sleep/offline   
    With mainline u-boot on A20 the DRAM clockspeed is only defined there and fex settings are ignored. I've no idea why Armbian still uses the 'default' 480 MHz. We asked one professional user to test different DRAM clockspeeds; I even prepared different u-boot .debs to test. This was 5 weeks ago and we never heard anything since then: http://forum.armbian.com/index.php/topic/1853-rfc-images-for-new-boards/?p=14275
     
    Still I don't understand why we allow any A20 board to run at these moronic 480 MHz. But that's obviously just me.
     
    In case you want to give Olimex's u-boot version a try, use our build system and set in userpatches/lib.config:
    BOOTBRANCH="tag:v2015.10"
    I will stop looking into the Lime2 at all now.
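    A minimal sketch of that workflow (assuming the build-script checkout used in the Docker examples further down this page; the board, branch and release values are only placeholders):
    # pin u-boot to the tag Olimex used, then rebuild
    mkdir -p userpatches
    echo 'BOOTBRANCH="tag:v2015.10"' > userpatches/lib.config
    ./compile.sh BOARD=lime2 BRANCH=default RELEASE=xenial KERNEL_ONLY=no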
  10. Like
    neomanic reacted to cabsandy in Lime 2 goes to sleep/offline   
    I bought an Odroid C2; it's been rock solid since I bought it. No going to sleep, no messing about with other images, it.just.works. As do my Pis, up for 30 days and more.
    So I'm not sure if this is a hardware problem or a software problem, but it's getting the heave onto eBay. I've not got the time, nor the energy, to try to make someone else's experiment work.
     
    cheers
     
    cabs
  11. Like
    neomanic reacted to monti in Freezing problem with Olimex A20 Lime 2   
    The ondemand or interactive governor switches the CPU frequency. In addition to the frequency, the voltage is changed too, which may be problematic.
     
    This file may be new; you have to create it (exactly "/etc/default/cpufrequtils"):
    ENABLE="true"
    GOVERNOR="interactive"
    MAX_SPEED="1010000"
    MIN_SPEED="480000"
    It is read in /etc/init.d/cpufrequtils:
    ...
    if [ -f /etc/default/cpufrequtils ] ; then
            . /etc/default/cpufrequtils
    fi
    You can probably also just restart this service script instead of rebooting.
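    A minimal sketch of applying this (assuming the sysvinit script mentioned above accepts restart; the frequency limits are just the example values from the post):
    # create the config read by /etc/init.d/cpufrequtils
    printf 'ENABLE="true"\nGOVERNOR="interactive"\nMAX_SPEED="1010000"\nMIN_SPEED="480000"\n' > /etc/default/cpufrequtils
    # restart the service so the new settings are picked up
    /etc/init.d/cpufrequtils restart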
  12. Like
    neomanic got a reaction from EvgeniK45 in DHCP after "no link" boot   
    I had the same issue today. It's not something I have to worry about since my application doesn't care about hotplug, but perhaps this will help you?
     
    http://askubuntu.com/questions/575487/network-eth0-dhcp-static-ip-and-autonegotiation-issues
  13. Like
    neomanic got a reaction from lanefu in [testing] running Armbian tools with Docker style VM   
    Yes, that's right. I think we're fortunate in that the kernel features required for running Docker overlap with those needed for cross-compiling; I was very pleasantly surprised.
     
    I've found a few mistakes in the above Dockerfile. The mknod commands don't seem to do anything, and apt-cacher-ng isn't running for the compile. So it's probably best to just go as far as the apt-get commands, then do the rest in the actual running container, and then commit the result as a new image. After that, each time you can just run a build (a sketch of that flow is at the end of this post).
     
    I'm now working on folding our application into the build so we can build an image from scratch with just a few commands. But first I'm trying to get this eMMC working on the Lime2 boards... argh!
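    A rough sketch of that run-then-commit flow (the image and tag names are just the examples used in the Dockerfile comments quoted below):
    # run the image that stops after the apt-get layers; finish setup and run the first build inside
    docker run -it --privileged armbian-build:clean
    #   (inside the container) service apt-cacher-ng start && ./compile.sh
    # back on the host, commit the populated container so cached sources and packages are kept
    docker commit $(docker ps -lq) armbian-build:post-build
    # subsequent builds reuse the committed image
    docker run -it --privileged armbian-build:post-build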
  14. Like
    neomanic got a reaction from lanefu in [testing] running Armbian tools with Docker style VM   
    I've had some great success with Docker, both to build Armbian for and to deploy our application on a Lime2.
     
    The catch is, I'm doing this on a Mac, using the latest beta of Docker. The key benefits of this are:
    - a single command to build an entire deployable image,
    - can test the actual compiled ARM application directly before deploying to the embedded board.
     
    Many thanks to lanefu for his comments on the loopback device nodes, which were the final piece of the puzzle.
     
    My current Dockerfile, which adds some extra required packages.
    # Docker build environment for Armbian
    #
    # Build the image:
    #   docker build -t armbian-build:clean lib/
    #
    # To run a container and perform a build:
    #   docker run -it --privileged armbian-build:clean
    #   ./compile.sh
    #
    # Once completed, you will need to commit the build as a new container to save the cached build and downloads,
    # OR uncomment the final line of the script instead.
    #   docker commit `docker ps -lq` armbian-build:post-build
    #
    # Next time run
    #   docker run -it armbian-build:post-build

    FROM ubuntu:16.04

    RUN apt-get update
    RUN apt-get install -qy git build-essential binutils binfmt-support qemu-user-static nano pkg-config ccache ntpdate udev

    # Recreate loopback and partition devices
    RUN rm -rf /dev/loop[0-1]*
    RUN mknod /dev/loop0 b 7 0
    RUN mknod /dev/loop0p1 b 259 0
    RUN mknod /dev/loop0p2 b 259 1
    RUN mknod /dev/loop0p3 b 259 2
    RUN mknod /dev/loop0p4 b 259 3
    RUN mknod /dev/loop1 b 7 1
    RUN mknod /dev/loop1p1 b 259 0
    RUN mknod /dev/loop1p2 b 259 1
    RUN mknod /dev/loop1p3 b 259 2
    RUN mknod /dev/loop1p4 b 259 3

    WORKDIR /root
    RUN git clone --depth 1 https://github.com/igorpecovnik/lib/
    RUN cp lib/compile.sh .

    # Uncomment this line to run the first build and populate the docker image with current sources and packages
    # RUN ./compile.sh BOARD=lime2 BRANCH=next RELEASE=xenial KERNEL_ONLY=no PROGRESS_DISPLAY=plain
  15. Like
    neomanic reacted to lanefu in [testing] running Armbian tools with Docker style VM   
    I fought the good fight over the weekend trying to get the Armbian builder to build full images while running in a Docker container. I've had some success, and managed to build an Armbian 5.11 image for my Orange Pi One. I'm running on it with wifi etc.
     
    I'm using a CentOS 7 host with Docker 1.9.1 and an Ubuntu 14.04 Trusty container for the Armbian builder. (I just used the Dockerfile from the armbian git repo.)
     
    Bottom line: loopback management inside containers is a bad time, especially when losetup implementations vary between host and container. The partprobe step in the container doesn't seem to trigger the appropriate feedback to udev to create partition devices for the /dev loopback devices. I got around it by creating them ahead of time.
     
    Here are a few tricks to limp through. I'll try to add more clarity later.
     
    1. Don't. Just go build an Ubuntu 14.04 VM or use a turnkey one on Amazon, and get your life back.
     
     
    From the Docker host:
    # you need these modules
    modprobe binfmt_misc
    modprobe loop
    # scrub loopbacks for good luck
    rm -rf /dev/loop[0-9]*

    Launch the armbian builder container with --privileged=true:
    # sudo docker run --privileged=true --name=armbian -it armbianbuilder
    # or, if you're re-using the container
    # sudo docker start -i armbian

    From the container:
    # start apt-cacher-ng
    service apt-cacher-ng start
    # scrub loopbacks again in the container for good luck
    rm -rf /dev/loop[0-9]*
    # create all your loopback devices
    mknod /dev/loop0 b 7 0
    mknod /dev/loop0p1 b 259 0
    mknod /dev/loop0p2 b 259 1
    mknod /dev/loop0p3 b 259 2
    mknod /dev/loop0p4 b 259 3
    mknod /dev/loop1 b 7 1
    mknod /dev/loop1p1 b 259 0
    mknod /dev/loop1p2 b 259 1
    mknod /dev/loop1p3 b 259 2
    mknod /dev/loop1p4 b 259 3
    mknod /dev/loop2 b 7 2
    mknod /dev/loop2p1 b 259 0
    mknod /dev/loop2p2 b 259 1
    mknod /dev/loop2p3 b 259 2
    mknod /dev/loop2p4 b 259 3
    # build your dreams
    cd ~
    ./compile.sh

    Every time you build an image you should destroy the loopbacks and recreate them. It's your best chance of getting the partitions in the loopback image to behave for mounting.
     
     
         