TonyMac32

  • Posts

    2344
  • Joined

  • Last visited

Reputation Activity

  1. Like
    TonyMac32 got a reaction from rock64user in Make forum messages friendlier -- 2021 Edition!   
    The "Invalid" tag is cancer.  That needs to be "Off-topic", "Out of scope", "needs clarification" or something else.
     
    Honestly we are actively rebranding ourselves as a toxic community at a terrifying rate.
  2. Like
    TonyMac32 got a reaction from lanefu in Make forum messages friendlier -- 2021 Edition!   
    The "Invalid" tag is cancer.  That needs to be "Off-topic", "Out of scope", "needs clarification" or something else.
     
    Honestly we are actively rebranding ourselves as a toxic community at a terrifying rate.
  3. Like
    TonyMac32 reacted to TRS-80 in RFC: armbian-build architecture   
    I guess I felt like that great discussion in rpardini's PR was sort of shut down prematurely.  Maybe the forum is a better place where we can take the long view over time and build more consensus about some of these bigger changes.
     
    Here, everyone have a beer and relax, get into the right frame of mind.
  4. Like
    TonyMac32 reacted to tparys in Avnet MicroZed   
    https://drive.google.com/drive/folders/1YWMDbbx0p4dLPO3Mr3921vNay8l-_vHo?usp=sharing
     
    This is a topic to discuss CSC Armbian support on the Avnet MicroZed board, and adding a similarly supported SoC family for the Xilinx Zynq. In terms of "interesting boards", the Zynq is a combination ARM/FPGA SoC that pairs a dual-core Cortex-A9 @ 667 MHz with an Artix-7 family FPGA fabric. In terms of performance, the A9s are nothing to write home about, but the FPGA fabric opens some rather interesting possibilities for those determined enough to use them.
     
    For the unfamiliar, an FPGA (Field Programmable Gate Array) is a sort of reconfigurable logic. It allows one to have dynamic hardware components instead of static pieces selected by the board manufacturer. Want 100+ GPIO? Sure. A dozen serial ports? Alright. 37 I2C buses? You might need to share some interrupts, but I'm not aware of a specific reason that shouldn't work. Really, you're limited by pin count, FPGA fabric usage, and what you can express in Xilinx Vivado.
     
    Be aware, if editing Device Trees puts you off, this is not the board for you. You will need to do substantially more to get anything significant working. If you do not have a copy of Vivado installed, you will not get far. Interested users should start with Xilinx/Avnet documentation or HERE.
     
    Current development status:
    • MicroZed (7020) booting up, appears to be working fine
    • CPU frequency scaling (333 or 667 MHz)
    • FPGA bitstream loading (see gotchas below and the sketch after this list)
    • Onboard Gigabit Ethernet
    • USB (see gotchas below)
    • GPIO, including the board heartbeat LED
     
    Things that aren't working:
    • DTB overlays (seems to freeze the board when enabled in the u-boot config)
    • PMOD connector (didn't enable it in the Vivado design, FSBL, DTB, etc.)
     
    Current gotchas:
    • Zynq u-boot doesn't work like other boards: u-boot and the Zynq FSBL (First Stage Bootloader) get baked together into a file called "boot.bin" via a Xilinx program called "bootgen". This is not currently in the image, so we are not able to modify this file on-board.
    • The MicroZed 7010 and 7020 are different boards. The CPU is compatible, but the 7020 has a larger FPGA and access to more pins (e.g. Bank 13 on the MicroZed schematics).
    • One of the jobs of the FSBL is to configure the Zynq MIO (Multipurpose I/O). To a limited extent this re-configures which CPU pins go where on the PMOD connector, sets pull-up or pull-down resistors, and does other things. I think it's possible to skip the FSBL entirely and go directly to u-boot, but I am unsure how that's supposed to work in a general sense.
    • The Avnet preset for the MicroZed board specifies the correct reset pin for the USB PHY, but Vivado is not currently using/respecting it. If you make a new design, do not check that the USB0 reset is enabled on MIO7, and build your own FSBL, the onboard USB port will not be usable.
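    For a feel of what bitstream loading looks like at runtime, here is a rough sketch using the kernel's generic fpga_manager sysfs interface; it assumes the Zynq fpga-mgr driver is enabled in the kernel and that the bitstream has already been converted to the .bin format the framework expects (the file name is just a placeholder):
     
    # sketch only: load a bitstream through the fpga_manager framework
    sudo cp my_design.bin /lib/firmware/
    echo my_design.bin | sudo tee /sys/class/fpga_manager/fpga0/firmware
    cat /sys/class/fpga_manager/fpga0/state    # "operating" means the fabric is programmed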
     
  5. Like
    TonyMac32 got a reaction from Technicavolous in Is "Asus Tinker Board 2" supported by Armbian?   
    I have a Tinker Board 2; the only thing causing any issue is the use of a variant of the standard buck converters to power the big cores and the GPU.  I have not gotten this converter to operate properly using the (really ugly and hackish) mainline Linux driver.  Since the existing mainline driver is pretty much crap, it is very difficult to add the variant.
     
    @JMCC differentiating factors for the Tinker 2 include its power input/management, which is extremely robust (it can power all USB ports per specification, unlike any other SBC I am aware of save the Tinker Edge R), and its use of the newer-revision RK3399, which supports 2.0 GHz with no overclock and uses less power overall.  The NanoPC T4 would be my only recommendation for a currently supported alternative, due to its feature set and design.  Ah, it also has a socket for standard PCIe wifi modules, which makes it possible to upgrade.
     
    I will be revisiting the driver code to figure out why I haven't been able to make it happy; the variant is not incredibly different from the normal device.
  6. Like
    TonyMac32 got a reaction from mar0ni in Use GPIO on C2 with Mainline Kernel   
    Hmm, the Odroid N1 is an RK3399 board; maybe that's what's killing the GPIO library.  Or the header may be irrelevant, I honestly don't know. 
     
    The GPIO on the Amlogic devices is quite a bit different.  I don't vary from the stock device tree most of the time, so other than the spidev I added, it will reflect the stock device tree (assuming I read the schematic correctly, and it in turn was correct)
     

     
    I've been meaning to do it for some time, but I'm just starting to work on documenting some of the GPIOs and their various functions.  So, in the device tree, if you enable UART A, you'd have the port you want.  Unfortunately my wizardry does not include device tree overlays yet; however, this may be something that works out of the box if the vendor image has it configured.  (Clarification: I can make a static change to the device tree, but for these sorts of things dynamic changes via overlays are better.)
     
    It's also good to see @adafruit doesn't sleep either.  However, I will be going back to paid work in 6 hours.
  7. Like
    TonyMac32 got a reaction from XFer012 in Is "Asus Tinker Board 2" supported by Armbian?   
    I have a Tinker Board 2; the only thing causing any issue is the use of a variant of the standard buck converters to power the big cores and the GPU.  I have not gotten this converter to operate properly using the (really ugly and hackish) mainline Linux driver.  Since the existing mainline driver is pretty much crap, it is very difficult to add the variant.
     
    @JMCC differentiating factors for the Tinker 2 include its power input/management, which is extremely robust (it can power all USB ports per specification, unlike any other SBC I am aware of save the Tinker Edge R), and its use of the newer-revision RK3399, which supports 2.0 GHz with no overclock and uses less power overall.  The NanoPC T4 would be my only recommendation for a currently supported alternative, due to its feature set and design.  Ah, it also has a socket for standard PCIe wifi modules, which makes it possible to upgrade.
     
    I will be revisiting the driver code to figure out why I haven't been able to make it happy; the variant is not incredibly different from the normal device.
  8. Like
    TonyMac32 got a reaction from TRS-80 in Support of Raspberry Pi   
    Maybe, maybe not. If their compatibility nonsense can simply be ignored, and the ARM cores are in control, then yes. If VC6 is running the show, it is the same. :-/

    Sent from my Pixel using Tapatalk


  9. Like
    TonyMac32 got a reaction from TRS-80 in Support of Raspberry Pi   
    An RPi is not

    1) reliable
    2) the most cost-effective
    3) worth $35
    4) worth any more discussion.

    The position of this project stands: we will not support a failure-prone, insecure, underperforming, inefficient, abysmally bandwidth-throttled device. If an RPi 4 comes out that uses a sane bootloader and a useful SoC, then this can be revisited.

    Do not continue your personal argument with Tido; it is not value-added, and your positions add nothing other than conflict. Mostly because you have no facts or reason for your position, and instead of trying to formulate something approaching a case for support you resort to ad hominem attacks and downright inaccuracies. This is an unofficial warning to stop harassing the team because you aren't getting your way. The next will be official.

    Sent from my Pixel using Tapatalk

  10. Like
    TonyMac32 reacted to axzxc1236 in RK3288/RK3328 Legacy Multimedia Framework   
    I am very impressed by how many features we are getting... WOWIE
    I don't know if anyone has said this to you but you are a legend!
  11. Like
    TonyMac32 reacted to JMCC in RK3288/RK3328 Legacy Multimedia Framework   
    LOL No, man, I'm not that old!
  12. Like
    TonyMac32 got a reaction from Aeini in 4G USB modem with Armbian   
    Not much of a tutorial, but it is something that often needs some googling and head scratching.  This is how I got my particular modem working; your mileage may vary.
     
    Step 1)  Get a modem that will work.  I got a Huawei E397u LTE/UMTS/GSM modem.  It's Cricket branded, but I'm on Google Fi and it works with my data SIM.
    Step 2) Plug it in and see that it doesn't work.
    Step 3a) apt update
    Step 3b) apt upgrade
    Step 4) Install usb-modeswitch
    Step 4, optional part 2) install modem-manager, modem-manager-gui  (for general playing around)
    Step 5)  Unplug/replug the USB modem and see that it magically has a different VID:PID when you type lsusb.  You should also have some ttyUSBs, and you can check out modem details in modem-manager-gui.
    Step 6)  Set up the network connection via the network dropdown at the top right of the desktop.  You will need the APN information for your carrier.
     
    I put in modem-manager so I could debug.  It will cause issues if you do certain things with it while connected.
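     
    For reference, the whole flow boils down to a handful of commands. A rough sketch, with package names as they appear in the Debian/Ubuntu repos (adjust for your release), and no guarantee your modem uses the same mode-switch profile:
     
    sudo apt update && sudo apt upgrade
    sudo apt install usb-modeswitch                  # flips the stick out of mass-storage mode
    sudo apt install modemmanager modem-manager-gui  # optional, for poking at the modem
    # replug the modem, then confirm it re-enumerated with a different VID:PID
    lsusb
    ls /dev/ttyUSB*                                  # serial ports exposed by the modem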
     
    My modem was $15; eBay has them. https://www.ebay.com/itm/BRAND-NEW-Unlocked-Cricket-Huawei-E397-E397u-53-4G-LTE-Mobile-Broadband-Modem/262898110276
    So does Amazon.
     
     
     
  13. Like
    TonyMac32 reacted to Igor in ZFS on Helios64   
    I added zfs-dkms_0.8.5-2~20.04.york0_all to our repository, so at least the Ubuntu version works OOB (apt install zfs-dkms). I tested the installation on an Odroid N2+ running 5.9.11.

    * Packaging mirrors need some time to receive updates, so you might get some file-not-found errors, but it will work tomorrow. Headers must be installed from armbian-config.
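     
    A minimal sketch of what the out-of-the-box install looks like in practice, assuming an Armbian Ubuntu image (the armbian-config menu entry for headers may be named slightly differently on your build):
     
    sudo armbian-config                        # Software menu -> install kernel headers
    sudo apt update
    sudo apt install zfs-dkms zfsutils-linux   # DKMS builds the module against the installed headers
    sudo modprobe zfs
    zfs --version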
     
     
  14. Like
    TonyMac32 reacted to sgjava in Fast MMIO GPIO for H2+, H3, H5, H6 and S905   
    I was able to build the property file for the NanoPi M1 in about 20 minutes, so now you can basically do a ~2 MHz GPIO square wave on all Allwinner and S905 CPUs (are you listening, @TonyMac32?) without any special code or adapters. All other libraries require one-off code to support GPIO register access, so they only support a limited set of boards. In my mind this is the holy grail for cross-platform GPIO performance. I plan on going through some more diverse boards, but I'm happy with the results so far. Since only one core is utilized, you'll average 25% CPU utilization on quad-core systems. Another goal may be to use two threads to improve performance further. I have currently mapped the NanoPi Duo, NanoPi M1, NanoPi Neo Plus2 and Odroid C2.
     
    Check out Java Periphery for more information.
     
    Another thing to consider is that you can use the property files from other languages as well, since there are Periphery APIs for C, Python and Lua too.
     
     
  15. Like
    TonyMac32 got a reaction from Borromini in ZFS on Helios64   
    I did this differently: I simply used zfs-dkms and then set up my pool.  No containers or anything needed.
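     
    In case it helps, pool creation after that is just the standard ZFS commands; a sketch with hypothetical pool name, layout and device names:
     
    # hypothetical pool name, layout and devices - adjust to your disks
    sudo zpool create -o ashift=12 tank raidz1 /dev/sda /dev/sdb /dev/sdc
    sudo zfs create tank/data
    zpool status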
  16. Like
    TonyMac32 reacted to grek in ZFS on Helios64   
    Hey,
    I think a dedicated topic for ZFS on the Helios64 is needed; many people want to have it.
     
    As we know (and many of us are playing with dev builds for the Helios), it is not easy to work with ZFS.
    I wrote a few scripts; maybe they can help some of you in some way.
    Thanks @jbergler and @ShadowDance, I used your ideas to complete it.
    The problem comes when playing with OMV and the ZFS plugin: the dependencies remove our latest version of ZFS. I tested it; everything survived, but ZFS is downgraded to the latest version from the repo.
    for example:
     
    root@helios64:~# zfs --version
    zfs-0.8.4-2~bpo10+1
    zfs-kmod-2.0.0-rc6
    root@helios64:~# uname -a
    Linux helios64 5.9.11-rockchip64 #trunk.1 SMP PREEMPT Thu Nov 26 01:32:45 CET 2020 aarch64 GNU/Linux
    root@helios64:~#
     
    I tested it with kernel 5.9.10 and with today's clean install on 5.9.11.
     
    First we need to have Docker installed (armbian-config -> Software -> Softy -> Docker).
     
    Create a dedicated directory with a Dockerfile (we will customize the ubuntu:bionic image with the required libraries and gcc 10; in later builds we can skip this step and just customize and run the build-zfs.sh script):
    mkdir zfs-builder
    cd zfs-builder
    vi Dockerfile
     
    FROM ubuntu:bionic
    RUN apt update; \
        apt install build-essential autoconf automake bison flex libtool gawk alien fakeroot dkms \
            libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev libattr1-dev \
            libelf-dev python3 python3-dev python3-setuptools python3-cffi libffi-dev -y; \
        apt install software-properties-common -y; \
        add-apt-repository ppa:ubuntu-toolchain-r/test; \
        apt install gcc-10 g++-10 -y; \
        update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 10; \
        update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-10 10
    Build docker image for building purposes.
     
    docker build --tag zfs-build-ubuntu-bionic:0.1 .  
    Create script for ZFS build
     
    vi build-zfs.sh
     
    #!/bin/bash
    #define zfs version
    zfsver="zfs-2.0.0-rc6"
    #creating building directory
    mkdir /tmp/zfs-builds && cd "$_"
    rm -rf /tmp/zfs-builds/inside_zfs.sh
    apt-get download linux-headers-current-rockchip64
    git clone -b $zfsver https://github.com/openzfs/zfs.git $zfsver-$(uname -r)
    #create file to execute inside container
    echo "creating file to execute it inside container"
    cat > /tmp/zfs-builds/inside_zfs.sh <<EOF
    #!/bin/bash
    cd scratch/
    dpkg -i linux-headers-current-*.deb
    zfsver="zfs-2.0.0-rc6"
    cd "/scratch/$zfsver-$(uname -r)"
    sh autogen.sh
    ./configure
    make -s -j$(nproc)
    make deb
    mkdir "/scratch/deb-$zfsver-$(uname -r)"
    cp *.deb "/scratch/deb-$zfsver-$(uname -r)"
    rm -rf "/scratch/$zfsver-$(uname -r)"
    exit
    EOF
    chmod +x /tmp/zfs-builds/inside_zfs.sh
    echo ""
    echo "####################"
    echo "starting container.."
    echo "####################"
    echo ""
    docker run --rm -it -v /tmp/zfs-builds:/scratch zfs-build-ubuntu-bionic:0.1 /bin/bash /scratch/inside_zfs.sh
    # Cleanup packages (if installed).
    modprobe -r zfs zunicode zzstd zlua zcommon znvpair zavl icp spl
    apt remove --yes zfsutils-linux zfs-zed zfs-initramfs
    apt autoremove --yes
    dpkg -i "/tmp/zfs-builds/deb-$zfsver-$(uname -r)"/kmod-zfs-$(uname -r)*.deb
    dpkg -i "/tmp/zfs-builds/deb-$zfsver-$(uname -r)"/{libnvpair1,libuutil1,libzfs2,libzpool2,python3-pyzfs,zfs}_*.deb
    echo ""
    echo "###################"
    echo "building complete"
    echo "###################"
    echo ""
    chmod +x build-zfs.sh
    screen -L -Logfile buildlog.txt ./build-zfs.sh
     
     
     
     
     
  17. Like
    TonyMac32 reacted to sgjava in 20.11.0-trunk.32_Odroidc2 dies after apt upgrade   
    @Igor Armbian_20.11.0-trunk.41_Odroidc2_focal_current_5.9.10.img did the trick. No hangs or death after apt upgrade and reboot!
  18. Like
    TonyMac32 got a reaction from Werner in 20.11.0-trunk.32_Odroidc2 dies after apt upgrade   
    Checking

    Sent from my Pixel using Tapatalk


  19. Like
    TonyMac32 reacted to atomic77 in Self-contained Tensorflow object detector on Orange pi lite + GC2035   
    I got my hands on a "Set 9" Orange Pi Lite + GC2035 camera a while back and I've finally been able to put together a self-contained object detection device using Tensorflow, without sending any image data outside for processing.

    Basically, it's a Python Flask application that captures frames from the camera using a GStreamer pipeline. It runs them through a Tensorflow object detection model and spits out the same frame with extra metadata about the objects it found, rendering a box around them. Using all four cores of the H2 it can do about 2-3 fps. The app keeps track of the count of all object types it has seen and exposes the metrics in Prometheus format, for easy creation of graphs of what it sees over time with Grafana.
     


    I'll explain some of the more interesting aspects of how I got this to work here, in case anyone else wants to try to get some use out of this very inexpensive hardware. I am grateful to the many posts on this forum that helped me along the way!

    Use a 3.4 kernel with custom GC2035 driver

    Don't bother with anything new - the GC2035 was hopeless on any newer builds of Armbian I tried. The driver available at https://github.com/avafinger/gc2035.git provided far better image quality. After installing the updated GC2035, I run the following to get the camera up and running:
     
    sudo sunxi-pio -m "PG11<1><0><1><1>"
    sudo modprobe gc2035 hres=1
    sudo modprobe vfe_v4l2
    Install Tensorflow lite runtime
     
    Google provides a Tensorflow Lite runtime as a binary wheel built for Python 3.5 on armv7. When pip-installing it, expect it to take 20 minutes or so, as it will need to compile numpy (the apt repo version isn't recent enough).
     
    wget https://github.com/google-coral/pycoral/releases/download/release-frogfish/tflite_runtime-2.5.0-cp35-cp35m-linux_armv7l.whl
    sudo -H pip3 install tflite_runtime-2.5.0-cp35-cp35m-linux_armv7l.whl

    Build opencv for python 3.5 bindings

    This was something I tried everything I could to avoid, but I just could not get the colour conversion from the YUV format of the GC2035 to an RGB image using anything else I found online, so I was dependent on a single color-conversion utility function.
     
    To build the 3.4.12 version for use with python (grab lunch - takes about 1.5 hours :-O )

     
    cmake -DCMAKE_INSTALL_PREFIX=/home/atomic/local -DSOFTFP=ON \
        -DBUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_opencv_python2=0 \
        -D BUILD_opencv_python3=1 -D WITH_GSTREAMER=ON \
        -D PYTHON3_INCLUDE_PATH=/usr/include/python3.5 ..
    make -j 4
    make install
    # Check that ~/local/lib/python3.5/dist-packages should now have the cv2 shlib
    export PYTHONPATH=/home/atomic/local/lib/python3.5/dist-packages
     
    Build gstreamer plugin for Cedar H264 encoder
     
    This is required to get a working gstreamer pipeline for the video feed:
    git clone https://github.com/gtalusan/gst-plugin-cedar
    cd gst-plugin-cedar
    ./autogen.sh
    sudo make install
    # When trying against a pipc I had to copy into .local to get gstreamer to recognise it
    cp /usr/local/lib/gstreamer-1.0/libgst* ~/.local/share/gstreamer-1.0/plugins/
    # Confirm that the plugin is installed:
    gst-inspect-1.0 cedar_h264enc
    Processing images
     
    The full app source is on github, but the more interesting parts that took me some time to figure out were about getting python to cooperate with gstreamer:
     
    Frames from the camera arrive in Python at the end of the pipeline as an appsink. The GStreamer pipeline I configured via Python was:
     
    src = Gst.ElementFactory.make("v4l2src")
    src.set_property("device", "/dev/video0")
    src.set_property("do-timestamp", 1)
    filt = Gst.ElementFactory.make("capsfilter")
    filt.set_property("caps", Gst.caps_from_string("video/x-raw,format=NV12,width=800,height=600,framerate=12/1"))
    p1 = Gst.ElementFactory.make("cedar_h264enc")
    p2 = Gst.ElementFactory.make("h264parse")
    p3 = Gst.ElementFactory.make("rtph264pay")
    p3.set_property("config-interval", 1)
    p3.set_property("pt", 96)
    p4 = Gst.ElementFactory.make("rtph264depay")
    p5 = Gst.ElementFactory.make("avdec_h264")
    sink = Gst.ElementFactory.make("appsink", "sink")
    pipeline_elements = [src, filt, p1, p2, p3, p4, p5, sink]
    sink.set_property("max-buffers", 10)
    sink.set_property('emit-signals', True)
    sink.set_property('sync', False)
    sink.connect("new-sample", on_buffer, sink)
    This pipeline definition causes a callback on_buffer to be called every time a frame is emitted from the camera:

     
    def on_buffer(sink: GstApp.AppSink, data: typing.Any) -> Gst.FlowReturn:
        # Sample will be a 800x900 byte array in a very frustrating YUV420 format
        sample = sink.emit("pull-sample")  # Gst.Sample
        # ... conversion to numpy array
        # rgb is now in a format that Pillow can easily work with
        # These two calls are what you compiled opencv for 1.5 hours for :-D
        rgb = cv2.cvtColor(img_arr, cv2.COLOR_YUV2BGR_I420)
        rgb = cv2.cvtColor(rgb, cv2.COLOR_BGR2RGB)
    Once you have a nice pillow RGB image, it's easy to pass this into a Tensorflow model, and there is tons of material on the web for how you can do things like that. For fast but not so accurate detection, I used the ssdlite_mobilenet_v2_coco pretrained model, which can handle about 0.5 frames per second per core of the H2 Allwinner CPU.
     
    There are some problems I still have to work out. Occasionally the video stream stalls and I haven't figured out how to recover from this without restarting the app completely. The way frame data is passed around tensorflow worker processes is probably not ideal and needs to be cleaned up, but it does allow me to get much better throughput using all four cores.
     
    For more details, including a detailed build script, the full source is here:
    https://github.com/atomic77/opilite-object-detect
  20. Like
    TonyMac32 reacted to gprovost in Helios64 Support   
    @TDCroPower Here is the small hardware tweak ;-)
     
    WARNING: Please understand we are not responsible for any damage you might do to the board!
     

     

  21. Like
    TonyMac32 reacted to UniformBuffer in Change CMA memory allocation size   
    Thanks for the info. Like I said, I'm not very skilled with kernel things, but thanks to your guide I was able to change the CMA allocation from 256 MB to 512 MB.
    I have also tried to set 1 GB with 0x40000000, which should be aligned to 0x400000, but after setting it I got no monitor output (from the LED pattern I can say it was working, and the sysrq magic keys also worked, so the kernel was up).
    Anyway, that's not fundamental; I have also tried setting the CMA to the same value as the patch you linked (0x38000000) and it worked, making the CMA 896 MB.
     
    I have tested some h264 and vp9 videos and the performance is still choppy (after increasing to 512 MB it became a little better).
    I have tried mpv with `mpv --hwdec=yes video.mkv` and ffplay with `ffplay -vcodec h264_v4l2m2m video_h264.mkv` and `ffplay -vcodec vp9_v4l2m2m video_vp9.webm`.
    For now I got the best performance with the vp9 format using ffplay.
    A strange thing is that ffplay performs better than mpv, even though mpv uses ffmpeg just like ffplay, so they should have more or less the same performance.
    Anyway, the low performance seems to be related to VERY single-threaded behavior. After increasing the CMA to 512 MB I had some free CMA left (40-80 MB), and increasing again to 896 MB increases the free CMA proportionally, so it seems that meson_vdec does not take advantage of memory beyond ~512 MB.
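     
    (For reference, a quick way to check those free/total CMA figures is the kernel's meminfo counters:)
     
    grep -i cma /proc/meminfo    # CmaTotal / CmaFree, in kB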
     
    Anyway, even if there are some problems, I'm happy to be able to see progress with my own eyes!
    Thanks again for the help and for your hard work.
  22. Like
    TonyMac32 got a reaction from UniformBuffer in Change CMA memory allocation size   
    Hello UniformBuffer,
     
    The CMA memory is set by a device tree entry, so to my knowledge that would be the only way to change it.  The vdec/venc drivers are extremely unoptimized at present, and the recommended setting is over 800 MB.  This is an obvious problem for a general-purpose distribution: it renders the 512 MB La Frite boards unbootable and leaves no real room for working tasks on the 1 GB ones.  It may be possible to implement this as an overlay, but in the meantime modifying the device tree is the only route:
     
    https://github.com/armbian/build/blob/cc7ab6a6b1d91977bd9e154245307e85f7f76519/patch/kernel/meson64-current/0302-arm64-dts-meson-set-dma-pool-to-896MB.patch
     
    This patch gives you the handle and value to get the required 896 MB CMA pool.
  23. Like
    TonyMac32 got a reaction from UniformBuffer in Change CMA memory allocation size   
    Thankfully the kernel does not need to be recompiled for this, only the device tree.  See the post below; I believe it is still accurate as far as the device tree decompile/recompile method goes, and it can be done on the device (use the correct device tree for your board).
     
  24. Like
    TonyMac32 got a reaction from UniformBuffer in Change CMA memory allocation size   
    I just did a quick test on Le Potato: increasing the CMA to 512 MB was very easy following the above instructions with the /boot/dtb/amlogic/meson-gxl-s905x-libretech-cc.dtb device tree.  The property you want to change is right at the top of the file, as a handle under "reserved-memory".
     
    Any value that is 0x400000 aligned should be valid, so you can experiment.
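     
    For anyone who wants the exact commands, a rough sketch of the decompile/edit/recompile approach on the board itself (assumes dtc from the device-tree-compiler package is installed; back up the original dtb first, and adjust the file name for your board):
     
    cd /boot/dtb/amlogic
    sudo cp meson-gxl-s905x-libretech-cc.dtb meson-gxl-s905x-libretech-cc.dtb.bak
    sudo dtc -I dtb -O dts -o /tmp/board.dts meson-gxl-s905x-libretech-cc.dtb
    # edit the CMA reservation's size under /reserved-memory (it may be split across
    # two cells on 64-bit boards; 0x20000000 = 512 MB, keep it 0x400000 aligned)
    sudo dtc -I dts -O dtb -o meson-gxl-s905x-libretech-cc.dtb /tmp/board.dts
    sudo reboot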
     
     
  25. Like
    TonyMac32 reacted to mboehmer in Odroid C2 on seafloor (part II)   
    We are down... and modules seem to be in good shape. Let's hope the powerup will work as expected.