TonyMac32

Moderators
  • Posts: 2399
  • Joined
  • Last visited

Reputation Activity

  1. Like
    TonyMac32 reacted to JMCC in RK3288/RK3328 Legacy Multimedia Framework   
    LOL No, man, I'm not that old!
  2. Like
    TonyMac32 got a reaction from Aeini in 4G USB modem with Armbian   
    Not much of a tutorial, but it is something that often needs some googling and head scratching. This is how I got my particular modem working; your mileage may vary.
     
    Step 1) Get a modem that will work. I got a Huawei E397u LTE/UMTS/GSM modem. It's Cricket branded, but I'm on Google Fi and it works with my data SIM.
    Step 2) Plug it in and see that it doesn't work.
    Step 3a) apt update
    Step 3b) apt upgrade
    Step 4) Install usb-modeswitch
    Step 4, optional part 2) Install modemmanager and modem-manager-gui (for general playing around)
    Step 5) Unplug/replug the USB modem and see that it now magically has a different VID:PID when you run lsusb. You should also have some ttyUSB devices, and you can check the modem details in modem-manager-gui.
    Step 6) Set up the network connection via the network dropdown at the top right of the desktop. You will need the APN information for your carrier.
     
    I put in modemmanager so I could debug. It will cause issues if you do certain things with it while connected.
     
    My modem was $15, Ebay has them. https://www.ebay.com/itm/BRAND-NEW-Unlocked-Cricket-Huawei-E397-E397u-53-4G-LTE-Mobile-Broadband-Modem/262898110276
    So does amazon.
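     
    In shell terms, steps 3 through 5 boil down to something like the following (package names assume a Debian/Ubuntu based Armbian image; the GUI tools are optional):
     
    sudo apt update
    sudo apt upgrade
    sudo apt install usb-modeswitch
    # optional, for debugging / general playing around:
    sudo apt install modemmanager modem-manager-gui
    # unplug/replug the modem, then check the new VID:PID and serial ports:
    lsusb
    ls /dev/ttyUSB*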
     
     
     
  3. Like
    TonyMac32 reacted to Igor in ZFS on Helios64   
    I added zfs (zfs-dkms_0.8.5-2~20.04.york0_all) to our repository, so at least the Ubuntu version works OOB (apt install zfs-dkms). I tested the installation on an Odroid N2+ running 5.9.11

    * Packaging mirrors need some time to receive updates, so you might get some file-not-found errors, but it will work tomorrow. Headers must be installed from armbian-config.
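     
    On a recent Ubuntu-based image that boils down to something like this (a minimal sketch, assuming the mirrors have synced and the kernel headers were already installed via armbian-config):
     
    sudo apt update
    sudo apt install zfs-dkms zfsutils-linux   # zfsutils-linux provides the zfs/zpool userland tools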
     
     
  4. Like
    TonyMac32 reacted to sgjava in Fast MMIO GPIO for H2+, H3, H5, H6 and S905   
    I was able to build the property file for the NanoPi M1 in about 20 minutes, so now basically you can do a ~2 MHz GPIO square wave on all Allwinner and S905 CPUs (you listening @TonyMac32?) without any special code or adapters. All other libraries require one-off code to support GPIO register access, so they only support a limited set of boards. In my mind this is the holy grail for cross-platform GPIO performance. I plan on going through some more diverse boards, but I'm happy with the results so far. Since only one core is utilized, you'll average 25% CPU utilization on quad-core systems. Using two threads to improve performance further may be another goal. I have currently mapped the Nano Pi Duo, Nano Pi M1, Nano Pi Neo Plus2 and Odroid C2.
     
    Check out Java Periphery for more information.
     
    Another thing to consider is that you can use the property files from other languages too, since there are Periphery APIs for C, Python and Lua as well.
     
     
  5. Like
    TonyMac32 got a reaction from Borromini in ZFS on Helios64   
    I did this differently: I simply used zfs-dkms and then set up my pool. No containers or anything else needed.
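     
    For anyone following the same route, the pool setup itself is just stock ZFS commands once the module has built; a minimal sketch with made-up device names:
     
    sudo apt install zfs-dkms zfsutils-linux
    sudo modprobe zfs
    # the device names below are examples only; use your own disks (by-id paths are safer)
    sudo zpool create tank mirror /dev/sda /dev/sdb
    sudo zfs create tank/data
    zpool status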
  6. Like
    TonyMac32 reacted to grek in ZFS on Helios64   
    Hey,
    I think a dedicated topic for ZFS on Helios64 is needed. Many people want to have it.
     
    As we know, many of us are playing with dev builds for Helios, and it is not easy to get ZFS working.
    I wrote a few scripts; maybe they can help someone in some way.
    Thanks @jbergler and @ShadowDance, I used your ideas to complete this.
    The problem comes when playing with OMV and its ZFS plugin: the plugin's dependencies remove our freshly built version of ZFS. I tested it; everything survived, but ZFS gets downgraded to the version from the repo,
    for example:
     
    root@helios64:~# zfs --version
    zfs-0.8.4-2~bpo10+1
    zfs-kmod-2.0.0-rc6
    root@helios64:~# uname -a
    Linux helios64 5.9.11-rockchip64 #trunk.1 SMP PREEMPT Thu Nov 26 01:32:45 CET 2020 aarch64 GNU/Linux
    root@helios64:~#
     
    I tested it with kernel 5.9.10 and with today's clean install on 5.9.11.
     
    First we need to have docker installed ( armbian-config -> software -> softy -> docker )
     
    Create a dedicated directory with a Dockerfile (we will customize the ubuntu:bionic image with the required libraries and gcc 10; in later builds we can skip this step and just customize and run the build-zfs.sh script):
    mkdir zfs-builder
    cd zfs-builder
    vi Dockerfile
     
    FROM ubuntu:bionic
    RUN apt update; \
        apt install build-essential autoconf automake bison flex libtool gawk alien fakeroot dkms \
            libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev libattr1-dev libelf-dev \
            python3 python3-dev python3-setuptools python3-cffi libffi-dev -y; \
        apt install software-properties-common -y; \
        add-apt-repository ppa:ubuntu-toolchain-r/test; \
        apt install gcc-10 g++-10 -y; \
        update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 10; \
        update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-10 10
    Build the Docker image for building purposes:
     
    docker build --tag zfs-build-ubuntu-bionic:0.1 .  
    Create script for ZFS build
     
    vi build-zfs.sh
     
    #!/bin/bash
    #define zfs version
    zfsver="zfs-2.0.0-rc6"
    #creating building directory
    mkdir /tmp/zfs-builds && cd "$_"
    rm -rf /tmp/zfs-builds/inside_zfs.sh
    apt-get download linux-headers-current-rockchip64
    git clone -b $zfsver https://github.com/openzfs/zfs.git $zfsver-$(uname -r)
    #create file to execute inside container
    echo "creating file to execute it inside container"
    cat > /tmp/zfs-builds/inside_zfs.sh <<EOF
    #!/bin/bash
    cd scratch/
    dpkg -i linux-headers-current-*.deb
    zfsver="zfs-2.0.0-rc6"
    cd "/scratch/$zfsver-$(uname -r)"
    sh autogen.sh
    ./configure
    make -s -j$(nproc)
    make deb
    mkdir "/scratch/deb-$zfsver-$(uname -r)"
    cp *.deb "/scratch/deb-$zfsver-$(uname -r)"
    rm -rf "/scratch/$zfsver-$(uname -r)"
    exit
    EOF
    chmod +x /tmp/zfs-builds/inside_zfs.sh
    echo ""
    echo "####################"
    echo "starting container.."
    echo "####################"
    echo ""
    docker run --rm -it -v /tmp/zfs-builds:/scratch zfs-build-ubuntu-bionic:0.1 /bin/bash /scratch/inside_zfs.sh
    # Cleanup packages (if installed).
    modprobe -r zfs zunicode zzstd zlua zcommon znvpair zavl icp spl
    apt remove --yes zfsutils-linux zfs-zed zfs-initramfs
    apt autoremove --yes
    dpkg -i "/tmp/zfs-builds/deb-$zfsver-$(uname -r)"/kmod-zfs-$(uname -r)*.deb
    dpkg -i "/tmp/zfs-builds/deb-$zfsver-$(uname -r)"/{libnvpair1,libuutil1,libzfs2,libzpool2,python3-pyzfs,zfs}_*.deb
    echo ""
    echo "###################"
    echo "building complete"
    echo "###################"
    echo ""
    chmod +x build-zfs.sh
    screen -L -Logfile buildlog.txt ./build-zfs.sh
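     
    Once the script finishes and the freshly built debs are installed, a quick sanity check (nothing Helios-specific, just standard ZFS commands):
     
    modprobe zfs
    zfs --version     # should now report zfs-2.0.0-rc6 with a matching zfs-kmod
    zpool status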
     
     
     
     
     
  7. Like
    TonyMac32 reacted to sgjava in 20.11.0-trunk.32_Odroidc2 dies after apt upgrade   
    @Igor Armbian_20.11.0-trunk.41_Odroidc2_focal_current_5.9.10.img did the trick. No hangs or death after apt upgrade and reboot!
  8. Like
    TonyMac32 got a reaction from Werner in 20.11.0-trunk.32_Odroidc2 dies after apt upgrade   
    Checking

    Sent from my Pixel using Tapatalk


  9. Like
    TonyMac32 reacted to atomic77 in Self-contained Tensorflow object detector on Orange pi lite + GC2035   
    I got my hands on a "Set 9" Orange Pi Lite + GC2035 camera a while back and I've finally been able to put together a self-contained object detection device using Tensorflow, without sending any image data outside for processing.

    Basically, it's a Python Flask application that captures frames from the camera using a GStreamer pipeline. It runs them through a TensorFlow object detection model and spits out the same frame with extra metadata about the objects it found, rendering a box around them. Using all four cores of the H2 it can do about 2-3 fps. The app keeps track of the count of all object types it has seen and exposes the metrics in Prometheus format, for easy creation of graphs of what it sees over time with Grafana.
     


    I'll explain some of the more interesting aspects of how I got this to work here in case anyone else wants to try to get some use out of this very inexpensive hardware, and I am grateful to the many posts on this forum that helped me along the way!

    Use a 3.4 kernel with custom GC2035 driver

    Don't bother with anything new - the GC2035 was hopeless on any newer builds of Armbian I tried. The driver available at https://github.com/avafinger/gc2035.git provided far better image quality. After installing the updated GC2035, I run the following to get the camera up and running:
     
    sudo sunxi-pio -m "PG11<1><0><1><1>"
    sudo modprobe gc2035 hres=1
    sudo modprobe vfe_v4l2
    Install Tensorflow lite runtime
     
    Google provides a TensorFlow Lite runtime as a binary wheel built for Python 3.5 on armv7. When pip installing, expect it to take 20 minutes or so, as it will need to compile numpy (the apt repo version isn't recent enough):
     
    wget https://github.com/google-coral/pycoral/releases/download/release-frogfish/tflite_runtime-2.5.0-cp35-cp35m-linux_armv7l.whl
    sudo -H pip3 install tflite_runtime-2.5.0-cp35-cp35m-linux_armv7l.whl

    Build opencv for python 3.5 bindings

    I tried everything I could to avoid this, but I just could not get the colour conversion from the YUV format of the GC2035 to an RGB image working with anything else I found online, so I was left dependent on a single colour-conversion utility function.
     
    To build the 3.4.12 version for use with python (grab lunch - takes about 1.5 hours :-O )

     
    cmake -DCMAKE_INSTALL_PREFIX=/home/atomic/local -DSOFTFP=ON \
        -DBUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_opencv_python2=0 \
        -D BUILD_opencv_python3=1 -D WITH_GSTREAMER=ON \
        -D PYTHON3_INCLUDE_PATH=/usr/include/python3.5 ..
    make -j 4
    make install
    # Check that ~/local/lib/python3.5/dist-packages now has the cv2 shlib
    export PYTHONPATH=/home/atomic/local/lib/python3.5/dist-packages
     
    Build gstreamer plugin for Cedar H264 encoder
     
    This is required to get a working gstreamer pipeline for the video feed:
    git clone https://github.com/gtalusan/gst-plugin-cedar
    cd gst-plugin-cedar
    ./autogen.sh
    sudo make install
    # When trying against a pipc I had to copy into .local to get gstreamer to recognise it
    cp /usr/local/lib/gstreamer-1.0/libgst* ~/.local/share/gstreamer-1.0/plugins/
    # Confirm that the plugin is installed:
    gst-inspect-1.0 cedar_h264enc
    Processing images
     
    The full app source is on github, but the more interesting parts that took me some time to figure out were about getting python to cooperate with gstreamer:
     
    Frames from the camera arrive in Python at the end of the pipeline as an appsink. The GStreamer pipeline I configured via Python was:
     
    src = Gst.ElementFactory.make("v4l2src")
    src.set_property("device", "/dev/video0")
    src.set_property("do-timestamp", 1)
    filt = Gst.ElementFactory.make("capsfilter")
    filt.set_property("caps", Gst.caps_from_string("video/x-raw,format=NV12,width=800,height=600,framerate=12/1"))
    p1 = Gst.ElementFactory.make("cedar_h264enc")
    p2 = Gst.ElementFactory.make("h264parse")
    p3 = Gst.ElementFactory.make("rtph264pay")
    p3.set_property("config-interval", 1)
    p3.set_property("pt", 96)
    p4 = Gst.ElementFactory.make("rtph264depay")
    p5 = Gst.ElementFactory.make("avdec_h264")
    sink = Gst.ElementFactory.make("appsink", "sink")
    pipeline_elements = [src, filt, p1, p2, p3, p4, p5, sink]
    sink.set_property("max-buffers", 10)
    sink.set_property('emit-signals', True)
    sink.set_property('sync', False)
    sink.connect("new-sample", on_buffer, sink)
    This pipeline definition causes a callback on_buffer to be called every time a frame is emitted from the camera:

     
    def on_buffer(sink: GstApp.AppSink, data: typing.Any) -> Gst.FlowReturn:
        # Sample will be a 800x900 byte array in a very frustrating YUV420 format
        sample = sink.emit("pull-sample")  # Gst.Sample
        # ... conversion to numpy array ...
        # rgb is now in a format that Pillow can easily work with
        # These two calls are what you compiled opencv for 1.5 hours for :-D
        rgb = cv2.cvtColor(img_arr, cv2.COLOR_YUV2BGR_I420)
        rgb = cv2.cvtColor(rgb, cv2.COLOR_BGR2RGB)
    Once you have a nice Pillow RGB image, it's easy to pass it into a TensorFlow model, and there is plenty of material on the web about how to do that. For fast but not so accurate detection, I used the ssdlite_mobilenet_v2_coco pretrained model, which can handle about 0.5 frames per second per core of the Allwinner H2 CPU.
     
    There are some problems I still have to work out. Occasionally the video stream stalls and I haven't figured out how to recover from this without restarting the app completely. The way frame data is passed around the TensorFlow worker processes is probably not ideal and needs to be cleaned up, but it does allow me to get much better throughput using all four cores.
     
    For more details, including a detailed build script, the full source is here:
    https://github.com/atomic77/opilite-object-detect
  10. Like
    TonyMac32 reacted to gprovost in Helios64 Support   
    @TDCroPower Here is the small hardware tweak ;-)
     
    WARNING: Please understand we are not responsible for any damage you might do to the board!
     

     

  11. Like
    TonyMac32 reacted to UniformBuffer in Change CMA memory allocation size   
    Thanks for the info. Like I said, I'm not very skilled with kernel things, but thanks to your guide I was able to change the CMA allocation from 256 MB to 512 MB.
    I also tried to set 1 GB with 0x40000000, which should be aligned to 0x400000, but after setting it I got no monitor output (from the LED pattern I can say it was working, and the SysRq magic keys also worked, so the kernel was up).
    Anyway, that's not fundamental; I also tried setting the CMA to the same value as the patch you linked (0x38000000) and it worked, making the CMA 896 MB.
     
    I have tested some H.264 and VP9 videos and playback is still choppy (after increasing to 512 MB it became a little better).
    I have tried mpv with `mpv --hwdec=yes video.mkv` and ffplay with `ffplay -vcodec h264_v4l2m2m video_h264.mkv` and `ffplay -vcodec vp9_v4l2m2m video_vp9.webm`.
    For now I got the best performance with the VP9 format using ffplay.
    A strange thing is that ffplay performs better than mpv, even though mpv uses FFmpeg just like ffplay, so they should have more or less the same performance.
    Anyway, the low performance seems to be related to a VERY single-threaded behavior. After increasing CMA to 512 MB I had some free CMA left (40-80 MB), and increasing again to 896 MB increased the free CMA proportionally, so it seems that meson_vdec doesn't benefit from memory beyond ~512 MB.
     
    Anyway, even if there are some problems, I'm happy to be able to see progress with my own eyes!
    Thanks again for the help and for your hard work.
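     
    (Side note for anyone else experimenting: the kernel exposes the CMA pool size and free space in /proc/meminfo, which is a handy way to watch usage while a video plays. The output below is only an example for a 512 MB pool.)
     
    grep -i cma /proc/meminfo
    # CmaTotal:    524288 kB
    # CmaFree:      61440 kB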
  12. Like
    TonyMac32 got a reaction from UniformBuffer in Change CMA memory allocation size   
    Hello UniformBuffer,
     
       The CMA memory is set by a device tree entry, so that would be the only way to change it, to my knowledge. The drivers for the vdec/venc are extremely unoptimized at present, and the recommended setting is over 800 MB. This is an obvious problem for a general-purpose distribution: it renders the La Frite 512 MB boards unbootable and leaves no real room for working tasks on the 1 GB ones. It may be possible to implement this as an overlay, but in the meantime modifying the device tree is the only route:
     
    https://github.com/armbian/build/blob/cc7ab6a6b1d91977bd9e154245307e85f7f76519/patch/kernel/meson64-current/0302-arm64-dts-meson-set-dma-pool-to-896MB.patch
     
    This patch gives you the handle and value to get the required 896 MB CMA pool.
  13. Like
    TonyMac32 got a reaction from UniformBuffer in Change CMA memory allocation size   
    Thankfully the kernel does not need to be recompiled for this, only the device tree. See the post below; I believe it is still accurate as far as the device tree decompile/recompile method goes, and it can be done on the device (use the correct device tree for your board).
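     
    If it helps, the usual decompile/edit/recompile cycle looks roughly like this (the dtb name is the Le Potato one from the post below; adjust it for your board, and back up the original first):
     
    cd /boot/dtb/amlogic
    cp meson-gxl-s905x-libretech-cc.dtb meson-gxl-s905x-libretech-cc.dtb.bak
    dtc -I dtb -O dts -o cma.dts meson-gxl-s905x-libretech-cc.dtb
    # edit the CMA size cell under the "reserved-memory" node in cma.dts, then:
    dtc -I dts -O dtb -o meson-gxl-s905x-libretech-cc.dtb cma.dts
    reboot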
     
  14. Like
    TonyMac32 got a reaction from UniformBuffer in Change CMA memory allocation size   
    I just did a quick test on Le Potato: increasing CMA to 512 MB was very easy following the above instructions with the /boot/dtb/amlogic/meson-gxl-s905x-libretech-cc.dtb device tree. The property you want to change is right at the top of the file as a handle under "reserved-memory".
     
    Any value that is 0x400000 aligned should be valid, so you can experiment.
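     
    For reference, turning a desired pool size into the hex value the device tree expects is simple arithmetic; a quick sketch:
     
    printf '0x%x\n' $((512 * 1024 * 1024))   # 512 MB -> 0x20000000 (0x400000-aligned)
    printf '0x%x\n' $((896 * 1024 * 1024))   # 896 MB -> 0x38000000, matching the 896 MB patch linked earlier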
     
     
  15. Like
    TonyMac32 reacted to mboehmer in Odroid C2 on seafloor (part II)   
    We are down... and modules seem to be in good shape. Let's hope the powerup will work as expected.

  16. Like
    TonyMac32 reacted to mboehmer in New board of interest (JLD076)   
    Starts right now... here we are.
  17. Like
    TonyMac32 reacted to mboehmer in Odroid C2 on seafloor (part II)   
    Hi all,
     
    as a small status update on the seafloor business, here are some pictures of the new Odroid C2 based instruments which will be deployed in September/October in the northern Pacific.
    Ten modules with different functionality will be deployed, all based on a standard setup of an Odroid C2, a TRB3sc FPGA-based TDC DAQ system, one PADIWA preamp, and a modded media converter serving as a fully configurable mini switch.
     

     
    One of the modules carries several Hamamatsu mini spectrometers, as well as a camera, to observe bioluminescence.

     
    Another module targets muon tracking with SiPM-based readout:

     
    I have some more pictures of the more "fancy" PMT-based modules, but I don't want to flood this forum with too many pictures.
     
    To all of you: thanks for the support you gave us over the last year, and the discussions on specific topics!
     
    Deployment pictures will follow once the modules are in place on the seafloor, 2600 m deep in the Pacific (and operational, hopefully; this time we just have a GbE fiber, no serial port...)
     
    See you, Michael
     
     
  18. Like
    TonyMac32 reacted to AnonymousPi in 64x32 LED MATRIX with Allwinner H3 / Orange PI PC   
    A very hackish port of this Raspberry Pi RGB LED matrix library exists which works for Allwinner H3 devices. It works by bit-banging 14 GPIO pins simultaneously to achieve about 10 MHz throughput / a 100 fps refresh rate.
     
    Refer to here: https://github.com/mrfaptastic/opi-allwinner-h3-rgb-led-matrix
     
    I managed to get up and running easily using an Orange Pi. Refer to the README of the GitHub repository.
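     
    For anyone wanting to try it, the build follows the usual clone-and-make flow of the upstream rpi-rgb-led-matrix library (a sketch only; the exact make targets and demo binaries are described in the repository README, not reproduced here):
     
    git clone https://github.com/mrfaptastic/opi-allwinner-h3-rgb-led-matrix
    cd opi-allwinner-h3-rgb-led-matrix
    make   # assumed make-based build, as in the upstream library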

  19. Like
    TonyMac32 reacted to martinayotte in Le Potato max power draw on GPIO pins?   
    A simple MOSFET, such as the AO3402, should do the job ...
  20. Like
    TonyMac32 got a reaction from Werner in Le Potato max power draw on GPIO pins?   
    The GPIO pins are very low current, 3-5 mA if I remember correctly. I don't think you'll damage the SoC, but it certainly won't drive a fan if my memory is correct.

    Sent from my Pixel using Tapatalk


  21. Like
    TonyMac32 got a reaction from Myy in panfrost on RK3288 and GPU on 600MHz problems   
    https://github.com/rockchip-linux/kernel/blob/develop-4.4/arch/arm/boot/dts/rk3288cg-opp.dtsi
     
    That is the vendor OPP table for the RK3288-C (Tinker/Chromebook version).
     
    If it isn't a "C", then according to Rockchip it wouldn't have a 600 MHz Mali or a CPU faster than 1.6 GHz (I don't know about the "W").
     
    @Myy  Wow, that patch is garbage, I didn't even notice that.  It's worse than you are summarizing:
     
    "NPLL is necessary for 500 MHz, ancient krusty kernel suxors and doesn't have this OPP, so in case someone hypothetically someday maybe in theory thinks about possibly re-purposing the NPLL like the ancient kernel did, make mainline suck too for everyone."
     
    Yikes.
     
    I don't see any reason not to reintroduce that OPP for Armbian use; as far as I know this purely hypothetical situation has not taken place.
  22. Like
    TonyMac32 reacted to martinayotte in Armbian v20.08 (Caple) Planning Thread   
    2:00 PM GMT makes it 10:00 AM EDT for me; I will be drinking my second cup of coffee ...
  23. Like
    TonyMac32 got a reaction from guidol in [Info] FriendlyARM PCM5102A-Hat with NanoPi Neo under mainline 4.x.x and dev 5.x.x   
    @guidol tested on a Libre Computer Tritium with a Pimoroni pHAT DAC (PCM5102A); the only changes needed were:
     
    enable i2s@1c22400   (i2s1)
     
    change overlay to use i2s1 instead of i2s0.
     

     
     
  24. Like
    TonyMac32 reacted to ning in Linux kernel 5.7   
    you can copy patches from: https://github.com/khadas/fenix/tree/master/packages/linux-mainline/patches/5.7
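     
    In practice that means cloning the fenix repo and dropping the patch files into the Armbian build tree's userpatches directory (the target directory below is an assumption; match it to the kernel family and branch you build):
     
    git clone https://github.com/khadas/fenix
    mkdir -p userpatches/kernel/meson64-dev   # example target; adjust family/branch
    cp fenix/packages/linux-mainline/patches/5.7/*.patch userpatches/kernel/meson64-dev/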
     
  25. Like
    TonyMac32 got a reaction from balbes150 in Single Armbian image for RK + AML + AW (armhf ARMv7)   
    OK.  Boots, gets to desktop, plays the test video, sound via HDMI works, my RTL8821CU wifi was properly recognized. 
     
    Only thing I see is no fan control, and the LED is just blinking at 1 Hz for no known reason.
     
    LibreELEC: same, boots, audio, etc.  [edit]  it did not recognize the wifi adapter.
     
    Now to dig up my Ugoos and stuff this on it. 
     
     