
atomic77

Reputation Activity

  1. Like
    atomic77 reacted to DoubleHP in OrangePi Zero LTS vs DHT22   
    In case the other website goes down, I will duplicate their original code here. Don't forget to adjust the pin number for your board.
     
    My WiringPi lib is probably https://github.com/zhaolei/WiringOP (not 100% certain).
     
    /*
     * dht.c:
     * read temperature and humidity from DHT11 or DHT22 sensor
     */
    #include <wiringPi.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    #define MAX_TIMINGS 85
    #define DHT_PIN     3  /* GPIO-22 */

    int data[5] = { 0, 0, 0, 0, 0 };

    void read_dht_data()
    {
        uint8_t laststate = HIGH;
        uint8_t counter   = 0;
        uint8_t j         = 0, i;

        data[0] = data[1] = data[2] = data[3] = data[4] = 0;

        /* pull pin down for 18 milliseconds */
        pinMode( DHT_PIN, OUTPUT );
        digitalWrite( DHT_PIN, LOW );
        delay( 18 );

        /* prepare to read the pin */
        pinMode( DHT_PIN, INPUT );

        /* detect change and read data */
        for ( i = 0; i < MAX_TIMINGS; i++ )
        {
            counter = 0;
            while ( digitalRead( DHT_PIN ) == laststate )
            {
                counter++;
                delayMicroseconds( 1 );
                if ( counter == 255 )
                {
                    break;
                }
            }
            laststate = digitalRead( DHT_PIN );

            if ( counter == 255 )
                break;

            /* ignore first 3 transitions */
            if ( (i >= 4) && (i % 2 == 0) )
            {
                /* shove each bit into the storage bytes */
                data[j / 8] <<= 1;
                if ( counter > 16 )
                    data[j / 8] |= 1;
                j++;
            }
        }

        /*
         * check we read 40 bits (8bit x 5) + verify checksum in the last byte
         * print it out if data is good
         */
        if ( (j >= 40) &&
             (data[4] == ( (data[0] + data[1] + data[2] + data[3]) & 0xFF) ) )
        {
            float h = (float)((data[0] << 8) + data[1]) / 10;
            if ( h > 100 )
            {
                h = data[0]; // for DHT11
            }
            float c = (float)(((data[2] & 0x7F) << 8) + data[3]) / 10;
            if ( c > 125 )
            {
                c = data[2]; // for DHT11
            }
            if ( data[2] & 0x80 )
            {
                c = -c;
            }
            float f = c * 1.8f + 32;
            printf( "Humidity = %.1f %% Temperature = %.1f *C (%.1f *F)\n", h, c, f );
        } else {
            printf( "Data not good, skip\n" );
        }
    }

    int main( void )
    {
        printf( "Raspberry Pi DHT11/DHT22 temperature/humidity test\n" );

        if ( wiringPiSetup() == -1 )
            exit( 1 );

        while ( 1 )
        {
            read_dht_data();
            delay( 2000 ); /* wait 2 seconds before next read */
        }

        return(0);
    }
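    For reference, assuming the WiringOP library above is installed system-wide, compiling and running should look something like this (GPIO access typically needs root):
    gcc -o dht dht.c -lwiringPi
    sudo ./dht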
  2. Like
    atomic77 reacted to rdeyes in OV5640 device tree overlay for OrangePi One H3   
    Hello, @rreignier! Thank you very much for your advice! It worked!
     
    I've completed your overlay with @gsumner's regulator nodes (I inserted my board's pins), so it loads on the OrangePi PC:
    &{/} {
        reg_vdd_1v5_csi: vdd-1v5-csi {
            compatible = "regulator-fixed";
            regulator-name = "vdd1v5-csi";
            regulator-min-microvolt = <1500000>;
            regulator-max-microvolt = <1500000>;
            gpio = <&pio 6 13 0>; /* PG13 */
            enable-active-high;
            regulator-boot-on;
            regulator-always-on;
        };
        reg_vcc_csi: vcc-csi {
            compatible = "regulator-fixed";
            regulator-name = "vcc-csi";
            regulator-min-microvolt = <2800000>;
            regulator-max-microvolt = <2800000>;
            gpio = <&pio 6 11 0>; /* PG11 */
            enable-active-high;
            regulator-boot-on;
            regulator-always-on;
        };
        reg_vcc_af_csi: vcc-af-csi {
            compatible = "regulator-fixed";
            regulator-name = "vcc-af-csi";
            regulator-min-microvolt = <2800000>;
            regulator-max-microvolt = <2800000>;
            gpio = <&pio 0 17 0>; /* PA17 */
            enable-active-high;
            regulator-boot-on;
            regulator-always-on;
        };
    };
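    If you save the snippet above as e.g. ov5640-regulators.dts (the filename is arbitrary), Armbian's armbian-add-overlay helper should be able to compile and activate it as a user overlay:
    sudo armbian-add-overlay ov5640-regulators.dts
    sudo reboot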
    And the final piece that was missing is the forked version of fswebcam you mentioned today. (The regular version from the apt repo gave me a "black square".)
     
    So, although my C++ app using ioctls may still not work (maybe I shall try rolling back to kernel 5.8 to get it going), I am at least now capable of taking pictures with scripts! (And I'll definitely try OpenCV later; I have not installed it yet.)
     
    So, big, BIG THANK YOU!!!
     
    P.S. I've noticed one small weird thing:
    The overlay I posted in my previous comment now fails with 'ov5640_check_chip_id: failed to read chip identifier', as if it failed to power the camera on. And, I swear, it did not before I tried your solution. (I never used both overlays simultaneously, so I think something else gets modified... maybe by the forked fswebcam.)
  3. Like
    atomic77 reacted to Igor in Orange Pi R1 Plus (Orange Pi R1+) support?   
    I think we have to get rid of our NanoPi R2S patch, since I would assume the mainline version is just fine and cleaned up. Then adjust the R1-related commits accordingly. Also take one quick look into u-boot to check whether the upstream version is good enough to boot the board properly, and remove the patches there too.

    And it should work.
     
    https://armbian.atlassian.net/browse/AR-573
  4. Like
    atomic77 reacted to Hammy in Orange pi zero kernel oops with zram enabled   
    I guess this is related to:
     
     
  5. Like
    atomic77 reacted to BarnA in zram vs swap   
    Thanks for the really useful discussion here about zram.  I just wanted to add the symptoms I observed with Armbian (Stretch, no GUI) on a 512MB Orange Pi One, which I'm pretty sure (but cannot prove) were due to out-of-memory conditions, in case this report helps others with the same symptoms.
     
    Normal service with the Opi1/Armbian for me is indefinite uptime (>>6 months).  I was experiencing unexpected crashes after 1-2 weeks, and as I run a UPS and am using an M2.SATA SSD-to-USB adapter, it's unlikely that power or SD card issues are the root cause.
     
    When observing the crashes, roughly 80% seemed to occur 1-2 minutes after the apt-daily timer fired (as reported by 'systemctl list-timers'); the remainder seemed to occur at random.  As I've got an SSD, I decided to add 512MB of swap on top of the automatically configured zram.  I set the priority of the SSD swap below that of zram and vm.swappiness = 60 (suggestions as to the correct setting in these circumstances much appreciated though, I haven't experimented).
     
    The system has now been up for approaching 1 month without further outage.  swapon -s shows that just under 20MB of swap on the SSD is in use; zram is used heavily.
    Filename      Type       Size    Used    Priority
    /swapfile1    file       524284  18308   2
    /dev/zram1    partition  252000  198692  5
     
    I think that for users with an SSD like me, it may be useful to supplement zram with normal swap on devices with a small amount of memory, such as the Opi1.  Comparing the Armbian Buster memory footprint to Stretch for my use case, I wonder if this may become more important going forward?
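    For anyone wanting to replicate this, a minimal sketch of adding a lower-priority SSD swapfile alongside zram (the path, size and priority mirror my figures above, but treat them as starting points for your own system):
    # Create and enable a 512MB swapfile on the SSD
    sudo fallocate -l 512M /swapfile1
    sudo chmod 600 /swapfile1
    sudo mkswap /swapfile1
    sudo swapon -p 2 /swapfile1   # priority 2, below zram's 5

    # Persist across reboots via /etc/fstab:
    #   /swapfile1 none swap sw,pri=2 0 0

    # Swappiness (add to /etc/sysctl.conf to persist)
    sudo sysctl vm.swappiness=60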
     
  6. Like
    atomic77 got a reaction from TonyMac32 in Self-contained Tensorflow object detector on Orange pi lite + GC2035   
    I got my hands on a "Set 9" Orange Pi Lite + GC2035 camera a while back and I've finally been able to put together a self-contained object detection device using Tensorflow, without sending any image data outside for processing.

    Basically, it's a Python Flask application that captures frames from the camera using a GStreamer pipeline. It runs them through a Tensorflow object detection model, spits out the same frame with extra metadata about the objects it found, and renders a box around them. Using all four cores of the H2 it can do about 2-3 fps. The app keeps track of the count of all object types it has seen and exposes the metrics in Prometheus format, for easy creation of graphs of what it sees over time with Grafana.
     


    I'll explain some of the more interesting aspects of how I got this to work, in case anyone else wants to get some use out of this very inexpensive hardware. I am grateful to the many posts on this forum that helped me along the way!

    Use a 3.4 kernel with custom GC2035 driver

    Don't bother with anything newer - the GC2035 was hopeless on any of the newer Armbian builds I tried. The driver available at https://github.com/avafinger/gc2035.git provided far better image quality. After installing the updated GC2035 driver, I run the following to get the camera up and running:
     
    sudo sunxi-pio -m "PG11<1><0><1><1>"
    sudo modprobe gc2035 hres=1
    sudo modprobe vfe_v4l2
    Install Tensorflow lite runtime
     
    Google provides a Tensorflow Lite runtime as a binary wheel built for python 3.5 armv7. When pip installing, expect it to take 20 minutes or so, as it will need to compile numpy (the apt repo version isn't recent enough):
     
    wget https://github.com/google-coral/pycoral/releases/download/release-frogfish/tflite_runtime-2.5.0-cp35-cp35m-linux_armv7l.whl
    sudo -H pip3 install tflite_runtime-2.5.0-cp35-cp35m-linux_armv7l.whl

    Build opencv for python 3.5 bindings

    This was something I tried everything I could to avoid, but I just could not get the colour conversion from the GC2035's YUV format to an RGB image with anything else I found online, so I was dependent on a single colour-conversion utility function.
     
    To build version 3.4.12 for use with python (grab lunch - it takes about 1.5 hours :-O):

     
    cmake -DCMAKE_INSTALL_PREFIX=/home/atomic/local -DSOFTFP=ON \
        -DBUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_opencv_python2=0 \
        -D BUILD_opencv_python3=1 -D WITH_GSTREAMER=ON \
        -D PYTHON3_INCLUDE_PATH=/usr/include/python3.5 ..
    make -j 4
    make install
    # ~/local/lib/python3.5/dist-packages should now contain the cv2 shared library
    export PYTHONPATH=/home/atomic/local/lib/python3.5/dist-packages
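    Note the trailing ".." in the cmake invocation - it assumes you are running from a build directory inside the OpenCV source tree. A sketch of the steps leading up to it (the archive URL is the standard GitHub tag tarball):
    wget -O opencv-3.4.12.tar.gz https://github.com/opencv/opencv/archive/3.4.12.tar.gz
    tar xf opencv-3.4.12.tar.gz
    cd opencv-3.4.12
    mkdir build && cd build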
     
    Build gstreamer plugin for Cedar H264 encoder
     
    This is required to get a working gstreamer pipeline for the video feed:
    git clone https://github.com/gtalusan/gst-plugin-cedar
    cd gst-plugin-cedar
    ./autogen.sh
    sudo make install
    # When trying against an Orange Pi PC I had to copy into ~/.local to get gstreamer to recognise it
    cp /usr/local/lib/gstreamer-1.0/libgst* ~/.local/share/gstreamer-1.0/plugins/
    # Confirm that the plugin is installed:
    gst-inspect-1.0 cedar_h264enc
    Processing images
     
    The full app source is on GitHub, but the more interesting parts, which took me some time to figure out, were about getting python to cooperate with gstreamer.
     
    Frames from the camera arrive in python at the end of the pipeline as an appsink. The GStreamer pipeline I configured via python was:
     
    src = Gst.ElementFactory.make("v4l2src")
    src.set_property("device", "/dev/video0")
    src.set_property("do-timestamp", 1)
    filt = Gst.ElementFactory.make("capsfilter")
    filt.set_property("caps", Gst.caps_from_string("video/x-raw,format=NV12,width=800,height=600,framerate=12/1"))
    p1 = Gst.ElementFactory.make("cedar_h264enc")
    p2 = Gst.ElementFactory.make("h264parse")
    p3 = Gst.ElementFactory.make("rtph264pay")
    p3.set_property("config-interval", 1)
    p3.set_property("pt", 96)
    p4 = Gst.ElementFactory.make("rtph264depay")
    p5 = Gst.ElementFactory.make("avdec_h264")
    sink = Gst.ElementFactory.make("appsink", "sink")
    pipeline_elements = [src, filt, p1, p2, p3, p4, p5, sink]
    sink.set_property("max-buffers", 10)
    sink.set_property("emit-signals", True)
    sink.set_property("sync", False)
    sink.connect("new-sample", on_buffer, sink)
    This pipeline definition causes a callback on_buffer to be called every time a frame is emitted from the camera:

     
    def on_buffer(sink: GstApp.AppSink, data: typing.Any) -> Gst.FlowReturn:
        # Sample will be a 800x900 byte array in a very frustrating YUV420 format
        sample = sink.emit("pull-sample")  # Gst.Sample
        # ... conversion to numpy array
        # rgb is now in a format that Pillow can easily work with
        # These two calls are what you compiled opencv for 1.5 hours for :-D
        rgb = cv2.cvtColor(img_arr, cv2.COLOR_YUV2BGR_I420)
        rgb = cv2.cvtColor(rgb, cv2.COLOR_BGR2RGB)
    Once you have a nice Pillow RGB image, it's easy to pass it into a Tensorflow model, and there is tons of material on the web about how to do that. For fast but not so accurate detection, I used the ssdlite_mobilenet_v2_coco pretrained model, which can handle about 0.5 frames per second per core of the Allwinner H2 CPU.
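    For a rough idea of what that looks like, here is a minimal sketch of running the rgb frame from the callback above through the TFLite interpreter (the model filename, input size and output ordering are assumptions based on the standard SSD TFLite models; the actual code is in the repo linked below):
    import numpy as np
    from PIL import Image
    from tflite_runtime.interpreter import Interpreter

    # Hypothetical model filename; ssdlite_mobilenet_v2_coco variants
    # typically expect a 300x300 uint8 RGB input tensor
    interpreter = Interpreter(model_path="ssdlite_mobilenet_v2_coco.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]

    img = Image.fromarray(rgb).resize((300, 300))
    interpreter.set_tensor(inp["index"], np.expand_dims(np.asarray(img, dtype=np.uint8), 0))
    interpreter.invoke()

    # Postprocessed SSD outputs: normalised boxes, class ids, scores
    out = interpreter.get_output_details()
    boxes = interpreter.get_tensor(out[0]["index"])
    classes = interpreter.get_tensor(out[1]["index"])
    scores = interpreter.get_tensor(out[2]["index"])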
     
    There are some problems I still have to work out. Occasionally the video stream stalls, and I haven't figured out how to recover from this without restarting the app completely. The way frame data is passed around Tensorflow worker processes is probably not ideal and needs to be cleaned up, but it does allow me to get much better throughput using all four cores.
     
    For more details, including a detailed build script, the full source is here:
    https://github.com/atomic77/opilite-object-detect
  7. Like
    atomic77 got a reaction from NicoD in Self-contained Tensorflow object detector on Orange pi lite + GC2035   
    (Same post as in item 6 above.)
  8. Like
    atomic77 got a reaction from lanefu in Self-contained Tensorflow object detector on Orange pi lite + GC2035   
    (Same post as in item 6 above.)
  9. Like
    atomic77 got a reaction from TRS-80 in Self-contained Tensorflow object detector on Orange pi lite + GC2035   
    (Same post as in item 6 above.)
  10. Like
    atomic77 reacted to bricriu in Cloud-Init   
    In case anyone else is looking to do this, I was able to build an Ubuntu Focal server image with cloud-init as follows (after checking https://docs.armbian.com/Developer-Guide_User-Configurations/):
     
    Modify userpatches/lib.config to install the cloud-init package:
    PACKAGE_LIST_ADDITIONAL="$PACKAGE_LIST_ADDITIONAL cloud-init"  
    I then added my cloud-init files (user-data and meta-data) to the userpatches/overlay/ directory (I put them in a "cloud-init" folder) and added this to userpatches/customize-image.sh:
    cp -r /tmp/overlay/cloud-init /boot/cloud-init
    echo "extraargs=ds=nocloud;s=/boot/cloud-init/" >> /boot/armbianEnv.txt
    Some extra customization is probably warranted (I needed to make some other unrelated changes), but this should be enough to enable cloud-init, and you can modify the arguments written to /boot/armbianEnv.txt if you need a different datasource or other settings.
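    For reference, a minimal sketch of creating the overlay files themselves (the user-data contents here are purely illustrative; see the cloud-init NoCloud documentation for the full schema):
    mkdir -p userpatches/overlay/cloud-init
    cat > userpatches/overlay/cloud-init/user-data << 'EOF'
    #cloud-config
    hostname: my-armbian-box
    EOF
    # NoCloud expects a meta-data file to be present, even if empty
    touch userpatches/overlay/cloud-init/meta-data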
  11. Like
    atomic77 reacted to BennySt in Openmediavault 3.x customize-image.sh   
    Hi,
     
    I'm the new one here. I'm German and bought an OrangePi PC+ from AliExpress last month without a clue what to do with it, but it was cheap ;-) And it was perfect for my new OMV.
     
    Just installing OMV on top of Armbian/Jessie was not enough because of missing quota kernel options. While setting up a development environment, I found the excellent Armbian build scripts and thought: why not make an OMV image with customize-image.sh?
     
    And here it is (quick and dirty, v0.1). Just copy it, together with a modified kernel config, under userpatches.
    It should work with every supported Armbian board, as long as the board works with Debian Jessie, because it installs OMV 3.x.
    I compiled my image with the legacy kernel, but it should also work with mainline.
     
    If wanted, I can build some images for testing.
    Please give some feedback or criticism; I hope it's helpful for someone.
     
    Sorry for attaching this as code, but I can't upload attachments?


    #!/bin/bash

    #################################################################################################################################
    ## customize-image.sh - for installing openmediavault 3
    ##
    ## installs omv, omv-extras, omv-flashmemory
    ## and making some changes to the System
    ## started in chroot from /tmp
    ##
    ## arguments: $RELEASE $FAMILY $BOARD $BUILD_DESKTOP
    ##
    ## Author : Benny St <Benny_Stark@live.de>
    ## Version : 0.1
    ##
    ## Version 0.1 - first Release
    ##
    #################################################################################################################################
    RELEASE=$1
    FAMILY=$2
    BOARD=$3
    BUILD_DESKTOP=$4

    #Modified display alert from lib/general.sh
    display_alert()
    #--------------------------------------------------------------------------------------------------------------------------------
    # Let's have unique way of displaying alerts
    #--------------------------------------------------------------------------------------------------------------------------------
    {
    # log function parameters to install.log
    #[[ -n $DEST ]] && echo "Displaying message: $@" >> $DEST/debug/output.log

    local tmp=""
    [[ -n $2 ]] && tmp="[\e[0;33m $2 \x1B[0m]"

    case $3 in
    err)
    echo -e "[\e[0;31m error \x1B[0m] $1 $tmp"
    ;;

    wrn)
    echo -e "[\e[0;35m warn \x1B[0m] $1 $tmp"
    ;;

    ext)
    echo -e "[\e[0;32m o.k. \x1B[0m] \e[1;32m$1\x1B[0m $tmp"
    ;;

    info)
    echo -e "[\e[0;32m o.k. \x1B[0m] $1 $tmp"
    ;;

    *)
    echo -e "[\e[0;32m .... \x1B[0m] $1 $tmp"
    ;;
    esac
    }


    case $RELEASE in
    wheezy)
    # your code here
    ;;
    jessie)
    #change root passs first
    #else we get some configure-errors
    display_alert "Change Root PW" "custom-image.sh" "info"
    echo root:openmediavault|chpasswd

    display_alert "Change /etc/hostname" "custom-image.sh" "info"
    echo "openmediavault" > /etc/hostname ## works after reboot

    #generate locales
    #they are not there
    display_alert "Generate Locals and set them" "custom-image.sh" "info"
    locale-gen "en_US.UTF-8"
    locale-gen "C"
    export LANG=C
    export LC_ALL="en_US.UTF-8"

    #Unattended apt-get
    export DEBIAN_FRONTEND=noninteractive

    #Add OMV source.list and Update System
    display_alert "Adding OMV-Repo erasmus and update" "custom-image.sh" "info"
    cat > /etc/apt/sources.list.d/openmediavault.list << EOF
    deb http://packages.openmediavault.org/public erasmus main
    ## Uncomment the following line to add software from the proposed repository.
    # deb http://packages.openmediavault.org/public erasmus-proposed main

    ## This software is not part of OpenMediaVault, but is offered by third-party
    ## developers as a service to OpenMediaVault users.
    # deb http://packages.openmediavault.org/public erasmus partner
    EOF
    apt-get update

    # OMV Key
    display_alert "Install OMV Keys" "custom-image.sh" "info"
    #wget -O - packages.openmediavault.org/public/archive.key | apt-key add -
    apt-get --yes --force-yes --allow-unauthenticated install openmediavault-keyring
    # OMV Plugin developer Key
    apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 7AA630A1EDEE7D73


    #install debconf-utils for postfix configuration
    display_alert "Install debconf-utils" "custom-image.sh" "info"
    apt-get --yes --force-yes --allow-unauthenticated --fix-missing --no-install-recommends -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install debconf-utils

    #install postfix
    #Postfix configuration
    display_alert "Install postfix and configure it - No configuration" "custom-image.sh" "info"
    debconf-set-selections <<< "postfix postfix/main_mailer_type select No configuration"
    apt-get --yes --force-yes --allow-unauthenticated --fix-missing --no-install-recommends -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install postfix

    #install OMV
    #--allow-unauthenticated because openmediavault-keyring doesn't contain all keys???
    display_alert "Install OMV" "custom-image.sh" "info"
    apt-get --yes --force-yes --allow-unauthenticated --fix-missing --no-install-recommends -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install openmediavault

    #install OMV extras
    display_alert "Download and install OMV Keys" "custom-image.sh" "info"
    wget http://omv-extras.org/openmediavault-omvextrasorg_latest_all3.deb -O /tmp/omvextras3.deb
    dpkg -i /tmp/omvextras3.deb
    rm -f /tmp/omvextras3.deb
    /usr/sbin/omv-update

    #install openmediavault-flashmemory
    display_alert "Install openmediavault-flashmemory" "custom-image.sh" "info"
    apt-get --yes --force-yes --fix-missing --no-install-recommends install openmediavault-flashmemory
    sed -i '/<flashmemory>/,/<\/flashmemory>/ s/<enable>0/<enable>1/' /etc/openmediavault/config.xml
    /usr/sbin/omv-mkconf flashmemory

    # Tidy up
    display_alert "apt-get autoremove / autoclean" "custom-image.sh" "info"
    apt-get autoremove
    apt-get autoclean

    #remove first-login Script
    display_alert "Remove first-login Script" "custom-image.sh" "info"
    rm /root/.not_logged_in_yet

    #remove some services
    #change to: systemctl enable <servicename>.service
    # /etc/systemd/system, /run/systemd/system, /usr/local/lib/systemd/system, and /usr/lib/systemd/system are four of those directories.
    display_alert "Disable Services - tftpd-hpa, proftpd, nfs-common, smbd, snmpd, ssh"
    #tftpd-hpa.service
    systemctl disable tftpd-hpa
    #proftpd.service
    systemctl disable proftpd
    #nfs-kernel-server.service
    #nfs-common.service
    systemctl disable nfs-kernel-server
    systemctl disable nfs-common
    #snmpd.service
    systemctl disable snmpd
    #samba.service
    systemctl disable nmbd
    systemctl disable samba-ad-dc
    systemctl disable smbd
    #ssh.service
    #systemctl disable ssh
    display_alert "SSH enable" "custom-image.sh" "info"
    systemctl enable ssh
    sed -i '/<ssh>/,/<\/ssh>/ s/<enable>0/<enable>1/' /etc/openmediavault/config.xml

    #FIX TFTPD ipv4?
    display_alert "tftpd-hpa ipv4 startup fix" "custom-image.sh" "info"
    sed -i 's/--secure/--secure --ipv4/' /etc/default/tftpd-hpa

    #adding omv-initsystem to firststart
    display_alert "adding omv-initsystem to firstrun" "custom-image.sh" "info"
    echo "/usr/sbin/omv-initsystem" >> /etc/init.d/firstrun


    #debug shell
    #/bin/bash
    ;;
    trusty)
    # your code here
    ;;
    xenial)
    # your code here
    ;;
    esac


  12. Like
    atomic77 reacted to zador.blood.stained in Reboot command   
    @immutability
    Try looking at the output of
    systemd-analyze blame
    to check whether any userspace service takes a long time to start. You may also want to run
    systemd-analyze plot > plot.svg
    and open plot.svg on another machine to see a graphical representation of the startup times of the different services.
     
    I would guess that your delay is related either to waiting for a network connection or to slow SD card read/write speed.
  13. Like
    atomic77 got a reaction from gounthar in Raw H264 encoding gstreamer   
    I know this thread is over a year old, but I was finally able to get decent camera output out of the GC2035 thanks to the cedar H264 gstreamer plugin linked here!
     
    I'm no gstreamer expert, but what I've been able to figure out is that after the cedar_h264enc stage of the pipeline, you need to add an h264parse stage, which you can then follow with something like matroskamux if you want to write an .mkv file, or rtph264pay if you want to send the data over the network via RTP. E.g.:
     
    gst-launch-1.0 -ve v4l2src device=/dev/video0 \
        ! video/x-raw,format=NV12,width=800,height=600,framerate=15/1 \
        ! cedar_h264enc ! h264parse ! queue \
        ! matroskamux ! filesink location=test.mkv
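    And, following the same pattern with rtph264pay (a sketch I haven't tested here; host and port are placeholders), streaming over the network might look like:
    gst-launch-1.0 -ve v4l2src device=/dev/video0 \
        ! video/x-raw,format=NV12,width=800,height=600,framerate=15/1 \
        ! cedar_h264enc ! h264parse ! queue \
        ! rtph264pay config-interval=1 pt=96 \
        ! udpsink host=192.168.1.100 port=5000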