atomic77

Members
  • Posts

    14
  • Joined

  • Last visited

Profile Information

  • Location
    Toronto

Contact Methods

  • Github
    https://github.com/atomic77

  1. Thank you for sharing this! I have a DHT22 that I've been trying to get working with an Orange Pi Lite, and I've had terrible results with all of the Python-based libraries I've tried. With this I'm getting about 25-30% of the readings, polling every 2 s, which is good enough for my purposes. FWIW, my OPi Lite is showing about 5.2 V and 3.8 V on the 5V and 3.3V pins.
  2. I tried a build of 5.14 and made a couple of adjustments in u-boot-rockchip64-edge/add-board-orangepi-r1plus to include the rk3328-nanopi-r2s.dtb file instead of rk3328-nanopi-r2-rev00.dtb, but the board doesn't come up and shows the same issue reported originally (the change I attempted is sketched below). What I don't quite get is why the NanoPi R2S is the basis for the R1 Plus image, when its device tree seems to be quite different from that of the Rock Pi E, which works pretty well without any adjustments at all?
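For reference, the adjustment amounted to swapping the DTB name inside that patch; a rough sketch of the kind of hunk involved, paraphrased from memory and assuming the patch sets U-Boot's default FDT (not an exact copy of the file):

    -CONFIG_DEFAULT_FDT_FILE="rockchip/rk3328-nanopi-r2-rev00.dtb"
    +CONFIG_DEFAULT_FDT_FILE="rockchip/rk3328-nanopi-r2s.dtb"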
  3. I finally had some time this weekend to dig into this. It's my first exposure to the Armbian kernel build process and device tree files, so please forgive any stupid questions! I made a first attempt at the changes on my GitHub fork. I was not able to get the kernel built if I completely removed the nanopi-r2s patch, as there were some extra dependencies on rockchip-ddr.h. So I started by removing the rev00/rev20 DTS files that seemed to conflict with mainline, and tried to find any references to the old files and change them. What isn't clear to me is whether I should also be rewriting the u-boot patch files based on the upstream kernel version. Are those copied completely from the source tree? When I try to boot, I get an error - is there a way I can get more details on what has gone wrong? I hope I'm not too far off course with my changes; any help is appreciated!
  4. I recently got my hands on an R1+ and was able to get it booted with a RockPi-E Buster image, though I get the same error when I try to use any of the orangepi-r1plus images. Since I have the hardware, I'd be open to making the changes needed to get this particular board working in a PR to the armbian/build repo. Can anyone give me pointers on what might need to change in the orangepi-r1plus image to get there?
  5. Did anyone make any progress on this? I'm also trying to connect a Pi One Plus to a newer 4K-enabled TV and none of the tweaks to boot.cmd or armbianEnv.txt I've found on the forum seem to be working. Everything is fine on my 1920x1080 monitor over HDMI.
  6. Confirmed that I can't reproduce this on Stretch 5.4.45. I'm leaning towards just disabling zram on 5.8 kernels - as far as I can tell, the main thing I need to watch out for is not filling up 50 MB of logs within the 15-minute interval at which armbian-truncate-logs runs (now that the log area isn't compressed)?
  7. Awesome, this does look related. I'm going to check a fresh image of Armbian 20.05 with stretch 5.4.45 kernel and confirm that this doesn't occur. Thanks!!
  8. Hi all, I've been doing some stress tests to trigger the watchdog on an Orange Pi Zero, and I discovered that I can reliably reproduce a kernel oops with a simple fork bomb on a fresh install of the latest Armbian Buster image (Armbian_20.11_Orangepizero_buster_current_5.8.16.img.xz). The command, run as root, is:

    :(){ :|: & };:

I noticed zs_malloc in the stack trace, and interestingly enough, it does not happen if I disable zram in /etc/default/armbian-zram-config. I get a flood of errors like the below, and the device eventually recovers:

    -bash: fork: Resource temporarily unavailable
    -bash: fork: retry: Resource temporarily unavailable
    -bash: fork: retry: Resource temporarily unavailable
    -bash: fork: retry: Resource temporarily unavailable
    -bash: fork: Resource temporarily unavailable

I know the obvious answer is "don't do that". My memory requirements for this Pi Zero are modest, but stability is much more important. Can anyone suggest a good reason not to disable zram after seeing this? I also wonder where the right place to report this issue would be. Thanks, Alex
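For anyone following along, disabling zram was a one-line change for me; a sketch, assuming the file uses the ENABLED switch I remember it shipping with (verify against the copy on your own image):

    # /etc/default/armbian-zram-config
    ENABLED=false    # was: true

    # then restart the service (or just reboot):
    sudo systemctl restart armbian-zram-config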
  9. I got my hands on a "Set 9" Orange Pi Lite + GC2035 camera a while back, and I've finally been able to put together a self-contained object detection device using Tensorflow, without sending any image data outside for processing. Basically, it's a Python Flask application that captures frames from the camera using a GStreamer pipeline, runs them through a Tensorflow object detection model, and spits out the same frame with extra metadata about the objects it found, rendering a box around each. Using all four cores of the H2 it can do about 2-3 fps. The app keeps track of the count of all object types it has seen and exposes the metrics in Prometheus format, for easy creation of graphs of what it sees over time with Grafana.

I'll explain some of the more interesting aspects of how I got this to work, in case anyone else wants to try to get some use out of this very inexpensive hardware. I am grateful to the many posts on this forum that helped me along the way!

Use a 3.4 kernel with the custom GC2035 driver

Don't bother with anything newer - the GC2035 was hopeless on any newer build of Armbian I tried. The driver available at https://github.com/avafinger/gc2035.git provided far better image quality. After installing the updated GC2035, I run the following to get the camera up and running:

    sudo sunxi-pio -m "PG11<1><0><1><1>"
    sudo modprobe gc2035 hres=1
    sudo modprobe vfe_v4l2

Install the Tensorflow Lite runtime

Google provides a Tensorflow runtime as a binary wheel built for Python 3.5 on armv7. When pip-installing, expect it to take 20 minutes or so, as it will need to compile numpy (the apt repo version isn't recent enough):

    wget https://github.com/google-coral/pycoral/releases/download/release-frogfish/tflite_runtime-2.5.0-cp35-cp35m-linux_armv7l.whl
    sudo -H pip3 install tflite_runtime-2.5.0-cp35-cp35m-linux_armv7l.whl

Build OpenCV for the Python 3.5 bindings

This was something I tried everything I could to avoid, but I just could not get the colour conversion from the YUV format of the GC2035 to an RGB image using anything else I found online, so I was dependent on a single colour-conversion utility function. To build the 3.4.12 version for use with Python (grab lunch - takes about 1.5 hours :-O ):

    cmake -DCMAKE_INSTALL_PREFIX=/home/atomic/local -DSOFTFP=ON \
      -DBUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_opencv_python2=0 \
      -D BUILD_opencv_python3=1 -D WITH_GSTREAMER=ON \
      -D PYTHON3_INCLUDE_PATH=/usr/include/python3.5 ..
    make -j 4
    make install
    # ~/local/lib/python3.5/dist-packages should now have the cv2 shlib
    export PYTHONPATH=/home/atomic/local/lib/python3.5/dist-packages

Build the GStreamer plugin for the Cedar H264 encoder

This is required to get a working GStreamer pipeline for the video feed:

    git clone https://github.com/gtalusan/gst-plugin-cedar
    cd gst-plugin-cedar
    ./autogen.sh
    sudo make install
    # When trying this against an Orange Pi PC, I had to copy the plugin into
    # ~/.local to get GStreamer to recognise it:
    cp /usr/local/lib/gstreamer-1.0/libgst* ~/.local/share/gstreamer-1.0/plugins/
    # Confirm the plugin is installed:
    gst-inspect-1.0 cedar_h264enc

Processing images

The full app source is on GitHub, but the more interesting parts, which took me some time to figure out, were about getting Python to cooperate with GStreamer. Frames from the camera arrive in Python at the end of the pipeline as an appsink.
The GStreamer pipeline I configured via Python was:

    src = Gst.ElementFactory.make("v4l2src")
    src.set_property("device", "/dev/video0")
    src.set_property("do-timestamp", 1)
    filt = Gst.ElementFactory.make("capsfilter")
    filt.set_property("caps", Gst.caps_from_string(
        "video/x-raw,format=NV12,width=800,height=600,framerate=12/1"))
    p1 = Gst.ElementFactory.make("cedar_h264enc")
    p2 = Gst.ElementFactory.make("h264parse")
    p3 = Gst.ElementFactory.make("rtph264pay")
    p3.set_property("config-interval", 1)
    p3.set_property("pt", 96)
    p4 = Gst.ElementFactory.make("rtph264depay")
    p5 = Gst.ElementFactory.make("avdec_h264")
    sink = Gst.ElementFactory.make("appsink", "sink")
    pipeline_elements = [src, filt, p1, p2, p3, p4, p5, sink]
    sink.set_property("max-buffers", 10)
    sink.set_property("emit-signals", True)
    sink.set_property("sync", False)
    sink.connect("new-sample", on_buffer, sink)

This pipeline definition causes the callback on_buffer to be called every time a frame is emitted from the camera:

    def on_buffer(sink: GstApp.AppSink, data: typing.Any) -> Gst.FlowReturn:
        # Sample will be an 800x900 byte array in a very frustrating YUV420 format
        sample = sink.emit("pull-sample")  # Gst.Sample
        # ... conversion to numpy array ...
        # rgb is now in a format that Pillow can easily work with
        # These two calls are what you compiled opencv for 1.5 hours for :-D
        rgb = cv2.cvtColor(img_arr, cv2.COLOR_YUV2BGR_I420)
        rgb = cv2.cvtColor(rgb, cv2.COLOR_BGR2RGB)

Once you have a nice Pillow RGB image, it's easy to pass it into a Tensorflow model, and there is tons of material on the web about how to do things like that. For fast but not-so-accurate detection, I used the ssdlite_mobilenet_v2_coco pretrained model, which can handle about 0.5 frames per second per core of the H2 Allwinner CPU.

There are some problems I still have to work out. Occasionally the video stream stalls, and I haven't figured out how to recover from this without restarting the app completely. The way frame data is passed around Tensorflow worker processes is probably not ideal and needs to be cleaned up, but it does allow me to get much better throughput using all four cores.

For more details, including a detailed build script, the full source is here: https://github.com/atomic77/opilite-object-detect
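To give an idea of the detection side, here is a minimal sketch of running one of those RGB frames through the TFLite runtime. The model path and the pre-resized input are placeholders rather than code lifted from my app; SSD-style COCO models generally expect a 300x300 uint8 input and emit four output tensors:

    import numpy as np
    from tflite_runtime.interpreter import Interpreter

    # Hypothetical path; the real app uses the ssdlite_mobilenet_v2_coco model
    interpreter = Interpreter(model_path="ssdlite_mobilenet_v2_coco.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]

    def detect(rgb_300x300):
        # Add the batch dimension the model expects: [1, 300, 300, 3], uint8
        img = np.expand_dims(rgb_300x300, axis=0).astype(np.uint8)
        interpreter.set_tensor(inp["index"], img)
        interpreter.invoke()
        # These models emit: bounding boxes, class ids, scores, detection count
        boxes, classes, scores, count = (
            interpreter.get_tensor(d["index"])
            for d in interpreter.get_output_details()
        )
        return boxes[0], classes[0], scores[0]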
  10. I was trying to do the same thing with my Pi-hole running Debian. Disabling the wpa_supplicant service didn't work for me, because it seemed to get dragged back up by polkit. Did you solve the problem? The only way I could figure out to prevent wpa_supplicant from coming up was to disable NetworkManager entirely. Since I set my IP statically with the interfaces file, that's OK for me, but I imagine there is a better way.
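One thing I haven't actually tried yet, but which should stop anything from pulling the unit back up (masking points the unit at /dev/null rather than just removing the enablement symlink):

    sudo systemctl disable --now wpa_supplicant
    sudo systemctl mask wpa_supplicant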
  11. Hello all, I recently went through the exercise of getting the Armbian build system set up to create an image with some of my user customizations, similar to the example with OMV. The documentation and forums have been a huge help! As I run Fedora and Windows as my main OSes, I did struggle a bit at first. I tried, in order:
      • Ignoring the advice of the documentation and running ./compile.sh on Fedora. Didn't get far.
      • Running the build with Docker on Fedora. I ran into a number of different problems and ultimately gave up.
      • Running on Ubuntu 18.04 on WSL2. I did eventually get a build to complete, but the image may have been corrupt, because my device didn't boot at all.
      • Considering Vagrant, but deciding to give Multipass a try instead - and it worked like a charm on the first try in both my environments!
I didn't find anything on the forums or in the documentation about using Multipass to manage build VMs, so I created a small gist documenting how I got it working (the core commands are sketched below): https://gist.github.com/atomic77/7633fcdbf99dca80f31fd6d64bfd0565
For anyone not familiar, Multipass is a Canonical tool optimized for creating Ubuntu VMs, which seems perfect for Armbian, since Ubuntu is the preferred build platform anyway. Because it's so focused, it's very simple to use, and utilities like shared mounts work seamlessly compared to Vagrant. If this could be helpful to others, I'd be happy to incorporate any feedback into a page for the documentation.
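A minimal sketch of the Multipass side, with the VM name and sizes as assumptions you'd adjust to taste (see the gist for the full walkthrough):

    # Create and enter an Ubuntu 18.04 VM for building
    multipass launch 18.04 --name armbian-build --cpus 4 --mem 8G --disk 40G
    multipass shell armbian-build
    # Inside the VM, proceed as the Armbian docs describe:
    git clone --depth=1 https://github.com/armbian/build
    cd build && ./compile.sh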
  12. Thanks for the reply. As soon as I get my hands on one of these SD cards, I'll post my findings here on whether I can get any useful information out of it with smartctl.
  13. Hello all, I was recently burned by a failing SD card (armbianmonitor -v possibly saved me from a much worse fate!), and I'm now trying to proactively avoid a similar situation. I found that WD has a line of cards claiming a health status feature that "helps in preventive maintenance by signaling when the card needs to be replaced". One of them is the Purple QD101, which seems reasonably priced at $15 for 64 GB. I'm not finding much in the way of details on how this is exposed, though. Is there any way to get access to this information on Linux?
  14. I know this thread is over a year old, but I was finally able to get decent camera output out of the GC2035 thanks to the Cedar H264 GStreamer plugin linked here! I'm no GStreamer expert, but what I've been able to figure out is that after the cedar_h264enc stage of the pipeline you need to add an h264parse stage, which you can then follow with something like matroskamux if you want to write a .mkv file, or rtph264pay if you want to send the data over the network via RTP. E.g., to record to a file:

    gst-launch-1.0 -ve v4l2src device=/dev/video0 \
      ! video/x-raw,format=NV12,width=800,height=600,framerate=15/1 \
      ! cedar_h264enc ! h264parse ! queue \
      ! matroskamux ! filesink location=test.mkv
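And the RTP variant would look something like the below (host and port are placeholders, and I've only personally tested the file-writing pipeline):

    gst-launch-1.0 -ve v4l2src device=/dev/video0 \
      ! video/x-raw,format=NV12,width=800,height=600,framerate=15/1 \
      ! cedar_h264enc ! h264parse ! queue \
      ! rtph264pay config-interval=1 pt=96 \
      ! udpsink host=192.168.1.100 port=5000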