Everything posted by @lex

  1. The suggestion was to change it to: powerdown-gpios = <&pio 4 15 1>; /* CSI-STBY-R: PE15 */ vfe_v4l2 is for the legacy kernel only. Can you set PG11 in U-Boot and see if it works? (I think it is PD14.) Do it at the U-Boot cmd prompt: gpio set PG11 then gpio status PG11
  2. I don't have the board with me right now, but I still think the i2c is not wired correctly. Perhaps someone else can measure and give the answer. Last thought: try GPIO_ACTIVE_LOW if you have GPIO_ACTIVE_HIGH in your DTS.
  3. Ouch, it is working then. The error is expected. In mainline you need to tell the sensor (CSI) the format and size you want. Here is one example for streaming JPEG at 1280x720: media-ctl --device /dev/media1 --set-v4l2 '"ov5640 1-003c":0[fmt:JPEG_1X8/1280x720]' For the mjpeg-streamer the fix is here: https://github.com/avafinger/bananapi-zero-ubuntu-base-minimal/issues/56 PS: I overlooked your output; I still think you wired i2c1 and not i2c2.
  4. Not sure I can give good advice in this case; it is a bit confusing for me (software guy here). It looks like your sensor is detected on i2c-1 when it should be on i2c-2, so I think it is the way you wired this module. PE12/CSI_SCK/TWI2_SCK --> CSI-SCK PE13/CSI_SDA/TWI2_SDA --> CSI-SDA And DOVDD, AVDD and DVDD should be in your DTS the same way as on the Opi One. How you wired these pins to get the right voltages is a mystery to me.
  5. In order to use the imx219 sensor (or any other sensor) on mainline kernel 5.x you need to tell the sensor which format and size it must deliver, using v4l2-ctl, prior to grabbing the frames. Something like: v4l2-ctl --set-subdev-fmt pad=0,width=1280,height=960,code=0x3001 -d /dev/v4l-subdev1 v4l2-ctl --set-subdev-fmt pad=0,width=1280,height=960,code=0x3001 -d /dev/v4l-subdev0 v4l2-ctl --set-subdev-fmt pad=2,width=800,height=600,code=0x2008 -d /dev/v4l-subdev0 Collabora has a complete setup and a patched libcamera where you can pull some images.
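The subdev configuration above can be sketched end to end like this. Note this is only a sketch: the subdev numbers, the capture node /dev/video0 and the RGB3 capture format are assumptions, not from the post; check your own media graph first.

```shell
# Inspect the media graph to find the right entities and subdev nodes:
media-ctl -d /dev/media0 -p

# Set the sensor and pipeline pad formats (same commands as in the post):
v4l2-ctl --set-subdev-fmt pad=0,width=1280,height=960,code=0x3001 -d /dev/v4l-subdev1
v4l2-ctl --set-subdev-fmt pad=0,width=1280,height=960,code=0x3001 -d /dev/v4l-subdev0
v4l2-ctl --set-subdev-fmt pad=2,width=800,height=600,code=0x2008 -d /dev/v4l-subdev0

# Then grab a single frame from the capture node (node and format assumed here):
v4l2-ctl -d /dev/video0 --set-fmt-video=width=800,height=600,pixelformat=RGB3 \
         --stream-mmap --stream-count=1 --stream-to=frame.raw
```

If the pad formats do not match along the pipeline, streaming typically fails with -EPIPE, which is why the subdev formats must be set before capturing.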
  6. You have the wrong pix_fmt. Try: -pix_fmt yuv420p Be aware you don't get HW encoding with this version. Search the forum; someone has already worked on the hw encoding side.
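A full command line using that flag might look like the following. This is a sketch, not the poster's exact command: the /dev/video0 input and the libx264/preset choices are assumptions (and libx264 here is software encoding, consistent with "no HW encode").

```shell
# Capture from a V4L2 device and encode in software with the fixed pixel format:
ffmpeg -f v4l2 -i /dev/video0 -pix_fmt yuv420p -c:v libx264 -preset ultrafast out.mp4
```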
  7. I was an early backer of the Pine64. It took so long to get my hands on one, plus the learning curve needed in order to contribute something useful (around 6 months), and today things have evolved so much that such a delay is an eternity. Not to mention the restriction on shipping the battery from the US. I just think they missed a great opportunity to create a kind of "convergence docker" that would export the GPIOs so you could drive other things and create an ecosystem around the device, just like the Raspberry Pi has. The makers should take this tip and launch a similar device that exports all the pins. RK c
  8. Hi, I haven't followed the progress, but I like the Mobian w/ Phosh design; if you can report your findings that would be great. Thanks. I have a few (noob?) questions about the device and interfaces. * Would you know if it is possible to get access to the UART and use it to interface with some peripherals? I know there is an extension with HDMI, Ethernet and USB; I don't mean USB-to-serial. * Can you attach a barcode reader, or something similar like a proximity reader (if you have that extension)? * Last time I checked (a really long time ago) there was no
  9. That's why I asked. How would you wire the touch? I don't have the OPI4, but I am curious where it should go on the OPI4. I am thinking of trying it out on my NanoPi M4. And it would be interesting to know how to connect all these things; some wires do not show where they go... Make sure you have:
     &vopb { status = "okay"; };
     &vopb_mmu { status = "okay"; };
     &vopl { status = "okay"; };
     &vopl_mmu { status = "okay"; };
  11. You can follow this thread and add the ov5640 overlay, using the ov5640 params from the FE DTS node.
  12. If you don't want to code, you could try this: https://www.claudiokuenzler.com/blog/999/kodi-tv-add-video-stream-ip-surveillance-camera-add-on
  13. Here are the instructions to run Htop remotely using a web browser. I often use Htop to monitor the health of my boards and servers (amd64). It is a good tool for a sysadmin to monitor servers in real time; it uses few resources and is not very intrusive. Recipe 1 - Clone shellinabox:
      root@cubieboard2:~# git clone https://github.com/shellinabox/shellinabox
      Cloning into 'shellinabox'...
      remote: Enumerating objects: 3073, done.
      remote: Total 3073 (delta 0), reused 0 (delta 0), pack-reused 3073
      Receiving objects: 100% (3073/3073), 4.31 MiB |
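The rest of the recipe is cut off above, so here is a hedged sketch of how shellinabox is typically built from that repository and pointed at htop. The package names, port and service string are my assumptions, not the poster's exact steps.

```shell
# Build dependencies (Debian/Ubuntu names assumed):
sudo apt-get install -y git libssl-dev libpam0g-dev zlib1g-dev dh-autoreconf

cd shellinabox
autoreconf -i && ./configure && make

# Serve htop on port 4200 without SSL (-t), as user nobody.
# Service syntax: /path:user:group:cwd:command
sudo ./shellinaboxd -t -p 4200 -s '/:nobody:nogroup:/:htop'
```

Then open http://&lt;board-ip&gt;:4200/ in a browser to get the Htop screen mentioned in the next post.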
  14. And the nice thing is, you can have Htop in a browser...
  15. [ASCII-art "Cubieboard 2" login banner]
      Welcome to Armbian 20.08.1 Focal with Linux 5.8.5-sunxi
      No end-user support: community creations
      System load: 0.27 0.62 0.34   Up time: 5 min
      Memory usage: 8 % of 990MB   IP:
      CPU
  16. Try removing the config file with Htop closed: sudo rm /root/.config/htop/htoprc Start Htop and make your changes.
  17. This is classic memory corruption; Htop has possibly crashed. During a crash Htop emits a backtrace with some info. If you have the backtrace info, please post it here along with your Htop version. You can also try a few things: * Remove every meter (F2, then delete all the meters) and exit. Start again and add one CPU bar. If that is OK, then proceed with the rest. * If you have some skills, build Htop with debug info; the backtrace will then show the function preceding the free() of the memory.
  18. Can you post the format info for your USB camera? v4l2-ctl -d 4 --list-formats or v4l2-ctl -d 5 --list-formats I have seen some USB cameras that have a YUY2 format. CPU usage of ~52% looks good for OpenCV.
  19. OK. In the JPEG_1X8/640x480@30FPS case you should expect a bit more CPU usage than with the USB camera, say ~5% more, not the ~35% increase you see. The reason is that you get 640x480x3 pixels from the sensor instead of only the compressed image. I would build OpenCV with debugging info and try to find the bottleneck, and find out why there is an image conversion in the DVP case. Push the limits to 720p/30fps or even 1080p and see what you get.
  20. No GPU/VPU involved. I think there is a conversion from YUV to RGB in OpenCV, and possibly a decompression from JPEG to RGB; I am not an OpenCV expert, maybe someone can give more details about what's going on inside OpenCV. There is still room to optimize the JPEG_1X8 path. That's pretty good. There must be no conversion of the image format to achieve this. It would be interesting to find out more about it. Can you share more about your application, your setup and your USB camera?
  21. What compiler options are in use? You could try to set a breakpoint before you call **WrTSpec** and check the stack. You can also use valgrind to check for stack corruption.
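The two checks above can be sketched like this. WrTSpec is the function named in the post; the binary name ./app and the build flags are my assumptions.

```shell
# Build with debug info and no optimization so the stack is readable:
#   gcc -g -O0 ... -o app

# Break just before the suspect call and inspect the stack:
gdb ./app
# (gdb) break WrTSpec
# (gdb) run
# (gdb) backtrace      <- check callers and local variables here

# Or let valgrind flag invalid reads/writes around the corruption:
valgrind --track-origins=yes ./app
```

valgrind's --track-origins=yes shows where an uninitialized or clobbered value came from, which often points straight at the overwrite.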
  22. Yes, it works fine, but be careful with the AVDD / DOVDD / DVDD supplies on the mainline kernel; I have burned out 6 sensors with the wrong settings. Search the forum for OV5640; I advised one with a ~130º lens, if I remember correctly. I would buy 2 or 3 from different sources (or different models), since you are not sure what you will get.
  23. This error is: /* No such device or address */ You should double-check: 1. Is your sensor an OV5640? 2. Check the connector; some are reversed 180º, which is the case for BPI and Orange Pi. There is a thread about it, 3 years old I think.
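One quick way to rule out the wiring is to probe the i2c bus directly; the OV5640 normally answers at address 0x3c (the address also used in the media-ctl example above). The bus number 2 here is an assumption taken from the earlier i2c-2 discussion, and the sensor's power/standby GPIOs may need to be enabled before it responds.

```shell
# Probe bus 2; a responding OV5640 shows up as "3c" in the grid.
# Adjust the bus number to match your DTS (i2cdetect -l lists the buses).
sudo i2cdetect -y 2
```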
  24. If you have PWM exposed you can control the speed and lower the noise.