
OrangePi - a kind request to camera owners (both USB webcam and CAM module)


abramq

Question

Hello,

I am going to make a project using an OrangePi (most likely the OrangePi Lite, but it could be another model) with a camera. So here is my request: could someone run a test and check

HOW MANY pictures (let's say 640x480) can the camera take PER SECOND (working continuously, in a loop)?

I am interested in both types of cameras: a USB webcam and the OrangePi CAM module. And it's about taking pictures, not recording a movie.

 

By the way, is it possible to take pictures into RAM? I mean without writing them to the SD card. It would improve performance :-)

 

Thanks to everyone in advance!

 


5 answers to this question

Recommended Posts


Theoretically you can take from 25 to 30 pictures per second (fps = frames per second) at a picture size of 640x480, depending on the sensor you use, the OV5640 CMOS sensor or the GC2035 CMOS sensor (the OrangePi camera), in YUV format.

USB cameras are usually slower than this, but have the advantage of delivering the pictures in JPEG format, reducing CPU usage and bandwidth.

 

Choosing the right one depends on your application requirements and price target.

You can do a benchmark using any v4l2 capture program; for the CMOS sensors you can try this: https://github.com/avafinger/cap-v4l2 . It may also work for USB cameras with small changes.
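The measurement loop itself is simple to sketch. In this minimal Python example the `grab_frame` stub is an assumption standing in for the real capture call (e.g. cap-v4l2 or a v4l2 read on the board); the timing logic around it is the point:

```python
import time

def grab_frame():
    # Stub standing in for a real capture call on the board;
    # returns a fake 640x480 YUV420 frame buffer (1.5 bytes/pixel).
    return bytes(640 * 480 * 3 // 2)

def measure_fps(n_frames=100):
    """Time n_frames consecutive grabs and return frames per second."""
    start = time.monotonic()
    for _ in range(n_frames):
        grab_frame()
    elapsed = time.monotonic() - start
    return n_frames / elapsed if elapsed > 0 else float("inf")

if __name__ == "__main__":
    print(f"{measure_fps():.1f} fps (stub source)")
```

With the stub replaced by a real capture call, the number printed is the sustained picture rate the original question asks about.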

 

 


I've been doing a bit of work to get gstreamer-1.0 working with my Nano Pi NEO Air (AW H3).  I can pull in about 11fps at 1920x1080 and 30fps at 1280x720 with CAM500B (OV5640) using the following command:

 

GST_PLUGIN_PATH=/usr/local/lib/gstreamer-1.0  gst-launch-1.0 -vem v4l2src ! video/x-raw,format=NV12,width=1920,height=1080,framerate=30/1 ! cedar_h264enc  ! h264parse !  fpsdisplaysink text-overlay=false video-sink=testsink

 

gst-plugin-cedar modifications are here: https://github.com/gtalusan/gst-plugin-cedar

sun8i kernel modification here: https://github.com/gtalusan/sun8i-linux-kernel

 

I've submitted a pull request to Armbian too here: https://github.com/igorpecovnik/lib/pull/655

 

Right now the V4L2 buffers are backed by a DMA buffer as far as I can tell.  This is mmap'd into userspace memory and then memcpy'd into another DMA buffer for VPU/VFE/ION H264 encoding.  The memcpy could maybe disappear if the physical address of the DMA buffer was passed along via V4L2 API, but I haven't found a clean way to do that.  FFMPEG-Cedrus also has the same problem.  Commenting out the memcpy in gst-plugin-cedar (and hence H264 encoding garbage) drops my CPU load down to about 5-10% so there's definitely room for improvement if the DMA buffers can be shared.


Thanks a lot for your detailed answers.

I should tell you the "secret" of my project.

The task is seemingly simple: streaming video from the camera, but with real-time monitoring of how trustworthy that video is. I mean delivering a preview from the camera to the endpoint with as small a time shift as possible (<1 sec), displaying that time-shift value live, and showing other parameters of the streamed video, like the dropped-frame count, etc.

So I don't want to use streaming applications like gstreamer, v4l2, motion, etc., because:

1. (With all due respect to open source) they are often unsupported or rarely updated, and behave differently on different devices (sometimes it is hard to get the application working at all)

2. It is difficult to get the control over the stream that I need; they tend to rely on just a web browser to view the video at the endpoint

 

So the idea is to take as many pictures as possible and send them immediately to the endpoint, together with some metadata that gives the endpoint a way to check how the "stream" is going.
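That metadata can be as small as a frame index plus a capture timestamp. The packet layout below is my own assumption, not something from the thread, but it shows how the endpoint could derive both the time shift and the dropped-frame count:

```python
import struct
import time

# Hypothetical packet layout: frame index (uint32) + capture time (float64),
# followed by the picture bytes.
HEADER = struct.Struct("!Id")

def pack_frame(index, jpeg_bytes, now=None):
    """Prepend a frame index and capture timestamp to the picture data."""
    stamp = now if now is not None else time.time()
    return HEADER.pack(index, stamp) + jpeg_bytes

def unpack_frame(packet, last_index):
    """Return (index, time_shift_seconds, dropped_since_last, jpeg_bytes)."""
    index, sent_at = HEADER.unpack_from(packet)
    jpeg = packet[HEADER.size:]
    dropped = max(0, index - last_index - 1)   # gaps in the index = lost frames
    return index, time.time() - sent_at, dropped, jpeg
```

The time shift is only meaningful if the sender and endpoint clocks are synchronized (e.g. via NTP); otherwise it measures clock offset plus latency.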

 

I'm a little surprised, because I did not know that the dedicated camera module (I will use the GC2035, since it is sold with the OrangePi Lite) does not compress photos. I must say that 25-30 pictures per second would be great for me (the minimum is 10-15), but if I add the compression time (YUV to JPEG), a little more delay is added. I am also not sure whether cheap USB webcams will deliver JPEG pictures; maybe I would need something like a Logitech C270, which multiplies the cost :-(

 

I know that proper video streaming allows better compression of moving pictures, but because of the two points I described, I am looking for another solution.

The OrangePi Lite + GC2035 (or a webcam) seems to be the cheapest set (the NanoPi has a more expensive CAM module). Other sets are much more expensive (RPi), too weak (Omega2), or difficult to buy here (C.H.I.P.).

 

Please let me ask one more time: is it possible to take pictures directly into RAM rather than flash memory? It would save time, because the pictures will be transmitted immediately (there is no need to store them).
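Taking pictures straight into RAM is exactly what an in-memory buffer gives you. A minimal Python sketch (the frame source here is a stand-in, not anything from the thread):

```python
import io

def capture_to_ram(grab):
    """Grab one frame into an in-memory buffer instead of a file on the SD card.

    `grab` is whatever produces the encoded picture bytes; on the board it
    would wrap the actual camera/v4l2 call (a stand-in lambda is used below).
    """
    buf = io.BytesIO()          # file-like object that lives entirely in RAM
    buf.write(grab())
    return buf.getvalue()       # bytes ready to send over the network

# Any library that writes "to a file" can usually write to BytesIO instead,
# so the SD card is never touched.
frame = capture_to_ram(lambda: b"\xff\xd8...fake jpeg...\xff\xd9")
```

On Linux, writing to a tmpfs path such as /dev/shm achieves the same effect for tools that insist on a filename.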

 


Hello,

To save time, it may be easiest to symlink to /tmp/ so that writes go to RAM instead of flash.
I agree, achieving real time (<1 sec delay) is hard and needs a lot of work.
I already tried it myself with libffmpeg, decoding H.264 at the endpoint, but buffering and the video data topology added a long, unwanted delay.
There are some cameras with good quality for about $10 (an IP camera module, Hi3518E V200, ONVIF protocol) with dual RTSP output: 1 MP for recording and a low-quality stream for preview or for triggering processing such as advanced motion capture.
It's easy to save in MPEG-TS format (adding the minimal delay needed to get a key frame, about 2 sec, which made for a rather bad real-time experience), but the video quality is good.

In my opinion, the better way to get real time with a webcam is to write a kernel driver that reuses the webcam driver and links it to the TUN/TAP driver; that way you can take the data at a low level, and maybe use DMA to send it to a TUN/TAP interface on a specified IP address and port when you load the kernel module...
Easy to say, hard to do :)
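A user-space approximation of the same idea, sending each frame to a fixed IP address and port over UDP, can be sketched in Python (address, port, and payload here are illustrative only, demonstrated over loopback):

```python
import socket

def send_frame(sock, frame_bytes, addr):
    """Fire one frame (or fragment) at the endpoint. UDP keeps latency minimal
    at the cost of possible loss, which the endpoint can detect via metadata."""
    sock.sendto(frame_bytes, addr)

if __name__ == "__main__":
    # Loopback demo: receiver and sender on the same machine.
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))        # OS picks a free port
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_frame(tx, b"frame-0-payload", rx.getsockname())
    data, _ = rx.recvfrom(65536)
    print(data)                      # the payload arrives as one datagram
    rx.close()
    tx.close()
```

Note that a full 640x480 JPEG can exceed one UDP datagram, so a real sender would fragment frames and let the metadata describe the pieces.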


There is a specification for ONVIF camera modules called JPEG over RTP; it's the best way to offload the CPU and achieve real time (H.264 needs less bandwidth but adds more delay) when sending data to the client, but I don't know how to get this stream from a Chinese ONVIF camera...
Any ideas about that?
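For reference, the JPEG-over-RTP payload format is defined in RFC 2435, and RFC 3551 assigns it static payload type 26. As a sketch of what is on the wire, here is the fixed 12-byte RTP header that precedes each JPEG fragment (the per-fragment JPEG payload header that follows it is omitted here):

```python
import struct

RTP_VERSION = 2
PT_JPEG = 26  # static RTP payload type for JPEG (RFC 3551)

def rtp_header(seq, timestamp, ssrc, marker=False):
    """Build the fixed 12-byte RTP header.

    The marker bit is set on the last fragment of a frame, so the receiver
    knows a complete JPEG picture can be decoded.
    """
    byte0 = RTP_VERSION << 6                 # no padding, no extension, no CSRCs
    byte1 = (0x80 if marker else 0) | PT_JPEG
    return struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF,
                       timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)

hdr = rtp_header(seq=1, timestamp=90000, ssrc=0x1234, marker=True)
```

Whether a given Chinese ONVIF camera exposes this stream is a separate question; the header above only shows why the format is cheap for the CPU: the JPEG bytes are sent as-is behind a tiny fixed header.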

This topic is now closed to further replies.