
ScottP

Reputation Activity

  1. Like
    ScottP got a reaction from Willy Moto in RockChip RK3588_s geekbench benchmark score   
    Rock5 Model B is up for pre-order now - pay $5 now and get $50 off when it ships in Q2: https://www.cnx-software.com/2022/01/09/rock5-model-b-rk3588-single-board-computer/?amp=1 It's going to be a beast - about 4x the multi-core performance of the RK3399, and up to 10x the GPU performance in some tests.
  2. Like
    ScottP got a reaction from VyacheslavS and gounthar in Mainline VPU   
    Here is a lengthy reply I posted on the Frigate NVR GitHub for someone asking about hardware decoding for Frigate NVR. I would be interested to hear if I am doing anything wrong here or have missed a step.
     
    TL;DR: It does not work reliably for me at the moment, but this is the closest to working I have seen so far. Work is ongoing in the Linux kernel and FFmpeg, so it may work reliably sometime in the future. When the kernel drivers are moved out of staging and their interface is stable, I expect to see a pull request on the main FFmpeg git. This is a long reply with information for testing, because I am giving up at this point and moving to a different platform. I would be interested if you find a solution, though, or if I have missed something - hence the detailed reply.
    For testing you can try this fork of FFmpeg: https://github.com/jernejsk/FFmpeg It has v4l2-request and libdrm stateless VPU decoding built in, using the hantro and rockchip_vdec/rkvdec drivers.
    Use kernel 5.14.9; Armbian is a convenient way to change kernels - sudo armbian-config -> System -> Other kernels. FFmpeg from the above GitHub carries private headers for the kernel interfaces, and they are updated about a month after each kernel release. You must install the matching userspace kernel headers; I just get the kernel source from https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.14.9.tar.xz and then do `make -j6 headers_install INSTALL_HDR_PATH=/usr`
    Do not use armbian-config to install kernel headers - it installs the wrong version.
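    For reference, the header-install step I use looks roughly like this (the version and install path are just my setup; adjust to the kernel you are actually running):
    ```
    # Fetch and unpack the kernel source matching the running kernel
    wget https://cdn.kernel.org/pub/linux/kernel/v5.x/linux-5.14.9.tar.xz
    tar -xf linux-5.14.9.tar.xz
    cd linux-5.14.9

    # Install the userspace (UAPI) headers system-wide so FFmpeg's
    # v4l2-request code builds against matching kernel interfaces
    # (writes under /usr, so run as root or with sudo)
    sudo make -j6 headers_install INSTALL_HDR_PATH=/usr
    ```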
    Then install FFmpeg dependencies:
    `sudo apt install libudev-dev libfreetype-dev libmp3lame-dev libvorbis-dev libwebp-dev libx264-dev libx265-dev libssl-dev libdrm2 libdrm-dev pkg-config libfdk-aac-dev libopenjp2-7-dev`
    Run configure. This is a minimal set of options; Frigate's build includes many more, but I removed a lot of them to build faster and save memory. (I actually think there are a lot of redundant FFmpeg components in Frigate's default build files, some X11 frame-grabber stuff and codecs nobody uses anymore, but that's for a separate discussion.)
    ```
    ./configure \
      --enable-libdrm \
      --enable-v4l2-request \
      --enable-libudev \
      --disable-debug \
      --disable-doc \
      --disable-ffplay \
      --enable-shared \
      --enable-libfreetype \
      --enable-gpl \
      --enable-libmp3lame \
      --enable-libvorbis \
      --enable-libwebp \
      --enable-libx265 \
      --enable-libx264 \
      --enable-nonfree \
      --enable-openssl \
      --enable-libfdk_aac \
      --enable-postproc \
      --extra-libs=-ldl \
      --prefix="${PREFIX}" \
      --enable-libopenjpeg \
      --extra-libs=-lpthread \
      --enable-neon
    ```
    Then `make -j6`
    I don't know if this next bit is correct, but it works for me. I don't want to do `make install`, just run the FFmpeg tests from the build directory; to do that you must run `sudo ldconfig $PWD $PWD/lib*` first, otherwise the linker will not find the libraries.
    If you want to try a different kernel version, run `make distclean` in FFmpeg and run ./configure again. If FFmpeg fails to build it will be because the private headers do not match the kernel headers - errors like V4L... undefined, etc.
    Then you can do some tests and see if you get valid output. For example, this decodes 15 s from one of my cams:
    `./ffmpeg -benchmark -loglevel debug -hwaccel drm  -i rtsp://192.168.50.144:8554/unicast  -t 15 -pix_fmt yuv420p -f rawvideo out.yuv`
    Checks to make during and after decoding: 
    Observe CPU usage. On my system (RK3399 with a 1.5 GHz little-core and 2 GHz big-core overclock) I get between 17 and 25% CPU on one core; it varies depending on whether it runs on an A53 little core or an A72 big core. It should be better than that - I think it is down to the way the data is copied around in memory; GStreamer and mpv attempt zero-copy decoding, so they are more efficient. With software decoding, CPU use is about 70% of one core. The RK3328 does not have the two A72 cores and four A53 cores that the RK3399 has, just four A53 cores, so the RK3328 is about half as powerful as the RK3399, since the A72 cores are roughly twice as powerful as the A53 cores.
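    To put a number on that while a decode is running, one option is something like this (pidstat is from the sysstat package; any process monitor will do):
    ```
    # Per-second CPU usage of the most recently started ffmpeg process
    pidstat -u -p "$(pgrep -n ffmpeg)" 1
    ```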
    You should see in the FFmpeg debug output where it tries each of the /dev/video interfaces to find the correct codec for decoding. Be warned that FFmpeg will sometimes just fall back to software decoding; if that happens you will see much higher CPU usage, and FFmpeg will often spawn a number of threads to use all the cores in your system. Your user should be a member of the "video" group in /etc/group to access the devices without sudo. Log snippet of that section below:
    ```
    [h264 @ 0xaaab06cd9070] Format drm_prime chosen by get_format().
    [h264 @ 0xaaab06cd9070] Format drm_prime requires hwaccel initialisation.
    [h264 @ 0xaaab06cd9070] ff_v4l2_request_init: avctx=0xaaab06cd9070 hw_device_ctx=0xaaab06c549a0 hw_frames_ctx=(nil)
    [h264 @ 0xaaab06cd9070] v4l2_request_probe_media_device: avctx=0xaaab06cd9070 ctx=0xffff8804df20 path=/dev/media1 driver=hantro-vpu
    [h264 @ 0xaaab06cd9070] v4l2_request_probe_video_device: avctx=0xaaab06cd9070 ctx=0xffff8804df20 path=/dev/video1 capabilities=69222400
    [h264 @ 0xaaab06cd9070] v4l2_request_try_format: pixelformat 875967059 not supported for type 10
    [h264 @ 0xaaab06cd9070] v4l2_request_probe_video_device: try output format failed
    [h264 @ 0xaaab06cd9070] v4l2_request_probe_video_device: avctx=0xaaab06cd9070 ctx=0xffff8804df20 path=/dev/video2 capabilities=69222400
    [h264 @ 0xaaab06cd9070] v4l2_request_try_format: pixelformat 875967059 not supported for type 10
    [h264 @ 0xaaab06cd9070] v4l2_request_probe_video_device: try output format failed
    [h264 @ 0xaaab06cd9070] v4l2_request_probe_media_device: avctx=0xaaab06cd9070 ctx=0xffff8804df20 path=/dev/media0 driver=rkvdec
    [h264 @ 0xaaab06cd9070] v4l2_request_probe_video_device: avctx=0xaaab06cd9070 ctx=0xffff8804df20 path=/dev/video0 capabilities=69222400
    [h264 @ 0xaaab06cd9070] v4l2_request_init_context: pixelformat=842094158 width=1600 height=912 bytesperline=1600 sizeimage=2918400 num_planes=1
    [h264 @ 0xaaab06cd9070] ff_v4l2_request_frame_params: avctx=0xaaab06cd9070 ctx=0xffff8804df20 hw_frames_ctx=0xffff8804faa0 hwfc=0xffff8804e530 pool=0xffff8805e910 width=1600 height=912 initial_pool_size=3
    ```
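    If the device nodes are not accessible, adding your user to the "video" group is the usual fix (log out and back in afterwards for it to take effect):
    ```
    # Check current group membership
    groups
    # Add the current user to the "video" group
    sudo usermod -aG video "$USER"
    # Verify ownership/permissions of the decoder device nodes
    ls -l /dev/video* /dev/media*
    ```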
    Check that the output file contains valid video data; try playing it using VLC:
    `vlc  --rawvid-fps 10 --rawvid-width 1600 --rawvid-height 900 --rawvid-chroma I420 out.yuv`
    Adjust the command to the height/width/fps your cameras record in.
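    A quick size check on the raw output also helps before firing up VLC. yuv420p is 1.5 bytes per pixel, so assuming the 1600x900 @ 10 fps settings from the VLC command above, a 15-second dump should be roughly:
    ```
    # 1600 * 900 * 1.5 bytes/frame = 2,160,000 bytes per frame
    # 2,160,000 * 10 fps * 15 s    = 324,000,000 bytes (~324 MB)
    ls -l out.yuv
    ```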
    If all this is working, then try doing longer decodes in parallel, e.g. if you have 3 cams, run the ffmpeg command for each of them in a separate window and increase the time. What happens for me is that at some point ffmpeg starts reporting "resource not available/busy" or similar; rebooting makes it work for a while again.
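    Something like the following sketch is what I mean by the parallel test (the camera URLs are hypothetical placeholders; running each command in its own terminal window works just as well):
    ```
    # Hypothetical camera stream URLs - substitute your own
    CAMS="rtsp://192.168.50.144:8554/unicast
    rtsp://192.168.50.145:8554/unicast
    rtsp://192.168.50.146:8554/unicast"

    i=0
    for url in $CAMS; do
      i=$((i+1))
      # Long (10-minute) decode per camera, raw output discarded
      ./ffmpeg -y -loglevel warning -hwaccel drm -i "$url" -t 600 \
        -pix_fmt yuv420p -f rawvideo /dev/null > "cam$i.log" 2>&1 &
    done
    wait
    ```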
    You can check which codecs are supported by each of the interfaces /dev/video[012] with `v4l2-ctl --all -d0`; change d0 to d1, d2, etc. to view the other decoders/encoders.
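    Or loop over all three nodes at once (the 0-2 node numbering is an assumption; it depends on your board and kernel):
    ```
    for d in 0 1 2; do
      echo "=== /dev/video$d ==="
      # List the compressed formats accepted on the decoder's OUTPUT queue
      v4l2-ctl -d "/dev/video$d" --list-formats-out
    done
    ```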
    You can monitor the state of kernel development at https://patchwork.kernel.org/project/linux-rockchip/list/ - most of the work on this is being done by Andrzej Pietrasiewicz. My suggestion is to monitor both the FFmpeg GitHub and the kernel commits/patches, find out when they rebase FFmpeg, pull that version, install the matching kernel plus headers, and retest.
    I have all the Frigate Docker files already created. I basically created a new set of Dockerfiles with an arch of aarch64rockchip and added those to the Makefile. I'll upload them to my GitHub at some point; I see little point in a pull request since Rockchip is a niche platform with not many users in Home Assistant or Frigate, and it does not currently work reliably for me anyway.
    I have been trying to get this working for some time now. At kernel 5.4.* there were a bunch of kernel patches you had to apply; nothing worked for me then, and FFmpeg often complained about the pixel format. There were some people on the Armbian forums who claimed to have it working, but I had my doubts; maybe it was wishful thinking and FFmpeg was really using software decoding. Most of the effort around this is for video playback, so people can play 1080p and 2K/4K videos on the desktop and in Kodi. There is little information about straight decoding to a pipe the way Frigate does it. So in your research, ignore anything to do with patched libva etc.
    For now I am using an old ~2013 i5-4670 four-core/four-thread Haswell with an Nvidia GT640 GPU for Frigate and Home Assistant. For three cams at 1600x900, 10 fps, Frigate uses 6% CPU as reported by the Home Assistant supervisor, and it is very stable. With that in mind, and wanting a more power-efficient system, I caved and ordered an Nvidia Jetson 4GB developer kit yesterday. I am confident I can build Frigate Docker containers for that system, and it has a hardware decoder similar to their GPUs; I can also try out CUDA filters and scaling to reduce the CPU load of the Frigate detector. A start would be to copy the amd64nvidia Dockerfiles, create an aarch64nvidia arch, and modify from there; it should be mostly the same.
     
     
  3. Like
    ScottP got a reaction from Willy Moto and gounthar in Mainline VPU   
    Working for me on kernel 5.14.5 with FFmpeg from https://github.com/jernejsk/FFmpeg/tree/v4l2-request-hwaccel-4.3.2 - an h264 1600x900 15 fps stream from my cams uses 17-22% of one CPU, about 5% of which is pixel-format conversion (CPU use drops by that amount without the conversion). No further kernel patches, and all FFmpeg dependencies installed from the default repos. This is awesome, I've been wanting this for a while; my use case is Frigate NVR, which is currently running on an old Intel system with an Nvidia GPU doing the decoding. I can now revert my NanoPC T4 to the task and save some electricity.
    My ffmpeg command that emulates approximately what Frigate does:
     
    `ffmpeg -loglevel warning -hwaccel drm -i rtsp://192.168.50.144:8554/unicast -pix_fmt yuv420p -f rawvideo pipe:`
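    To sanity-check the throughput of that pipe, I just meter the raw bytes coming out (pv here is an assumption; any byte counter will do):
    ```
    ffmpeg -loglevel warning -hwaccel drm -i rtsp://192.168.50.144:8554/unicast \
      -t 15 -pix_fmt yuv420p -f rawvideo pipe: | pv -ab > /dev/null
    ```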
     
    Many thanks @jernej, @Kwiboo, and everyone else who made this possible.
     
    Now I need to find out why I get block I/O errors and corruption on eMMC with all the 5.13 and 5.14 kernels I have tried; doing testing from SD card for now.
  4. Like
    ScottP got a reaction from TRS-80 in Orange Pi 4 PCIe Link Speed?   
    I have a NanoPC T4 (RK3399) with the FriendlyARM Ubuntu distro, NOT Armbian as yet - I plan to migrate, hence why I am reading these forums.
    In case it provides a data point:
    ```
    scott@hass:~$ sudo lspci -vv | grep -E 'PCI bridge|LnkCap'
    00:00.0 PCI bridge: Fuzhou Rockchip Electronics Co., Ltd Device 0100 (prog-if 00 [Normal decode])
            LnkCap: Port #0, Speed 5GT/s, Width x4, ASPM L1, Exit Latency L0s <256ns, L1 <8us
            LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L0s unlimited, L1 <64us
    scott@hass:~$ sudo hdparm -t /dev/nvme0n1p2
    /dev/nvme0n1p2:
     Timing buffered disk reads: 1784 MB in  3.00 seconds = 594.26 MB/sec
    ```
     