
Posted

Hi, I'm trying to get H264 encoding working, but vainfo keeps looking for rockchip_drv_video.so, which is nowhere to be found on my system.

 

Is there a way to get H264 encoding working with Armbian, or do I have to use the Radxa OSes (or the Ubuntu version provided by Joshua Riek)?

 

Thanks.

Posted

Try building an image with vendor kernel and mesa-vpu extension enabled. Not sure if a pre-built one is available.
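
For reference, extensions like that are normally switched on when building the image with the Armbian build framework rather than afterwards. A rough sketch, assuming the standard compile.sh interface; the BOARD value is only an example and the ENABLE_EXTENSIONS parameter and extension name should be verified against the framework documentation:

# Hypothetical build invocation; adjust BOARD to your device and check the
# Armbian build framework docs for the exact extension and branch names.
git clone --depth=1 https://github.com/armbian/build
cd build
./compile.sh build BOARD=radxa-zero3 BRANCH=vendor ENABLE_EXTENSIONS="mesa-vpu"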

Posted
5 hours ago, dwarfman78 said:

Is there a way to get H264 encoding working with Armbian

What I did for both the NanoPi-R6C and the Rock 3A is install linux-image-vendor-rk35xx and jellyfin-ffmpeg7.

 

The Zero 3W is an RK3566 and should manage up to 1080p60 H264 real-time encoding if you use h264_rkmpp as the output codec.

 

Note that this is only CLI, file- and/or stream-based transcoding. I have no clue if it works in a web browser with a camera and/or videoconferencing.
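
For a file-based test of that claim, here is a minimal sketch using the jellyfin-ffmpeg binary path mentioned later in this thread; input.mp4 and the bitrate are placeholders:

# Minimal file-to-file hardware H264 encode; h264_rkmpp runs on the VPU.
/usr/share/jellyfin-ffmpeg/ffmpeg -i input.mp4 -c:v h264_rkmpp -b:v 8000k -c:a copy output.mp4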
 

Posted
On 2/21/2025 at 1:35 PM, Werner said:

Try building an image with vendor kernel and mesa-vpu extension enabled. Not sure if a pre-built one is available.

 

Thanks, I am already using the latest vendor kernel. Is there a way to enable the mesa-vpu extension afterwards, or do I have to recompile the kernel entirely?

Posted (edited)
vainfo --display drm --device /dev/dri/renderD128 | grep -E "((VAProfileH264High|VAProfileHEVCMain|VAProfileHEVCMain10).*VAEntrypointEncSlice)|Driver version"
libva info: VA-API version 1.20.0
libva info: Trying to open /usr/lib/aarch64-linux-gnu/dri/rockchip_drv_video.so
libva info: va_openDriver() returns -1
vaInitialize failed with error code -1 (unknown libva error),exit
vainfo --display drm --device /dev/dri/renderD129 | grep -E "((VAProfileH264High|VAProfileHEVCMain|VAProfileHEVCMain10).*VAEntrypointEncSlice)|Driver version"
libva info: VA-API version 1.20.0
libva info: Trying to open /usr/lib/aarch64-linux-gnu/dri/panfrost_drv_video.so
libva info: va_openDriver() returns -1
vaInitialize failed with error code -1 (unknown libva error),exit

 

Those files are not found anywhere; these are the only ones present:

sudo find / -name "*_drv_video.so"
/usr/lib/aarch64-linux-gnu/dri/radeonsi_drv_video.so
/usr/lib/aarch64-linux-gnu/dri/d3d12_drv_video.so
/usr/lib/aarch64-linux-gnu/dri/nouveau_drv_video.so
/usr/lib/aarch64-linux-gnu/dri/virtio_gpu_drv_video.so
/usr/lib/aarch64-linux-gnu/dri/r600_drv_video.so
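
For what it's worth, libva derives the driver name from the kernel DRM driver behind the render node, and it can be pointed at a specific driver with its standard environment variables. This only helps once a matching *_drv_video.so actually exists, of course:

# Tell libva explicitly which driver to load and where to find it.
export LIBVA_DRIVERS_PATH=/usr/lib/aarch64-linux-gnu/dri
export LIBVA_DRIVER_NAME=rockchip
vainfo --display drm --device /dev/dri/renderD128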

 

Edited by dwarfman78
Posted

I've found this repository, which provides the VAAPI driver:

https://github.com/qqdasb/libva-rkmpp

 

vainfo --display drm --device /dev/dri/renderD128 | grep -E "((VAProfileH264High|VAProfileHEVCMain|VAProfileHEVCMain10).*VAEntrypointEncSlice)|Driver version"

libva info: VA-API version 1.20.0
libva info: Trying to open /usr/lib/aarch64-linux-gnu/dri/rockchip_drv_video.so
libva info: Found init function __vaDriverInit_1_20
libva info: va_openDriver() returns 0
vainfo: Driver version: Rockchip Driver 1.0

 

However, I now have a new error:

 

libva info: VA-API version 1.20.0
libva info: Trying to open /usr/lib/aarch64-linux-gnu/dri/rockchip_drv_video.so
libva info: Found init function __vaDriverInit_1_20
libva info: va_openDriver() returns 0
[2025-02-26 23:58:44.486]: Info: vaapi vendor: Rockchip Driver 1.0
[2025-02-26 23:58:44.487]: Error: [AVHWFramesContext @ 0xaaaae273d480] Failed to create surface: 10 (attribute not supported).
[2025-02-26 23:58:44.487]: Error: [AVHWFramesContext @ 0xaaaae273d480] Unable to allocate a surface from internal buffer pool.
[2025-02-26 23:58:44.492]: Info: Encoder [vaapi] failed

 

Posted (edited)
14 hours ago, dwarfman78 said:
[2025-02-26 23:58:44.487]: Error: [AVHWFramesContext @ 0xaaaae273d480] Failed to create surface: 10 (attribute not supported).

Can you maybe tell us what your (end-user) use case is?

It might be simply that some feature in HW codec is not supported.

What do you feed the encoder and how?

And what is your baseline working method with SW encoding? Maybe that has to be done non-real-time, and is that the reason you want HW? How can others reproduce this?

Also, that GitHub repository is 9 years old. I see some V4L2 in it, but the whole issue is that Rockchip is not V4L2; they have their own rkmpp stack. Allwinner has similar but worse and/or unusable stuff. Amlogic I don't know. Qualcomm and RPi are V4L2, and so is the new Radxa Orion O6 SoC AFAIK.

Edited by eselarm
Posted
7 hours ago, eselarm said:

Can you maybe tell us what your (end-user) use case is?

It might be simply that some feature in HW codec is not supported.

What do you feed the encoder and how?

I am trying to stream my display with Sunshine (https://app.lizardbyte.dev/Sunshine/), which supports only NVENC, AMD VCE, or VAAPI.

With software encoding there's only 8 frames per second output and 50% CPU usage.

Posted

On https://docs.lizardbyte.dev/projects/sunshine/latest/  I see:

Quote

with support for AMD, Intel, and Nvidia GPUs for hardware encoding

so there is no Rockchip rkmpp support, not even ARM in general. As I indicated, Jellyfin provides ffmpeg binaries that can use the HW encoders in Rockchip SoCs:

https://jellyfin.org/docs/general/administration/hardware-acceleration/rockchip

https://repo.jellyfin.org/?path=/ffmpeg/debian/latest-7.x/arm64
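
One way to install it, assuming you grab the jellyfin-ffmpeg7 .deb for your Debian release and arm64 from the page linked above (the filename pattern below is a placeholder):

# Install the downloaded package with apt so its dependencies are resolved as well.
sudo apt install ./jellyfin-ffmpeg7_*_arm64.deb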

 

Those provide you with a core method to grab the screen as input and produce encoded H264 with a certain container protocol as output. I think the output is mostly FLV/RTMP for gaming. I use it for RPi cameras with NGINX.

For screen/display as input, see the ffmpeg docs or look at OBS as an example. I have no clue what protocol etc. Sunshine uses. Maybe you can configure software encoding but hook in ffmpeg somehow via a script; see how that can be done with MediaMTX, for example (a rough sketch follows below).
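
As a rough illustration of the MediaMTX route (not taken from this thread; the port is MediaMTX's default RTSP port and the path name is arbitrary), one could push a hardware-encoded stream to it like this:

# Publish an h264_rkmpp-encoded stream to a local MediaMTX instance over RTSP.
/usr/share/jellyfin-ffmpeg/ffmpeg -re -i input.mp4 -c:v h264_rkmpp -b:v 4000k -an -f rtsp rtsp://localhost:8554/screen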

Posted

That's interesting. I have already successfully compiled ffmpeg with mpp support, I know I can handle inputs with swayvnc, and I will look into MediaMTX. However, I am a bit worried about latency, which is critical for my use case. I'll keep you posted. Thanks.

Posted (edited)
On 2/27/2025 at 10:40 PM, eselarm said:

Those provide you with a core method to grab the screen as input and produce encoded H264 with a certain container protocol as output. I think the output is mostly FLV/RTMP for gaming. I use it for RPi cameras with NGINX.

Hi again,

 

Somehow I am making progress: I was able to start a headless sway compositor and then record the screen with wf-recorder, which relies on ffmpeg. However, the generated video file has a very low framerate, something like 2 fps. Can you help me?

EDIT: I have plugged the HDMI output into a real display, and the overall fps is very low when wf-recorder is running (not only in test.mp4 but also on the real screen). When not recording, the framerate is at about 60 fps.

 

Here is my sway.cfg:

Quote

exec WAYLAND_DISPLAY=wayland-1
exec swaymsg create_output HEADLESS-1
exec wf-recorder -y -f test.mp4 -c h264_rkmpp

 

Edited by dwarfman78
Posted (edited)

I can post a piece of the test shell script that I used to test live-transcoding DVB-T2, run on RK3568 and RK3588S with vendor kernel 6.1.x. The input in my case is TVheadend, which is a sort of generic HTTP streaming:

Quote

    transcode_rkmpp)
        MEDIAURL=http://vserv.home/npo1
        /usr/share/jellyfin-ffmpeg/ffmpeg -y -loglevel info -nostdin\
        -t $NUMSECONDS\
        -c:v hevc_rkmpp\
        -i $MEDIAURL\
        -map v\
        -c:v h264_rkmpp\
        -b:v 8000k\
        -c:a aac\
        -b:a 128k\
        -t $NUMSECONDS\
        -f mpegts\
        ./npo1.h264.ts

Have a good look at the ffmpeg command-line options (on the ffmpeg docs website), I would say; it is overwhelming, but take your time to tune and understand it. ffmpeg chooses mpegts as the container format if the output file extension is .ts, but I added it explicitly for you here. If you want an RTMP server as output (e.g. NGINX, rtmp://vserv ...), use -f flv.

Note that all containers have options, advantages, restrictions.

For this test example, 2x HW coding (decode and encode) is used; that was my goal, so up to 60 fps 1080p works fine on RK3568 and way more/higher on RK3588S. The CPU has almost nothing to do; only the audio is done on the CPU. So you first need to make sure the HW codec, not the CPU, is used by ffmpeg. Then it still might be that grabbing the screen is the bottleneck. I have no experience with sway/Wayland doing that on a low-end ARM SoC; it worked OK on a fast Intel PC with X11 when I used that some years ago.
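
A quick way to confirm that ffmpeg actually has the rkmpp encoders built in, before suspecting the screen grab (assuming the jellyfin-ffmpeg path used above):

# h264_rkmpp and hevc_rkmpp should show up in the encoder list.
/usr/share/jellyfin-ffmpeg/ffmpeg -hide_banner -encoders | grep rkmpp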

Edited by eselarm
Posted

Some news about this topic: I have dropped Wayland and gone back to X11 with Xvfb + ffmpeg, using x11grab and the h264_rkmpp codec into Janus WebRTC through an RTP output.

 

Let's say it works: it really slows down the computer, but I manage to stream at 30 fps. I am continuing my investigations to improve performance.
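
For reference, a sketch of that kind of pipeline; the display number, resolution and port are placeholders, and the exact options used above are not shown in the thread (Janus also needs a matching SDP/port configuration on its side):

# Grab the Xvfb display :99 at 30 fps, encode on the VPU and send it out as RTP.
ffmpeg -f x11grab -framerate 30 -video_size 1920x1080 -i :99 -c:v h264_rkmpp -b:v 6000k -f rtp rtp://127.0.0.1:8004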

  • Solution
Posted (edited)

OK, so here is the definitive solution:

You need panfrost, so use the correct Armbian version with the vendor kernel. Then you are going to need to compile ffmpeg for Rockchip; just follow the instructions there:

 

https://github.com/nyanmisaka/ffmpeg-rockchip/wiki/Compilation
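
Very roughly, the build described there amounts to installing the Rockchip MPP and RGA development libraries and then configuring this ffmpeg fork with its rkmpp/rkrga options. The configure flags below are an assumption from memory of that wiki, so treat the wiki as the authoritative reference:

# Sketch only; see the wiki above for the exact prerequisites and configure line.
git clone https://github.com/nyanmisaka/ffmpeg-rockchip.git
cd ffmpeg-rockchip
./configure --prefix=/usr --enable-gpl --enable-version3 --enable-libdrm --enable-rkmpp --enable-rkrga
make -j$(nproc)
sudo make install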

 

Once that is done, you might have some performance issues when capturing the screen with either X11 or Wayland, BUT ffmpeg allows you to capture directly from the device through kmsgrab:

 

sudo setcap cap_sys_admin+ep /usr/bin/ffmpeg

sudo ffmpeg -device /dev/dri/card0 -f kmsgrab -i - -r 60 -vcodec h264_rkmpp -f rtp rtp://localhost:8004

 

There I am streaming to an RTP server, but you can also write to a file. Do not forget to use -vcodec h264_rkmpp, which enables hardware encoding through the VPU.
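
For the file variant mentioned here, a sketch that mirrors the same command but dumps to a local MPEG-TS file (the output filename is arbitrary):

# Same kmsgrab capture, written to a file instead of an RTP stream.
sudo ffmpeg -device /dev/dri/card0 -f kmsgrab -i - -r 60 -vcodec h264_rkmpp -f mpegts capture.ts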

 

Edited by dwarfman78
Posted

OK, this is great; it sounds logical that KMS somehow needs to be used, but I had not realized it. A quick test on my NanoPi-R6C dumping to a .ts file (format mpegts) with ffmpeg from the jellyfin-ffmpeg7 Debian package works fine. That is: I use multi-user.target, so CLI only, therefore just a clear screen with the Armbian bash login prompt and a blinking cursor. CPU load is almost 0, CPU clock 408 MHz.

The Armbian installation is Bookworm with the beta repo enabled, and I know it has been running KDE Plasma as well for some months (with both the latest vendor and latest mainline kernels). I have not looked at panfrost recently; I probably did half a year ago, but will need to check my notes. So it is mainly an out-of-the-box action, except for the installation of the external jellyfin-ffmpeg7, which has rkmpp included.

 

There might indeed be performance issues, but that is not really a surprise to me; an RK3566 is a cost-cut, lowest-cost RK35xx with only (lower-clocked) 4x Cortex-A55. You cannot expect too much of it compared to the 4x Cortex-A55 + 4x Cortex-A76 + faster DRAM, etc. found in the RK3588. It all depends on what else is running, e.g. a LibreOffice slideshow or a heavy game.
