Posts posted by Myy

  1. Interesting! I'll try to include these patches and use them with the provided tools this week.

     

    Did you port the MPEG-2 code using the RK3399 as a template, using the old Chromium code provided by Tomasz Figa, or maybe another way?

  2. Without checking, IIRC, the register layouts are different. This could be resolved through a few macros... But that raises another question: how to test the whole thing? I'll have to look at the Bootlin repositories to understand how to use it.
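    By "a few macros" I mean something like this hypothetical sketch, where each SoC variant carries its own register offset table (names and offsets here are invented for illustration, not taken from the TRMs):

    #include <linux/types.h>

    /* Hypothetical per-variant offset tables; real values would
     * come from each SoC's TRM. */
    struct vpu_variant_regs {
            u32 dec_ctrl;   /* decoder control register offset */
            u32 dec_status; /* decoder status register offset */
    };

    static const struct vpu_variant_regs rk3288_regs = {
            .dec_ctrl   = 0x048, /* made-up offsets */
            .dec_status = 0x04c,
    };

    static const struct vpu_variant_regs rk3399_regs = {
            .dec_ctrl   = 0x0a4,
            .dec_status = 0x0a8,
    };

    /* The code then addresses registers by role, not raw offset. */
    #define VPU_REG(vpu, name) ((vpu)->base + (vpu)->variant->name)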

  3. From what I understand, it seems to be related to HTML5 EME (Encrypted Media Extensions), and it's mainly used to provide DRM support.

    Basically, the media (movie) is encrypted, the decryption keys are provided by a DRM server (requiring authentication, and so on...), and the decryption is done by the hardware itself.
    The only implementation of this I've seen is in the "chromium" engine.

    It seems to be mainly used to deter copying the movie by intercepting the stream.

    Anyway, I took a look today at the two VPU implementations sent to the linux-media and linux-rockchip mailing lists and...

    Notably, there are patches providing v4l2_m2m_buf_copy_data. It has been suggested multiple times, with patches like this one: https://patchwork.kernel.org/patch/10680123/
    Maybe it has been mainlined and renamed since then. I'll have to take a look around, and see if I can just add the functions from the provided patches, or if adding these helpers is slightly more complex... or see how I can modify Ayaka's patches to avoid these extensions.
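    If it did land in mainline as v4l2_m2m_buf_copy_metadata (that name is my assumption, to be verified), using it from a decoder's device_run would look roughly like this:

    #include <media/v4l2-mem2mem.h>

    /* Rough sketch: copy timestamps and flags from the bitstream
     * (OUTPUT) buffer to the decoded (CAPTURE) buffer, so user space
     * can match decoded frames to their inputs. "my_ctx" is a
     * hypothetical driver context. */
    static void my_device_run(void *priv)
    {
            struct my_ctx *ctx = priv;
            struct vb2_v4l2_buffer *src, *dst;

            src = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
            dst = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);

            v4l2_m2m_buf_copy_metadata(src, dst, true);

            /* ... program the VPU registers and start decoding ... */
    }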

    I could also try to add MPEG-2 support for the RK3288 by mimicking how it's done for the RK3399 in Ezequiel's patches but... this might take way more time than needed.

  4. Well, while this is called a "Video Processing Unit", the thing is: there are a LOT of video file formats out there. Which means a lot of different parameters and decoding/decompression methods, depending on the format used. (I mean, there are different formats and they are "different" for a reason...)

    All the VPUs I know of are specialized in decoding a few formats at most: H264, H265, VP8, VP9, ...

     

    For each format, the VPU must be configured to access external data like: the current frame, the configuration of the current stream (width, height, bytes per pixel, color format, ...), and the various decoding tables, if any (e.g. CABAC tables for H26x).

    The amount of external data and configuration varies from format to format, knowing that some formats can also have "sub-formats" (H264 is a good example of this madness) which require more or fewer parameters.

     

    So, yeah, VPUs are dedicated to a few formats, and for each format the setup can be completely different. That can be due to configuration registers being mapped at different addresses depending on the decoded format, or to the same registers having completely different meanings depending on the format being decoded.

     

    Note that, in this case, the VPU decodes frame by frame.

    You cannot just "send the MKV to the VPU, get a video stream on the other end". It *clearly* doesn't have enough memory for that.

    Very roughly, the procedure goes like this (see the sketch after the lists):

    First, the user application must:

    • Get the first frame of the video stream
    • Send it to the VPU driver

    Then the VPU driver must:

    • Set up the VPU to decode the frame
    • Launch the VPU decoding process
    • Wait for the decoded result
    • Send back the result to the user application.

    Then the user application:

    • Retrieves and shows the result,
    • Rinses and repeats for every frame of the video.
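
    In code, a very rough user-space sketch of one iteration, assuming a V4L2 memory-to-memory decoder (buffer allocation, mmap and error handling all omitted):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* One decode iteration against a hypothetical V4L2 m2m decoder
     * node; VIDIOC_REQBUFS/mmap setup is assumed already done. */
    static void decode_one_frame(int fd)
    {
            struct v4l2_plane plane;
            struct v4l2_buffer buf;

            memset(&buf, 0, sizeof(buf));
            memset(&plane, 0, sizeof(plane));
            buf.memory   = V4L2_MEMORY_MMAP;
            buf.index    = 0;
            buf.m.planes = &plane;
            buf.length   = 1;

            /* 1. Queue the compressed frame on the OUTPUT queue. */
            buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
            ioctl(fd, VIDIOC_QBUF, &buf);

            /* 2. Queue a destination buffer on the CAPTURE queue. */
            buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
            ioctl(fd, VIDIOC_QBUF, &buf);

            /* ... the driver sets up the VPU, launches the decoding
             * and waits for the "done" interrupt ... */

            /* 3. Dequeue the decoded frame and show it. */
            ioctl(fd, VIDIOC_DQBUF, &buf);
    }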

    So, yeah, VPUs are not codec-agnostic, they are codec-specialized. So the driver is being set up, slowly but surely, to decode each format correctly.

  5. You should like my Wayland example, then :3

     

    Anyway, I see that Ezequiel Garcia is currently pushing patches to adapt Ayaka's V4L2 patches into something that works with plain V4L2 (with a few modifications), without the MPP layer in between.

    He pushed support for MPEG-2 decoding... I'll see if he pushes H264 support this week.

    If not, I'll try to adapt Ayaka's patches.

  6. Well, I was able to use the EGL output after patching the display initialization method. I guess I should format the patch and send it to the MPV devs for review.

     

    https://gist.githubusercontent.com/Miouyouyou/b9273ee3d949db3e1eb12f6bf99c1101/raw/cbb7d31b5ed131b3e53086c97bf0870bc3e6b3e7/0001-Use-eglGetPlatformDisplay-when-possible.patch

     

    From 93a400edcabee9de0d6b464e081aa9c562085559 Mon Sep 17 00:00:00 2001
    From: Myy Miouyouyou <myy@miouyouyou.fr>
    Date: Fri, 1 Feb 2019 17:13:57 +0000
    Subject: [PATCH] Use eglGetPlatformDisplay when possible
    
    And then fall back on eglGetDisplay if the initialization fails...
    That said, currently the code only handles eglGetPlatformDisplay
    with GBM, in order to initialize displays with DRM/KMS backends.
    
    Signed-off-by: Myy Miouyouyou <myy@miouyouyou.fr>
    ---
     video/out/opengl/context_drm_egl.c | 11 ++++++++++-
     1 file changed, 10 insertions(+), 1 deletion(-)
    
    diff --git a/video/out/opengl/context_drm_egl.c b/video/out/opengl/context_drm_egl.c
    index 6aa3d95..de118a5 100644
    --- a/video/out/opengl/context_drm_egl.c
    +++ b/video/out/opengl/context_drm_egl.c
    @@ -158,9 +158,18 @@ static bool init_egl(struct ra_ctx *ctx)
     {
         struct priv *p = ctx->priv;
         MP_VERBOSE(ctx, "Initializing EGL\n");
    -    p->egl.display = eglGetDisplay(p->gbm.device);
    +    PFNEGLGETPLATFORMDISPLAYEXTPROC get_platform_display = NULL;
    +    get_platform_display = (void *) eglGetProcAddress("eglGetPlatformDisplayEXT");
    +    if (get_platform_display)
    +        p->egl.display = get_platform_display(EGL_PLATFORM_GBM_KHR, p->gbm.device, NULL);
    +    else {
    +        MP_ERR(ctx, "WHAT !?");
    +        p->egl.display = eglGetDisplay(p->gbm.device);
    +    }
    +
         if (p->egl.display == EGL_NO_DISPLAY) {
             MP_ERR(ctx, "Failed to get EGL display.\n");
    +        MP_ERR(ctx, "Error : %d\n", eglGetError());
             return false;
         }
         if (!eglInitialize(p->egl.display, NULL, NULL)) {
    -- 
    2.7.4

     

  7. MPP/RKMPP is the Rockchip Media Process Platform.

    A set of libraries, made by Rockchip, to communicate with their VPU driver. The thing is done in such a way that the "driver" basically only handles a few things, like memory management.

    The actual registers of the hardware are known by MPP and set up by this library, then sent to the driver, which almost blindly writes the register values into the hardware, or reads them back and sends them to MPP.

    Which means that, even if you have the sources of the Rockchip VPU driver, you need the sources of MPP to understand how the hardware is actually programmed, based on the format you want to decode/encode.
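    To illustrate the split (everything below is invented for illustration, NOT the real MPP ABI):

    #include <stdint.h>
    #include <sys/ioctl.h>

    #define MY_VPU_REG_COUNT 64 /* hypothetical register file size */

    struct my_vpu_regs {
            uint32_t values[MY_VPU_REG_COUNT];
    };

    static void run_decode(int vpu_fd, unsigned long set_regs_ioctl)
    {
            struct my_vpu_regs regs = {0};

            /* User space (the MPP library) knows the hardware: it
             * computes every register value for the chosen codec
             * and the current frame... */
            regs.values[0] = 0x1; /* hypothetical "start" bit */

            /* ...and the kernel driver just replays the blob into
             * the hardware, roughly:
             *   for (i = 0; i < count; i++)
             *           writel(values[i], base + i * 4);
             */
            ioctl(vpu_fd, set_regs_ioctl, &regs);
    }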

    This is the kind of setup that makes you wonder: who's the real "driver"?

    http://opensource.rock-chips.com/wiki_Mpp

     

    FFmpeg is one of the most famous multimedia processing libraries and tools. It can take audio/video from different sources and combine/convert them into a LOT of formats.

    It comes as a library AND as a binary, and is one of the Swiss-army knives of audio/video processing.

    https://ffmpeg.org/

     

    MPV is a media player, a fork of mplayer2, which uses FFmpeg as a backend. It currently has a RKMPP backend to decode video frames using the RKMPP libraries.

    https://mpv.io/

     

    H264 is a video format.

    https://en.wikipedia.org/wiki/H.264/MPEG-4_AVC

     

    The I-frames in H264 are reference (key) frames, from which the other kinds of frames (B/P frames) will be generated. The I-frame is basically the full frame, while the B/P frames are basically "patches" applied to I-frames to get the new picture.

    The "patches" being generally smaller than the I-frame, you get one way to "compress" the video (on top of various others used simultaneously).

    https://en.wikipedia.org/wiki/Inter_frame

  8. The crash happened at mpp_iommu_detach, so there was some driver communication... Maybe I could use that to dump the registers when playing an I-frames-only H264 file, and see how it enables and feeds the hardware...

  9. After fighting to get FFmpeg compiled, then MPV compiled (with Debian putting the ffmpeg includes inside /usr/include/arm-what-ever-arch-hf/, and /usr/include/arm-what-ever-arch-hf/ having priority over /usr/include!), and then patching MPV to get the OpenGL display working correctly (stop using eglGetDisplay when you can use eglGetPlatformDisplayEXT!), I got the same error as you had...

     

    And then I realized that the error states that the buffer allocation failed...

     

    So I tried as root and it led to RKMPP failing to get the right frames in time. And then it crashed... badly.

     

    I'll retry with GStreamer, since Rockchip seems to love GStreamer. And if that doesn't work, then the VPU is still in a shit state... But at least they can communicate with it, so I guess it's something...

  10. Wasted roughly 1 hour recompiling ffmpeg, only to get "relocation R_ARM_THM_MOVW_ABS_NC against 'ff_vector_clip_int32_neon' can not be used when making a shared object; recompile with -fPIC"....

     

    Great...

    I'll recompile with "-fPIC" tomorrow...
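
    Probably something like this, if I remember FFmpeg's usual configure switches correctly:

    ./configure --enable-pic --extra-cflags="-fPIC" ...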

  11. I guess this might need either a newer version of RKMPP, which can use the MPP-Service thing, or some setup that I have no idea about...

     

    Ugh... Couldn't they just use V4L2 from the beginning... I'll provide a recently compiled version of MPP (the library) tomorrow.

  12. Ah, the rockchip_vpu driver is the driver written by Ezequiel Garcia, which only supports JPEG encoding at the moment.

     

    The driver is named rockchip_mpp. You might want to disable the Rockchip VPU in the staging drivers, and enable Rockchip MPP, which is the option at the bottom of the staging section.

    If you can't see it, maybe the integration within the Kconfig or Makefile isn't done correctly (´・ω・` ).

  13. Be sure that ROCKCHIP_MPP is enabled in the kernel configuration. It's not automatically set up.
    The driver is available in "Device Drivers -> Staging drivers -> Rockchip MPP".
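
    In the resulting .config, that should translate to something like this (assuming the Kconfig symbol matches the option name):

    CONFIG_ROCKCHIP_MPP=y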

    Generally you get some warning, like for the Realtek staging driver, telling you that the code isn't that great, etc...

  14. If you get a kernel panic, just try to get a photo or a transcription of what's happening.

     

    I'd highly recommend getting a USB<->RS232 adapter that you can plug into your computer and the Tinkerboard, in order to get serial output when the kernel crashes. That way, you can launch "picocom" on your computer and do something like:

    picocom -b 115200 /dev/ttyUSB0

     

    And then copy-paste the big error message that appears on the serial console.

     

    To quit picocom, you'll need to press CTRL+A then CTRL+Q.

  15. Okay, the driver is loading now... So now, I have to pull out a setup that can use "mpp service"... @JMCC Is your setup usable with the MPP service?

     

    I'll try to rewrite the patches and upload them, along with a complete build, in order to let people play with it, see if they can get something out of it.

     

    There's still a *BIG* fat warning: [    6.565010] Failed to attached device ff9a0000.video-codec to IOMMU_mapping

    Now... The nodes have been split in two, and ff9a0000.video-codec is the video encoder. The video decoder is at ff9a0400.video-codec, so... maybe it's working fine?

    It's still highly probable that it will just crash in a very bad way.

     

    Anyway, I'll need some testers that are used to the whole MPP stuff.

  16. I tried to integrate the driver and was able to compile it after a few modifications. However, while I can load it manually, it seems to fail (silently) to initialize the hardware, so I guess I need to review the DTS nodes (and the code).

     

    However, the driver still needs modifications:

    1. #include <soc/rockchip/pm_domains.h> does not exist in mainline kernels.

    2. rockchip_pmu_idle_request does not exist in mainline kernels. However, I already have an old patch that adds it. That said, I'm not sure the added code is valid.

    Using my patch, you need to add #include <linux/rockchip_pmu.h> to each mpp_dev_ file... I guess I'll rewrite my patch to generate <soc/rockchip/pm_domains.h> instead, and avoid additional modifications.
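
    That header would be a minimal sketch like this (the prototype is my assumption, based on what the Rockchip 4.4 kernel exposes):

    #ifndef __SOC_ROCKCHIP_PM_DOMAINS_H__
    #define __SOC_ROCKCHIP_PM_DOMAINS_H__

    struct device;

    /* Assumed prototype, to be checked against the vendor kernel. */
    int rockchip_pmu_idle_request(struct device *dev, bool idle);

    #endif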

     

    So, yeah, it compiles, it loads, it does nothing. I'll have a look at the last part.

  17. I will give the RK3399 a try. Note that Ayaka's patches on the mailing list provided DTS bindings for the RK3399, so those might be what you're looking for.

    https://lkml.org/lkml/2019/1/5/124

     

    About Chromebook-specific kernels, you might be able to adapt the patches to mainline kernels and get them working. But the patched kernels are generally frozen way behind mainline, so it might be better to just use the Rockchip 4.4 kernel at that point.

     

    Now, the patch doesn't provide DTS bindings for the RK3288, so I'll have to work on that this week.
