Posts posted by usual user

  1. 8 hours ago, Dantes said:

    Any idea's which package I am missing?

    It is not a missing package; it is the lack of basic support for V4L2-M2M hardware-accelerated video decode in the application in the first place. The next showstopper is the lack of V4L2-M2M support in the ffmpeg framework; there are some hack patches in the wild, but no out-of-the-box solution. The showstopper after that is the lack of kernel support for the SOC's video decoder IP.
    The lack of support in the application is a general problem for all devices that provide their decoders via V4L2-M2M, and the lack of support in the ffmpeg framework is a problem for all applications based on it. Gstreamer-based applications, on the other hand, offer out-of-the-box solutions, as the necessary support is already integrated into the mainline gstreamer framework.
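    A quick way to see which framework carries the necessary support is to list the available V4L2 decoders (a sketch; assumes ffmpeg and gst-inspect-1.0 are in PATH, and that a stock ffmpeg only lists the stateful *_v4l2m2m wrappers):

```shell
# Stock ffmpeg exposes only stateful *_v4l2m2m wrapper decoders; mainline
# gstreamer ships the stateless v4l2sl*dec elements out of the box.
if command -v ffmpeg >/dev/null 2>&1; then
  ffmpeg -hide_banner -decoders | grep v4l2 || echo "no v4l2 decoders in ffmpeg"
fi
if command -v gst-inspect-1.0 >/dev/null 2>&1; then
  gst-inspect-1.0 v4l2codecs || echo "v4l2codecs plugin not built"
fi
```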

  2. On 5/31/2023 at 9:17 PM, robertoj said:

    Usualuser… can you share the mesa and libgbm version you are using?

     

    dpkg -l | grep libgbm
    bash: dpkg: command not found...

    I'm currently running mesa 23.0.3, and since libgbm is an integral part of mesa, the version is of course identical. However, the exact version is not really important, as the necessary API support is already very mature. What matters is that it is built against the current headers of its BuildRequires so that it represents the status quo.
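    On a system with the mesa development files installed, the gbm version can be queried directly via pkg-config, which reads it from mesa's gbm.pc (distribution package queries like dpkg or rpm depend on the distribution's package naming):

```shell
# The gbm version always matches the mesa release it ships with.
pkg-config --modversion gbm 2>/dev/null || echo "gbm.pc not found (mesa dev files not installed)"
```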

     

    On 5/31/2023 at 9:17 PM, robertoj said:

    i have a feeling that we must do the same, but compiling ffmpeg and mpv with the latest available libgbm-dev in place… perhaps just mpv.

    Because the API between gbm and mpv does not change, while kernel patches may change the interface between the kernel and gbm (sun4i-drm_dri), it is probably more appropriate to rebuild mesa against the updated kernel headers.
    BTW, to get retroarch I would do:

    dnf install retroarch

     

  3. 12 hours ago, mrfusion said:

    So what is going on here? Am I missing a package or kernel module (concerning drm?)?

    IMHO, you're using a software stack that's way too outdated; you are missing features and improvements that have already landed. I'm not an expert in analyzing compliance logs, but if you compare your log to the one provided by @robertoj above, you'll see that your kernel lacks at least the features that theirs provides:

     Format ioctls:
    @robertoj    test VIDIOC_ENUM_FMT/FRAMESIZES/FRAMEINTERVALS: OK
    
    @mrfusion        fail: v4l2-test-formats.cpp(263): fmtdesc.description mismatch: was 'Sunxi Tiled NV12 Format', expected 'Y/CbCr 4:2:0 (32x32 Linear)'
    @mrfusion    test VIDIOC_ENUM_FMT/FRAMESIZES/FRAMEINTERVALS: FAIL
    
    @robertoj    test VIDIOC_G_FMT: OK
    @robertoj    test VIDIOC_TRY_FMT: OK
    @robertoj    test VIDIOC_S_FMT: OK
    
    @mrfusion        fail: v4l2-test-formats.cpp(460): pixelformat 32315453 (ST12) for buftype 1 not reported by ENUM_FMT
    @mrfusion    test VIDIOC_G_FMT: FAIL
    @mrfusion        fail: v4l2-test-formats.cpp(460): pixelformat 32315453 (ST12) for buftype 1 not reported by ENUM_FMT
    @mrfusion    test VIDIOC_TRY_FMT: FAIL
    @mrfusion        fail: v4l2-test-formats.cpp(460): pixelformat 32315453 (ST12) for buftype 1 not reported by ENUM_FMT
    @mrfusion    test VIDIOC_S_FMT: FAIL
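    The log excerpts above come from the standard conformance tool and can be reproduced locally against the decoder node at any time (the node number is an example; stateless decoders usually appear as an extra /dev/videoN mem2mem device):

```shell
# v4l2-compliance from v4l-utils exercises the driver's ioctl surface;
# every FAIL line marks a conformance gap like the ones quoted above.
if [ -c /dev/video0 ] && command -v v4l2-compliance >/dev/null 2>&1; then
  v4l2-compliance -d /dev/video0
else
  echo "no /dev/video0 or v4l2-compliance (v4l-utils) not installed"
fi
```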

     

  4. On 5/9/2023 at 12:59 AM, robertoj said:

    but LibreElec’s windowing system is GBM, not X11… so that’s a thing that I need to follow… somehow.

    You want a working GBM as the first step. An application can use the APIs provided by the kernel via GBM. It then becomes the master user of drm/KMS, i.e. if e.g. Kodi uses the display subsystem via GBM, it is the only user that has access to it; other graphics applications cannot use the display subsystem at the same time. A desktop environment framework accesses the display subsystem through the same kernel APIs as GBM and therefore becomes the drm/KMS master itself, but it manages the display subsystem resources so that multiple applications can use them simultaneously. For example, Kodi can be displayed in one window and a web browser in another at the same time.
    When it comes to choosing a desktop environment, Xwindow is a bad choice. It was developed for the x86 architecture and, due to its design, cannot deal efficiently with these new requirements. Its developers recognized this as well and therefore developed Wayland from scratch. So when it comes to efficiently using a video pipeline in combination with a display pipeline, a desktop environment with a Wayland backend is the first choice.
    The good thing is that nowadays the majority of the support is already available out-of-the-box in current mainline. To support a particular device, all that is needed is a mainline kernel with the necessary drivers; everything else is generic, up-to-date mainline software. For your device you have confirmed that LE works with a mainline kernel, so all the necessary code exists; it is just not yet integrated out-of-the-box into the mainline kernel.
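    Whether such a drm/KMS-capable display subsystem is present at all can be checked from userspace (assumes a mainline kernel with the DRM driver loaded):

```shell
# cardN nodes are the master-capable KMS interfaces an application like Kodi
# (via GBM) or a Wayland compositor opens; renderDN nodes are render-only.
ls -l /dev/dri/ 2>/dev/null || echo "no DRM devices - display driver not loaded"
```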

  5. 13 hours ago, robertoj said:

    I want to buy an orangepi H3 with HDMI port, so that I see if there's any difference in hardware decoding.

    With regard to the hardware decoder, this will not provide any new insights. For the video decoder it is only relevant that the subsequent display pipeline can process the decoder output format, be it GBM, Wayland, Xwindow or /dev/null.
    See the gstreamer video pipeline (gst-play-1.0-pipeline.pdf) for comparison. It is always identical, regardless of which display pipeline (e.g. GstXvImageSink) ultimately comes into play. The decoder component (v4l2slh264dec) is backed by the respective hardware and can be exchanged without modification as long as v4l2-compliance shows sufficient driver conformity.
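    Such a pipeline graph can be generated from any gstreamer application: setting GST_DEBUG_DUMP_DOT_DIR makes gstreamer write Graphviz .dot snapshots of the assembled pipeline, which can then be rendered to PDF (the media file name is an example; the dot step needs graphviz installed):

```shell
# Dump the pipeline gst-play assembles, then render the snapshots to PDF.
export GST_DEBUG_DUMP_DOT_DIR=/tmp/gst-dots
mkdir -p "$GST_DEBUG_DUMP_DOT_DIR"
if command -v gst-play-1.0 >/dev/null 2>&1; then
  gst-play-1.0 bbb_sunflower_1080p_60fps_normal.mp4
  for f in "$GST_DEBUG_DUMP_DOT_DIR"/*.dot; do
    dot -Tpdf "$f" -o "${f%.dot}.pdf"
  done
else
  echo "gst-play-1.0 not installed"
fi
```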

     

    13 hours ago, robertoj said:

    But the ultimate goal is that I get hardware decoding in analog video.

    No, the ultimate goal is to compose a video pipeline that is backed by a hardware decoder and to have a display pipeline that can display the format provided by the video pipeline in a hardware-accelerated manner.

     

    13 hours ago, robertoj said:

    I will see whether "dnf builddeps" has an equivalent in Debian

    Judging by the long dependency lists that have to be installed whenever software is built, I would guess that it does not exist. But since this is about the kernel package, I don't see a problem here, because it is self-contained and does not require any external dependencies. The real problem I see is comfortably using newly built kernels in parallel on Armbian, due to the boot method and the way Armbian installs its kernel (a single kernel, not multiple kernels in parallel). But this can be changed, as I have already demonstrated several times in other forum threads.

     

    13 hours ago, robertoj said:

    if ffmpeg hardware decoding works with "-f null -", but fails to present the video (my situation), then the problem might be the display driver:

    This works because the subsequent display pipeline (/dev/null) can handle any format.
    The display driver is only one part of the display pipeline, but it is probably the component that does not yet provide all the necessary prerequisites. This is certainly added in LE by their kernel patches so that they can use a hardware-accelerated display pipeline. A display pipeline has no business with video decoding (we're not on the x86 architecture); it deals only with already decoded content.
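    A minimal sketch of that decode-only test (the file name is an example; selecting the hardware path via "-hwaccel drm" is an assumption about a request-api-patched build, not about stock ffmpeg):

```shell
# Decode to the null muxer: the video pipeline runs, but no display pipeline
# is involved, so only the decoder side is exercised.
if command -v ffmpeg >/dev/null 2>&1 && [ -f youtube720p.mp4 ]; then
  ffmpeg -hide_banner -hwaccel drm -i youtube720p.mp4 -f null -
else
  echo "ffmpeg or test file not available"
fi
```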

  6. On 5/2/2023 at 3:40 AM, robertoj said:

    What would be the closest orange pi to orange pi zero?

    Sorry, but I don't quite understand what you're asking here. When it comes to improving the display subsystem, I would start by rebuilding the kernel package. Since you have confirmed that LibreElec works, my first approach would be to apply all relevant LE patches. But I understand that this is apparently not so easy to implement with your build system.

  7. 5 hours ago, Schwarzy said:

    Any idea how to make gstreamer use the VPU correctly ?

    Media support for the rk3399 SOC has been working for a long time with pure mainline code. With a modern software stack and current mainline releases, this even works out-of-the-box. Only the HEVC decoder support has not yet landed in the kernel, but there are WIP patches that make this usable as well. See gst-play-1.0-pipeline.pdf for reference.

  8. 7 hours ago, robertoj said:

    I didn't have: vf_deinterlace_v4l2m2m

    Hmm, when I take another look at my previously posted ffmpeg.spec addition, I see that I also carry a patchset with a name like that, taken from a branch of @jernej's GitHub. Furthermore, I see that I also apply other patches from the v4l2-drmprime-n5.1.2 branch. Do you have similar patches in your portfolio?

  9. The download link was only included because the file was already stored for another purpose and was intended to demonstrate a real-world example. The file could possibly be used for a quick test, but it is not suitable for permanent integration into Armbian as is. Files that should remain available for Armbian in the long term I usually upload in the forum. I always have my doubts about binary files, because their validity is not easy to verify, so I think the warning on the download page is just right. Since the file can only be found via the reference link, it is just as trustworthy as if it were deposited directly in the forum, and users should be aware of what they are getting.

  10. 8 hours ago, robertoj said:

    run on the OrangePiZero, and provides hardware H264, in the analog video output.

    If everything is based on mainline code, it is only a matter of reverse engineering the build configuration to replicate it in your own build system. However, you should check whether LE still applies patches to the drm/kms display subsystem in the kernel, as I don't know what the out-of-the-box support for your device looks like in this regard. This could also be a reason why the VPU cannot be used, and it is not something userspace code is involved in.

  11. 11 hours ago, hi-ko said:

    they suggested to send the ssd in.

    Wow, a great way to get access to all user data. I hope there were no confidential data or passwords on it, because their integrity is no longer guaranteed. And if the NVME is already "read only", the user wouldn't even be able to remove them beforehand.

  12. 17 hours ago, robertoj said:

    is there any need to “—enable-v4l2”?

    I don't know the inner structure of ffmpeg and I don't know which actions trigger the corresponding compiler switches, but since the request-api is only an extension of v4l2, I would guess that the base v4l2 support is also needed.

     

    17 hours ago, robertoj said:

    same orangepi zero model, with the same CVBS pal/ntsc output, does hardware h264 everyday

    Does this system run on current mainline software or does it use legacy code?

  13. I don't know much about the ffmpeg framework and even less about the interpretation of its logs, but IMHO the display subsystem ([vo/gpu/opengl/kms] Selected mode: PAL (720x576@50.00Hz)) does not provide the necessary prerequisites to serve the hardware decoder output. That is the reason why the VPU is not selected. I don't know whether this is a hardware limitation or just a lack of display driver support for your device. As @jernej has confirmed, drm_prime is necessary for VPU usage, and it is not mentioned in your logs.

    Xwindow is also only based on drm/kms and therefore cannot bring any improvement. Even if the display subsystem provides all the prerequisites, it is not a good choice, as it does not work efficiently with VPUs by design. Here, a desktop environment based on a Wayland backend delivers better results.

    On 4/26/2023 at 9:10 AM, robertoj said:

    what ubuntu or debian did you build with armbian-build?

    Neither; I haven't even built my own system. Debian-based systems are IMHO way too stable, i.e. outdated. I prefer a system that follows mainline releases more closely.
    I'm using an RPM-based system, and I just need to recompile the ffmpeg source package after supplementing it with my extensions.

    All I have to do is to install the ffmpeg-5.1.2-3.fc37.src.rpm, copy the patches into the ~/rpmbuild/SOURCES directory and make these changes to the ~/rpmbuild/SPECS/ffmpeg.spec file:

    Spoiler
    diff ffmpeg.spec.orig ffmpeg.spec
    126a127,131
    >
    > Patch100:       v4l2-request-n5.1.2.patch
    > Patch101:       v4l2-drmprime-n5.1.2.patch
    > Patch102:       vf-deinterlace-v4l2m2m-n5.1.2.patch
    >
    175a181
    > BuildRequires:  pkgconfig(libudev)
    366a373,374
    >     --enable-libudev \\\
    >     --enable-v4l2-request \\\
    523a532,538
    > * Thu Dec 29 2022 usual user <usual.user@nowhere.org> - [5.1.2-3]
    > - BuildRequires:  pkgconfig(libudev)
    > - Apply v4l2-request-n5.1.2.patch
    > - Apply v4l2-drmprime-n5.1.2.patch
    > - Apply vf-deinterlace-v4l2m2m-n5.1.2.patch

     

    After that, "dnf builddep ~/rpmbuild/SPECS/ffmpeg.spec" installs all the necessary dependencies, and I can leave everything else to my build system with "rpmbuild -ba ~/rpmbuild/SPECS/ffmpeg.spec".
    Because this procedure can be applied to any RPM package, I can rebuild any package in my distribution to make changes if necessary.
    Since only a small proportion of packages are device-dependent, it is absolutely not necessary to compile a complete distribution myself. I only have to build the kernel and firmware (uboot-tools) packages. In addition, mesa, gstreamer1 and ffmpeg need to be up to date, but these are not really device-specific. I no longer have to build mesa and gstreamer1 myself, because current releases already contain everything I need. So I have a generic system that works for all devices of an architecture, as long as there is sufficient mainline support in kernel and firmware.
    I use this system on my CuBox, my HummingBoard, my NanoPC-T4, my Odroid-N2+ and my Odroid-M1. All devices of the same architecture can use the same SD card. And since I can build firmware for the Odroid-C4, Odroid-HC4 and ROCK Pi 4B, whose functionality is confirmed, my SD card will also work there in the same way. To support other devices, only a suitable firmware and a corresponding DTB are necessary.
    As soon as rkvdec2 driver support for the rk35xx SOCs becomes available in the kernel, my Odroid-M1 will also work with it without further intervention, since all userspace programs are already available and already use the other decoder IPs today. They just need to select a different /dev/videoX.
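    Collected into one sketch, the rebuild workflow described above looks like this (RPM-based system; assumes the src.rpm and the patch files are in the current directory and the spec has been edited as shown in the diff earlier):

```shell
# Rebuild an RPM source package with local patches added.
if command -v rpmbuild >/dev/null 2>&1 && [ -f ffmpeg-5.1.2-3.fc37.src.rpm ]; then
  rpm -i ffmpeg-5.1.2-3.fc37.src.rpm              # unpack into ~/rpmbuild
  cp ./*.patch ~/rpmbuild/SOURCES/                # stage the request-api patches
  # ...edit ~/rpmbuild/SPECS/ffmpeg.spec as in the diff above...
  dnf builddep ~/rpmbuild/SPECS/ffmpeg.spec       # install all build dependencies
  rpmbuild -ba ~/rpmbuild/SPECS/ffmpeg.spec       # build source + binary RPMs
else
  echo "rpm toolchain or src.rpm not present"
fi
```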

     

    On 4/26/2023 at 9:10 AM, robertoj said:

    can you share your ffmpeg banner (the text identifying itself and the ./configure options)?

    This is the outcome of my build:

    Spoiler
    ffmpeg version 5.1.2 Copyright (c) 2000-2022 the FFmpeg developers
      built with gcc 12 (GCC)
      configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg
     --docdir=/usr/share/doc/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64
     --mandir=/usr/share/man --arch=aarch64 --optflags='-O2 -flto=auto
     -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall
     -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS
     -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong
     -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -mbranch-protection=standard
     -fasynchronous-unwind-tables -fstack-clash-protection' --extra-ldflags='-Wl,-z,relro
     -Wl,--as-needed -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld
     -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -Wl,--build-id=sha1 '
     --extra-cflags=' -I/usr/include/rav1e' --enable-libopencore-amrnb
     --enable-libopencore-amrwb --enable-libvo-amrwbenc --enable-version3
     --enable-bzlib --enable-chromaprint --disable-crystalhd --enable-fontconfig
     --enable-frei0r --enable-gcrypt --enable-gnutls --enable-ladspa --enable-libaom
     --enable-libdav1d --enable-libass --enable-libbluray --enable-libbs2b
     --enable-libcdio --enable-libdrm --enable-libjack --enable-libfreetype
     --enable-libfribidi --enable-libgsm --enable-libilbc --enable-libmp3lame
     --enable-libmysofa --enable-nvenc --enable-openal --enable-opencl --enable-opengl
     --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse
     --enable-librsvg --enable-librav1e --enable-librubberband --enable-libsmbclient
     --enable-version3 --enable-libsnappy --enable-libsoxr --enable-libspeex
     --enable-libsrt --enable-libssh --enable-libtesseract --enable-libtheora
     --enable-libtwolame --enable-libvorbis --enable-libudev --enable-v4l2-request
     --enable-libv4l2 --enable-libvidstab --enable-libvpx --enable-vulkan
     --enable-libshaderc --enable-libwebp --enable-libx264 --enable-libx265
     --enable-libxvid --enable-libxml2 --enable-libzimg --enable-libzmq --enable-libzvbi
     --enable-lv2 --enable-avfilter --enable-libmodplug --enable-postproc
     --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug
     --disable-stripping --shlibdir=/usr/lib64 --enable-lto
      libavutil      57. 28.100 / 57. 28.100
      libavcodec     59. 37.100 / 59. 37.100
      libavformat    59. 27.100 / 59. 27.100
      libavdevice    59.  7.100 / 59.  7.100
      libavfilter     8. 44.100 /  8. 44.100
      libswscale      6.  7.100 /  6.  7.100
      libswresample   4.  7.100 /  4.  7.100
      libpostproc    56.  6.100 / 56.  6.100
    Hyper fast Audio and Video encoder
    usage: ffmpeg [options] [[infile options] -i infile]... {[outfile options] outfile}...
    
    Use -h to get full help or, even better, run 'man ffmpeg'

     

    It is a generic build that serves all devices of the architecture for which it was built.

  14. 6 hours ago, robertoj said:

    Usual user… what do you get with “vainfo” … does it matter?

    I don't even know what "vainfo" is.

     

    6 hours ago, robertoj said:

    Are there other components at play, more required Linux patches, libraries sandwiched between Linux and ffmpeg?

    I'm running a system that is built from pure mainline, not even specially designed for my devices, but it uses what mainline provides to its full capacity.
    I just have to patch in the request-api, as that is missing in mainline.

     

    1 hour ago, robertoj said:

    mpv --hwdec=drm --hwdec-codecs=all youtube720p.mp4 decodes with CPU

    In a terminal run:

    mpv -v --hwdec=drm --hwdec-codecs=all youtube720p.mp4

    Quit playback and provide the log to see what is going on.

  15. 10 hours ago, elsabz said:

    I'm new in armbian

    I barely use Armbian myself, so I can't really say anything about the nature of its implementation.

    10 hours ago, elsabz said:

    Where is it determined which file is to be used?

    It's probably up to the user's decision. In terms of file name, it could follow a scheme similar to the one I use. Since the use of the NOR flash influences the usable eMMC speed, Armbian probably also offers a variant with and without its support. I also use the DTB filename to express which features I've added. E.g. one of my DTBs is meson-g12b-odroid-n2-plus-con1-opp-uart.dtb, with multiple overlays applied. This way I just need to replace the original DTB with the desired one, and this works in any environment regardless of an overlay framework.
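    Pre-applying overlays to a base DTB can be done with fdtoverlay from the dtc package, which produces such a combined DTB without any runtime overlay framework (the file names here are examples following the naming scheme above):

```shell
# Merge the overlays into the base DTB and give the result a name that
# encodes the applied features.
if command -v fdtoverlay >/dev/null 2>&1; then
  fdtoverlay -i meson-g12b-odroid-n2-plus.dtb \
             -o meson-g12b-odroid-n2-plus-con1-opp-uart.dtb \
             con1.dtbo opp.dtbo uart.dtbo || echo "base DTB or overlays missing"
else
  echo "fdtoverlay (dtc package) not installed"
fi
```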

  16. 15 hours ago, jernej said:

    *_v4l2m2m codecs are for v4l2 stateful drivers, not request api.

     

    ffmpeg -hide_banner -decoders | grep request

    returns NOTHING for me as well, but hwdec is working as expected. So your statement does not seem to be correct.

    From "man mpv":

    NOTE:
    
    Even if enabled, hardware decoding is still only white-listed for some codecs. See --hwdec-codecs to enable hardware decoding in more cases.
    Which method to choose?
    If you only want to enable hardware decoding at runtime, don't set the parameter, or put hwdec=no into your mpv.conf (relevant on distros which force-enable it by default, such as on Ubuntu). Use the Ctrl+h default binding to enable it at runtime.
    If you're not sure, but want hardware decoding always enabled by default, put hwdec=auto-safe into your mpv.conf, and acknowledge that this may cause problems.
    If you want to test available hardware decoding methods, pass --hwdec=auto --hwdec-codecs=all and look at the terminal output.

    As the hacked-in request-api codecs are not white-listed in mpv proper by default ==> use "--hwdec=auto --hwdec-codecs=all".

    Spoiler
    mpv --hwdec=auto --hwdec-codecs=all bbb_sunflower_1080p_60fps_normal.mp4
    
     (+) Video --vid=1 (*) (h264 1920x1080 60.000fps)
     (+) Audio --aid=1 (*) (mp3 2ch 48000Hz)
         Audio --aid=2 (*) (ac3 6ch 48000Hz)
    File tags:
     Artist: Blender Foundation 2008, Janus Bager Kristensen 2013
     Comment: Creative Commons Attribution 3.0 - http://bbb3d.renderfarming.net
     Composer: Sacha Goedegebure
     Genre: Animation
     Title: Big Buck Bunny, Sunflower version
    Cannot load libcuda.so.1
    [ffmpeg] AVHWDeviceContext: Cannot load libcuda.so.1
    [ffmpeg] AVHWDeviceContext: Could not dynamically load CUDA
    Failed to open VDPAU backend libvdpau_nvidia.so: cannot open shared object file: No such file or directory
    Using hardware decoding (drm).
    AO: [pipewire] 48000Hz stereo 2ch floatp
    VO: [gpu] 1920x1080 drm_prime[nv12]
    AV: 00:00:04 / 00:10:34 (1%) A-V:  0.011 Dropped: 131
    
    Exiting... (Quit)
    
    Verbose log | grep vd]:
    
    [vd] Container reported FPS: 60.000000
    [vd] Codec list:
    [vd]     h264 - H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
    [vd]     h264_v4l2m2m (h264) - V4L2 mem2mem H.264 decoder wrapper
    [vd]     h264_cuvid (h264) - Nvidia CUVID H264 decoder
    [vd] Opening decoder h264
    [vd] Looking at hwdec h264-nvdec...
    [vd] Could not create device.
    [vd] Looking at hwdec h264-vaapi...
    [vd] Could not create device.
    [vd] Looking at hwdec h264-vdpau...
    [vd] Could not create device.
    [vd] Looking at hwdec h264-nvdec-copy...
    [vd] Could not create device.
    [vd] Looking at hwdec h264-vaapi-copy...
    [vd] Could not create device.
    [vd] Looking at hwdec h264-vdpau-copy...
    [vd] Error when calling vdp_device_create_x11: 1
    [vd] Could not create device.
    [vd] Looking at hwdec h264-drm...
    [vd] Trying hardware decoding via h264-drm.
    [vd] Selected codec: h264 (H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10)
    [vd] Pixel formats supported by decoder: vdpau cuda vaapi drm_prime yuv420p
    [vd] Codec profile: High (0x64)
    [vd] Requesting pixfmt 'drm_prime' from decoder.
    [vd] Using hardware decoding (drm).
    [vd] Decoder format: 1920x1080 drm_prime[nv12] auto/auto/auto/auto/auto CL=mpeg2/4/h264

     

     

  17. 22 hours ago, robertoj said:

    There was only 1 hunk error.

    A patch set should be applied to the source it was designed for; even if it applies cleanly to another source, there is no guarantee that a change in a non-conflicting place will not affect functionality.
    The specific ffmpeg version is not decisive here, since the request-api has been patchable in for quite some time; e.g. I am currently still using a 5.x for other reasons.
    So if you've cloned the ffmpeg git repository, a corresponding version is just a "git checkout <commit>" away.
    I grabbed the patches from @jernej's GitHub at the time, as his branch names indicate which version they belong to. Of course, the 5.x ones are long gone.
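    In practice that looks like this (a sketch; the tag and the patch directory are examples, assuming the patch set was made for the 5.1.2 release):

```shell
# Check out the exact base version the patch set targets, then dry-run each
# patch before applying it so hunk errors surface immediately.
if [ -d ffmpeg/.git ]; then
  cd ffmpeg
  git checkout n5.1.2                  # tag matching the patch set's base
  for p in ../patches/*.patch; do
    git apply --check "$p" && git apply "$p"
  done
else
  echo "clone https://git.ffmpeg.org/ffmpeg.git first"
fi
```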

  18. @Kwiboo did a fantastic job. Mainline u-boot is becoming usable for me (console.log).
    It enumerates my different systems on different storage media and can natively load the boot components from them.
    Using compressed kernels does not work yet; it bails out like this:

    Spoiler
    ** Booting bootflow 'mmc@fe2b0000.bootdev.part_2' with distro
    Fedora-KDE-aarch64-38-20230401 Boot Options
    1:<---->ODROID-M1 fedora 50000005-01 verbose swiotlb=65535
    2:<---->ODROID-M1 fedora 50000005-01 verbose swiotlb=65535
    3:<---->ODROID-M1 fedora 50000005-01 verbose mem=4G
    4:<---->ODROID-M1 fedora 50000005-01 previous verbose
    5:<---->Fedora-KDE-aarch64-38-20230401
    6:<---->NanoPC-T4 trial-01
    7:<---->NanoPC-T4 trial-01 previous
    8:<---->NanoPC-T4 trial-01 verbose
    9:<---->Odroid-N2+ trial-01
    10:<--->Odroid-N2+ trial-01 previous
    11:<--->Odroid-N2+ SPI NOR trial-01
    Enter choice: 1:<------>ODROID-M1 fedora 50000005-01 verbose swiotlb=65535
    Retrieving file: /usr/lib/modules/linux/vmlinuz
    append: loglevel=9 root=PARTUUID=50000005-01 cma=896M coherent_pool=2M selinux=0 audit=0 console=ttyS02,1500000 console=tty0 fbcon=nodefer rootwait rootfstype=ext4 raid=noautodetect swiotlb=65535
    Retrieving file: /usr/lib/modules/linux/dtb/rockchip/rk3568-odroid-m1.dtb
       Uncompressing Kernel Image
    Moving Image from 0x2080000 to 0x2200000, end=60f0000
    ERROR: Did not find a cmdline Flattened Device Tree
    Could not find a valid device tree
    2:<---->ODROID-M1 fedora 50000005-01 verbose swiotlb=65535
    Retrieving file: /usr/lib/modules/linux/zImage
    append: loglevel=9 root=PARTUUID=50000005-01 cma=896M coherent_pool=2M selinux=0 audit=0 console=ttyS02,1500000 console=tty0 fbcon=nodefer rootwait rootfstype=ext4 raid=noautodetect swiotlb=65535
    Retrieving file: /usr/lib/modules/linux/dtb/rockchip/rk3568-odroid-m1.dtb
    Moving Image from 0x2080000 to 0x2200000, end=60f0000
    ## Flattened Device Tree blob at 0a100000
       Booting using the fdt blob at 0xa100000
    Working FDT set to a100000
    Host not halted after 16000 microseconds.
    Host not halted after 16000 microseconds.
       Loading Device Tree to 00000000eceb7000, end 00000000ecec89b3 ... OK
    Working FDT set to eceb7000
    
    Starting kernel ...

     

    The u-boot control still has to be done over the serial console, as HDMI support is still missing, but it is an early stage and development is lively.

     

    In the meantime, I have solved the problem with loading compressed kernels for me. I can also now use a USB keyboard (usbkbd) for stdin. Both require corrections to the mainline code, but since unfortunately too many in-flight patches have not yet landed, it is still too early to go into detail here. If you already want to play with the functionality that mainline u-boot will provide out-of-the-box in the future, I have uploaded my firmware build here. The firmware can be put in place with:

    dd bs=512 seek=64 conv=notrunc,fsync if=u-boot-rockchip.bin of=/dev/${entire-device-to-be-used}

    This writes it into the firmware area of an SD card or an eMMC. If u-boot is started during power-up while the SPI recovery button (RCY) is held, it will scan all connected storage media for a usable bootflow.

    Supported boot methods are: extlinux (extlinux.conf), efiboot and legacy boot (boot.scr).
    As soon as video support is available, "U-Boot Standard Boot" will be fully usable, but VOP2 support is still open.
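    For the extlinux method, a single menu entry in /boot/extlinux/extlinux.conf is enough to appear in the bootflow list. A minimal sketch modeled on the entries in the console log above (paths and options taken from that log):

```
label ODROID-M1 fedora 50000005-01 verbose
  kernel /usr/lib/modules/linux/vmlinuz
  fdt /usr/lib/modules/linux/dtb/rockchip/rk3568-odroid-m1.dtb
  append loglevel=9 root=PARTUUID=50000005-01 console=ttyS02,1500000 console=tty0 rootwait
```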

  19. On 4/19/2023 at 5:51 AM, robertoj said:

    ERROR debug information: ../sys/v4l2codecs/gstv4l2codec.c(134): gst_v4l2_codec_h264_dec_open () ...

     

    On 4/19/2023 at 5:51 AM, robertoj said:

    ../subprojects/FFmpeg/libavutil/arm/float_dsp_neon.S:268: Error: selected processor does not support `vpadd.f32 d0,d0,d0' in Thumb mode

    Sorry, I don't know which "mainline" gstreamer you are on. I can't see a line 134 in gstv4l2codec.c, nor does gstreamer have a subprojects/FFmpeg.

  20. 14 hours ago, robertoj said:

    Indeed, my /dev/video0 exists, but it failed the test

    That report doesn't look that bad.

    14 hours ago, robertoj said:

    At this point, I have zero clues about what I could do next.

    If the decoder is exported correctly, i.e. it is correctly described in the DTB and the driver has been loaded automatically, you only need a corresponding video framework that uses it. Depending on the application you plan to use, this is either ffmpeg or gstreamer. For ffmpeg you need a build with a properly patched-in request-api. For gstreamer, a current mainline release is sufficient, with these elements built:

     gst-inspect-1.0 | grep v4l2
     v4l2codecs:  v4l2slh264dec: V4L2 Stateless H.264 Video Decoder
     v4l2codecs:  v4l2slmpeg2dec: V4L2 Stateless Mpeg2 Video Decoder
     v4l2codecs:  v4l2slvp8alphadecodebin: VP8 Alpha Decoder
     v4l2codecs:  v4l2slvp8dec: V4L2 Stateless VP8 Video Decoder

     

    gst-play-1.0 --use-playbin3 bbb_sunflower_1080p_60fps_normal.mp4

    will assemble a video pipeline that uses the hardware decoder out-of-the-box.

     

    mpv --hwdec=auto --hwdec-codecs=all bbb_sunflower_1080p_60fps_normal.mp4

    will use it via "hwdec: drm" if a properly patched ffmpeg framework is available.
    See mpv.log for reference.
