

Everything posted by usual user
-
I don't know the inner structure of ffmpeg, and I don't know which actions trigger the corresponding compiler switches, but since the request-api is only an extension of v4l2, I would guess that base v4l2 support is also needed. Does this system run on current mainline software, or does it use legacy code?
-
Maybe the firmware will get uploaded to LVFS; then it could be installed in different ways out-of-the-box.
-
I don't know much about the ffmpeg framework and even less about the interpretation of its protocols, but IMHO the display subsystem ([vo/gpu/opengl/kms] Selected mode: PAL (720x576@50.00Hz)) does not provide the necessary prerequisites to serve the hardware decoder output. That is the reason why the VPU is not selected. I don't know whether this is a hardware limitation or just a lack of display driver support for your device. As @jernej has confirmed, drm_prime is necessary for VPU usage, and it is not mentioned in your logs. Xwindow is also only based on drm/kms and therefore cannot bring any improvement. Even if the display subsystem provided all the prerequisites, it would not be a good choice, as it does not work efficiently with VPUs by design. Here, a desktop environment based on a Wayland backend delivers better results.

No, I haven't built my own system either. Debian-based systems are IMHO way too stable, i.e. outdated. I prefer a system that follows mainline releases more closely. I'm using an RPM-based system, and I just need to recompile the ffmpeg source package supplemented with my extensions. All I have to do is install ffmpeg-5.1.2-3.fc37.src.rpm, copy the patches into the ~/rpmbuild/SOURCES directory and make these changes to the ~/rpmbuild/SPECS/ffmpeg.spec file: After that, "dnf builddep ~/rpmbuild/SPECS/ffmpeg.spec" installs all the necessary dependencies, and I can leave everything else to my build system with "rpmbuild -ba ~/rpmbuild/SPECS/ffmpeg.spec".

Because this procedure can be applied to any RPM package, I can rebuild any package in my distribution to make changes if necessary. Since only a small proportion of packages are device-dependent, it is absolutely not necessary to compile a whole distribution myself. I only have to build the kernel and firmware (uboot-tools) packages. In addition, there are mesa, gstreamer1 and ffmpeg to keep up to date, but these are not really device-specific. I no longer have to build mesa and gstreamer1 myself, because current releases already contain everything I need.

So I have a generic system that works for all devices of an architecture, as long as there is sufficient mainline support in kernel and firmware. I use this system on my CuBox, my HummingBoard, my NanoPC-T4, my Odroid-N2+ and my Odroid-M1. All devices of the same architecture can use the same SD card. And since I can build firmware for the Odroid-C4, Odroid-HC4 and ROCK Pi 4B, whose functionality is confirmed, my SD card will also work there in the same way. To support other devices, only a suitable firmware and a corresponding DTB are necessary. As soon as rkvdec2 driver support for an rk35xx SoC becomes available in the kernel, my Odroid-M1 will also work with it without further intervention, since all userspace programs are already available and already use the other decoder IPs today; they just need to select a different /dev/videoX. This is the outcome of my build: a generic build that serves all devices of the architecture for which it was built.
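The rebuild procedure described above can be sketched as a command sequence. This is a sketch only, assuming a Fedora 37 host; the patch file names are placeholders, and the actual spec-file edits are not reproduced here:

```shell
# Install the source package; it unpacks into ~/rpmbuild/{SOURCES,SPECS}.
dnf install ffmpeg-5.1.2-3.fc37.src.rpm

# Copy the request-api patches next to the other sources
# (file names are hypothetical placeholders).
cp request-api-*.patch ~/rpmbuild/SOURCES/

# Edit ~/rpmbuild/SPECS/ffmpeg.spec to reference the new patches
# (PatchNNNN: lines plus the matching apply step), then:

# Pull in all build dependencies and rebuild the package.
dnf builddep ~/rpmbuild/SPECS/ffmpeg.spec
rpmbuild -ba ~/rpmbuild/SPECS/ffmpeg.spec
```

The same sequence works for any other source RPM in the distribution, which is what makes the approach generic.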
-
I don't even know what "vainfo" is. I'm running a system that is built from pure mainline, not specially designed for my devices, but it uses everything that mainline provides to full capacity. I just have to patch in the request-api, as that is missing in mainline. In a terminal, run:

mpv -v --hwdec=drm --hwdec-codecs=all youtube720p.mp4

Quit playback and provide the log to see what is going on.
-
I barely use Armbian, so I can't really say anything about the nature of its implementation; it's probably up to the user's decision. In terms of file names, it could follow a scheme similar to how I use it. Since the use of NOR flash influences the usable eMMC speed, Armbian probably also offers a variant with and without its support. I also use the DTB filename to express which features I've added. E.g. one of my DTBs is meson-g12b-odroid-n2-plus-con1-opp-uart.dtb, with multiple overlays applied. This way I just need to replace the original DTB with the desired one; this works in any environment, regardless of an overlay framework.
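One way to produce such a pre-merged DTB is the fdtoverlay tool from the dtc package, which applies overlays offline. A sketch, where the overlay file names (con1.dtbo, opp.dtbo, uart.dtbo) are hypothetical and stand in for whatever features are being added:

```shell
# Apply overlays to the base DTB and name the output after the added features.
fdtoverlay -i meson-g12b-odroid-n2-plus.dtb \
           -o meson-g12b-odroid-n2-plus-con1-opp-uart.dtb \
           con1.dtbo opp.dtbo uart.dtbo
```

The merged file can then simply replace the original DTB in the boot partition, with no overlay framework needed at boot time.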
-
ffmpeg -hide_banner -decoders | grep request returns NOTHING for me as well, but hwdec is working as expected. So your statement does not seem to be correct. From "man mpv":

NOTE: Even if enabled, hardware decoding is still only white-listed for some codecs. See --hwdec-codecs to enable hardware decoding in more cases.

Which method to choose?

If you only want to enable hardware decoding at runtime, don't set the parameter, or put hwdec=no into your mpv.conf (relevant on distros which force-enable it by default, such as on Ubuntu). Use the Ctrl+h default binding to enable it at runtime.

If you're not sure, but want hardware decoding always enabled by default, put hwdec=auto-safe into your mpv.conf, and acknowledge that this may cause problems.

If you want to test available hardware decoding methods, pass --hwdec=auto --hwdec-codecs=all and look at the terminal output.

As the hacked-in request-api codecs are not white-listed in mpv proper by default ==> use "--hwdec=auto --hwdec-codecs=all"
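If you don't want to pass those options on every invocation, the same settings can go into mpv.conf. A sketch; whether to widen the white-list permanently (instead of the safer hwdec=auto-safe mentioned in the man page) is your own trade-off:

```
# ~/.config/mpv/mpv.conf
hwdec=auto
hwdec-codecs=all
```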
-
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/devicetree/bindings/serial/amlogic,meson-uart.yaml
-
A patch set should be applied to the source for which it was designed. Even if it applies cleanly to another source, there is no guarantee that a change in a non-conflicting place will not affect functionality. The specific ffmpeg version is not decisive here, since the request-api has been patchable in for quite some time; e.g. I am currently still using a 5.x for other reasons. So if you've cloned the ffmpeg git repository, a corresponding version is just a "git checkout <commit>" away. I grabbed the patches from @jernej's GitHub at the time, as his branch names indicate which version they belong to. Of course, the 5.x ones are long gone.
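Checking out the matching version and applying a patch set looks roughly like this. A sketch: the tag n5.1.2 matches the 5.1.2 version mentioned earlier in the thread, and the patch directory is a hypothetical placeholder for wherever you saved the request-api patches:

```shell
# Clone ffmpeg and switch to the release the patch set was made for.
git clone https://git.ffmpeg.org/ffmpeg.git
cd ffmpeg
git checkout n5.1.2

# Apply the request-api patch series (path is a placeholder).
git am ../request-api-patches/*.patch
```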
-
@Kwiboo did a fantastic job. Mainline u-boot is becoming usable (console.log) for me. It enumerates my different systems on different storage media and can natively load the boot components from them. The use of compressed kernels does not work yet; it bails out like this: The u-boot control still has to be done over the serial console, as HDMI support is still missing, but it is still an early stage and development is lively. In the meantime, I have solved the problem with loading compressed kernels for myself. Also, I can now use a USB keyboard (usbkbd) for stdin. Both require a correction of the mainline code, but since unfortunately too many in-flight patches have not yet landed, it is still too early to go into detail here. If you already want to play with the functionality that mainline u-boot will provide out-of-the-box in the future, I have uploaded my firmware build here. The firmware can be put in place with:

dd bs=512 seek=64 conv=notrunc,fsync if=u-boot-rockchip.bin of=/dev/${entire-device-to-be-used}

into the firmware area of an SD card or an eMMC. If it is started during power-up by holding the SPI recovery button (RCY), it will scan all connected storage media for a usable bootflow. Supported boot methods are: extlinux (extlinux.conf), efiboot and legacy boot (boot.scr). As soon as video support is available, "U-Boot Standard Boot" will be fully usable, but VOP2 support is still open.
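A concrete instance of the dd command above, assuming the target SD card appears as /dev/mmcblk0 (a hypothetical device name; verify yours first, since writing to the wrong device destroys its contents):

```shell
# Identify the target device before writing anything.
lsblk

# Write the firmware at sector 64 (offset 32 KiB), the Rockchip boot offset,
# without truncating the device and with a final fsync.
dd bs=512 seek=64 conv=notrunc,fsync if=u-boot-rockchip.bin of=/dev/mmcblk0
```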
-
Sorry, I don't know which "mainline" gstreamer you are on. I can't see a line 134 in gstv4l2codec.c, nor does gstreamer have a subprojects/FFmpeg.
-
That report doesn't look that bad. If the decoder is exported correctly, i.e. it is correctly described in the DTB and its driver has been loaded automatically, you only need a corresponding video framework that uses it. Depending on the application you plan to use, this is either ffmpeg or gstreamer. For ffmpeg you need one with a properly patched-in request-api. For gstreamer, it is sufficient to use a current mainline release with these elements built:

gst-inspect-1.0 | grep v4l2
v4l2codecs:  v4l2slh264dec: V4L2 Stateless H.264 Video Decoder
v4l2codecs:  v4l2slmpeg2dec: V4L2 Stateless Mpeg2 Video Decoder
v4l2codecs:  v4l2slvp8alphadecodebin: VP8 Alpha Decoder
v4l2codecs:  v4l2slvp8dec: V4L2 Stateless VP8 Video Decoder

gst-play-1.0 --use-playbin3 bbb_sunflower_1080p_60fps_normal.mp4

will assemble a video pipeline that uses the hardware decoder out-of-the-box.

mpv --hwdec=auto --hwdec-codecs=all bbb_sunflower_1080p_60fps_normal.mp4

will use it via "hwdec: drm" if a properly patched ffmpeg framework is available. See mpv.log for reference.
-
To check whether the kernel exposes a suitable decoder, run a script like this:

#!/bin/bash
for I in {0..12} ; do
  printf "################################################################################\n\n"
  v4l2-compliance --device=/dev/video${I}
  EXITSTATUS="${?}"
  [ "${EXITSTATUS}" == "0" ] || printf "Compliance test for device /dev/video${I} failed, exitstatus: ${EXITSTATUS}\n"
  printf "\n"
done

You need v4l2-compliance from the v4l-utils-tools.
-
When the request-api is patched in properly, you are looking for:

ffmpeg -hide_banner -decoders | grep v4l2
V..... h263_v4l2m2m  V4L2 mem2mem H.263 decoder wrapper (codec h263)
V..... h264_v4l2m2m  V4L2 mem2mem H.264 decoder wrapper (codec h264)
V..... hevc_v4l2m2m  V4L2 mem2mem HEVC decoder wrapper (codec hevc)
V..... mpeg1_v4l2m2m V4L2 mem2mem MPEG1 decoder wrapper (codec mpeg1video)
V..... mpeg2_v4l2m2m V4L2 mem2mem MPEG2 decoder wrapper (codec mpeg2video)
V..... mpeg4_v4l2m2m V4L2 mem2mem MPEG4 decoder wrapper (codec mpeg4)
V..... vc1_v4l2m2m   V4L2 mem2mem VC1 decoder wrapper (codec vc1)
V..... vp8_v4l2m2m   V4L2 mem2mem VP8 decoder wrapper (codec vp8)
V..... vp9_v4l2m2m   V4L2 mem2mem VP9 decoder wrapper (codec vp9)

ffmpeg -hide_banner -hwaccels
Hardware acceleration methods:
vdpau
cuda
vaapi
drm
opencl
vulkan

And mpv (shift+i) will use it via the drm hwaccel. Either the gstreamer version you use is too old to have out-of-the-box support at all, or the build of a current mainline version has not activated the necessary components.
-
While exploring my Info Center, I stumbled across this screen: If there are other users who use a Samsung SSD 980 PRO 2TB, they may also be interested in following this link: https://www.pugetsystems.com/support/guides/critical-samsung-ssd-firmware-update/
-
They are floating around. Some of them are in flight to land in mainline, some are WIP in heavy flux. In terms of functionality, mainline would not be able to do more than legacy so far. And since the legacy firmware you offer works so far, there is currently no reason to do the work now to use the current mainline version. It is only interesting for someone who wants to add e.g. VOP2 or NVMe driver support and port the drivers from the Linux kernel. For pure users it is easier to wait until all outstanding components have landed and then just use mainline. Anyway, my SPEC file for the uboot-tools package is now ready to build M1 firmware as soon as needed. However, this will certainly take until 2023.07 at the earliest, because there are still some components under discussion and 2023.04 should be available next week.
-
Since @Igor doesn't like my source of knowledge, I'll leave it to him to answer such Armbian user queries now, because in his opinion sharing knowledge and having mainline support doesn't seem to be of any help to Armbian 😉
-
My first u-boot build: