

Everything posted by usual user
-
With regard to the hardware decoder, this will not provide any new insights. For the video decoder it is only relevant that the subsequent display pipeline can process the decoder output format, be it GBM, Wayland, Xwindow or /dev/null. See for comparison a gstreamer video pipeline (gst-play-1.0-pipeline.pdf): it is always identical, regardless of which display pipeline (GstXvImageSink) ultimately comes into play. The decoder component (v4l2slh264dec) is backed by the respective hardware and is interchangeable without modification, as long as v4l2-compliance shows sufficient driver conformity.

No, the ultimate goal is to compose a video pipeline that is backed by a hardware decoder, and to have a display pipeline that can display the format provided by the video pipeline in a hardware-accelerated manner.

Judging by the long dependency lists that have to be installed whenever it comes to building a piece of software, I would guess that it does not exist. But since this is about the kernel package, I don't see a problem here, because it is self-contained and does not require any external dependencies. The problem I see is rather how to conveniently use newly built kernels in parallel in Armbian. This is due to the boot method and the way Armbian installs its kernel (a single kernel, not multiple kernels in parallel). But this can be changed, as I have already demonstrated several times in other forum threads.

This works because the subsequent display pipeline (/dev/null) can handle any format. The display driver is only one part of the display pipeline, but probably the component that does not yet provide all the necessary prerequisites. This will certainly be added in LE with their kernel patches, so that they can use a hardware-accelerated display pipeline. A display pipeline has no business doing video decoding (we are not on the x86 architecture); it deals only with already decoded content.
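For illustration, a hand-built gst-launch-1.0 pipeline of that shape could look like the sketch below; the input file name is a placeholder, and autovideosink stands in for whatever display sink (xvimagesink, waylandsink, kmssink, ...) matches the running display pipeline. gst-play-1.0 assembles the equivalent automatically.

  # decode H.264 via the stateless V4L2 hardware decoder, display with
  # whichever sink fits the current display pipeline
  gst-launch-1.0 filesrc location=bbb_sunflower_1080p_60fps_normal.mp4 ! \
      qtdemux ! h264parse ! v4l2slh264dec ! videoconvert ! autovideosink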
-
Sorry, but I don't quite understand what you're asking for here. When it comes to improving the display subsystem, I would start by rebuilding my kernel package. Since you have confirmed that LibreELEC works, my first approach would be to apply all relevant LE patches. But I understand that this is apparently not so easy to implement with your build system.
-
https://docs.mesa3d.org/drivers/lima.html
-
Media support for the rk3399 SoC has been working for a long time with pure mainline code. With a modern software stack and current mainline releases, it even works out-of-the-box. Only HEVC decoder support in the kernel has not yet landed, but there are WIP patches that make it usable as well. See gst-play-1.0-pipeline.pdf for reference.
-
Hmm, taking another look at my previously posted ffmpeg.spec addition, I see that I also carry a patchset with a name like that, taken from a branch of @jernej's GitHub. Furthermore, I see that I also apply other patches from the v4l2-drmprime-n5.1.2 branch. Do you have similar patches in your portfolio?
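For reference, an ffmpeg.spec addition of that kind follows the usual RPM patch mechanics; the file names and numbers below are invented for illustration, the real ones come from the respective branch:

  # patches copied into ~/rpmbuild/SOURCES, applied automatically by
  # %autosetup/%autopatch in the %prep section
  Patch100: 0001-v4l2-request-hwaccel.patch
  Patch101: 0002-v4l2-drmprime.patch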
-
The download link was only included because the file was already stored for another purpose and was meant to demonstrate a real-world example. The file could possibly be used for a quick test, but is not suitable for permanent integration into Armbian as-is. Files that should remain available for Armbian in the long term I usually upload to the forum. I always have my doubts about binary files, because their validity is not easy to verify, so I think the warning on the download page is just right. Since the file can only be found via the reference link, it is just as trustworthy as if it had been deposited directly in the forum, and the user should be aware of what he is getting.
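At least a published checksum lets a downloader confirm that a binary is the one the uploader intended; the hash would of course have to be posted alongside the file:

  sha256sum u-boot-rockchip.bin   # compare against the published hash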
-
If everything is based on mainline code, it is only a matter of reverse-engineering the build configuration to replicate it in your own build system. However, you should check whether LE is still applying patches to the drm/kms display subsystem in the kernel, as I don't know what the out-of-the-box support for your device looks like in this regard. This could also be a reason why the VPU cannot be used, and that's not something where userspace code is involved.
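Whether LE still patches the drm/kms code is easy to check in their tree; the paths below follow LE's usual layout, but take them as assumptions:

  git clone --depth=1 https://github.com/LibreELEC/LibreELEC.tv.git
  ls LibreELEC.tv/packages/linux/patches/     # kernel patches common to all devices
  ls LibreELEC.tv/projects/*/patches/linux/   # project/device-specific kernel patches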
-
Wow, great way to get access to all user data. Let's hope there was no confidential data or passwords on it, because their confidentiality is no longer guaranteed. And if the NVMe is already "read only", the user wouldn't even be able to remove them beforehand.
-
I don't know the inner structure of ffmpeg, and I don't know which actions trigger the corresponding compiler switches, but since the request-api is only an extension of v4l2, I would guess that the base v4l2 support is also needed. Does this system run on current mainline software or does it use legacy code?
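If I remember correctly, the relevant configure knobs in a request-api-patched ffmpeg tree look roughly like this; --enable-v4l2-m2m exists in vanilla ffmpeg, while the other flag names come from the patchset and should be treated as assumptions:

  # base V4L2 mem2mem support (vanilla ffmpeg) plus the patched-in request-api
  ./configure --enable-v4l2-m2m --enable-v4l2-request --enable-libdrm --enable-libudev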
-
Maybe the firmware will get uploaded to LVFS; then it can be installed in different ways out-of-the-box.
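Once a firmware is on LVFS, the stock fwupd tooling handles the rest; the usual sequence is:

  fwupdmgr refresh       # fetch current metadata from LVFS
  fwupdmgr get-updates   # list devices with pending firmware updates
  fwupdmgr update        # download and apply them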
-
I don't know much about the ffmpeg framework and even less about the interpretation of its protocols, but IMHO the display subsystem ([vo/gpu/opengl/kms] Selected mode: PAL (720x576@50.00Hz)) does not provide the necessary prerequisites to serve the hardware decoder output. That is the reason why the VPU is not selected. I don't know if this is a hardware limitation or just a lack of display driver support for your device. As @jernej has confirmed, drm_prime is necessary for VPU usage, and it is not mentioned in your logs. Xwindow is also only based on drm/kms and therefore cannot bring any improvement. Even if the display subsystem provided all the prerequisites, it would not be a good choice, as it does not work efficiently with VPUs by design. Here, a desktop environment based on a Wayland backend delivers better results.

Neither; I haven't even built my own system. Debian-based systems are IMHO way too stable, i.e. outdated. I prefer a system that follows mainline releases more closely. I'm using an RPM-based system, and I just need to recompile the ffmpeg source package supplemented with my extensions. All I have to do is install ffmpeg-5.1.2-3.fc37.src.rpm, copy the patches into the ~/rpmbuild/SOURCES directory and make the changes posted earlier to the ~/rpmbuild/SPECS/ffmpeg.spec file. After that, a "dnf builddep ~/rpmbuild/SPECS/ffmpeg.spec" installs all the necessary dependencies, and I can leave everything else to my build system with "rpmbuild -ba ~/rpmbuild/SPECS/ffmpeg.spec".

Because this procedure can be applied to any RPM package, I can rebuild any package of my distribution to make changes if necessary. Since only a small proportion of packages is device-dependent, it is absolutely not necessary to compile a distribution completely by myself. I only have to build the kernel and firmware (uboot-tools) packages. In addition, there are mesa, gstreamer1 and ffmpeg to keep up to date, but these are not really device-specific. I don't have to build mesa and gstreamer1 myself anymore, because current releases already contain everything I need.

So I have a generic system that works for all devices of an architecture, as long as there is sufficient mainline support in kernel and firmware. I use this system on my CuBox, my HummingBoard, my NanoPC-T4, my Odroid-N2+ and my Odroid-M1. All devices of the same architecture can use the same SD card. And since I can build firmware for the Odroid-C4, Odroid-HC4 and ROCK Pi 4B, whose functionality is confirmed, my SD card will also work there in the same way. To support additional devices, only a suitable firmware and a corresponding DTB are necessary. As soon as rkvdec2 driver support for the rk35xx SoCs becomes available in the kernel, my Odroid-M1 will also work with it without further intervention, since all userspace programs are already available and already use the other decoder IPs today; they just need to select a different /dev/videoX.

This is the outcome of my build: a generic build that serves all devices of the architecture for which it was built.
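Spelled out as shell commands, the rebuild cycle described above looks like this; the patch file name pattern is a placeholder, everything else is standard RPM tooling:

  rpm -ivh ffmpeg-5.1.2-3.fc37.src.rpm             # unpack spec and sources into ~/rpmbuild
  cp v4l2-request-*.patch ~/rpmbuild/SOURCES/      # placeholder patch file names
  $EDITOR ~/rpmbuild/SPECS/ffmpeg.spec             # add the Patch/configure entries
  sudo dnf builddep ~/rpmbuild/SPECS/ffmpeg.spec   # install all build dependencies
  rpmbuild -ba ~/rpmbuild/SPECS/ffmpeg.spec        # build binary and source RPMs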
-
I don't even know what "vainfo" is. I'm running a system that is built from pure mainline, not even specially designed for my devices, but it uses to full capacity what mainline provides. I just have to patch in the request-api, as that is missing in mainline. In a terminal, run:

mpv -v --hwdec=drm --hwdec-codecs=all youtube720p.mp4

Quit the playback and provide the log so we can see what is going on.
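To capture the log for posting, mpv can write it to a file directly:

  mpv -v --hwdec=drm --hwdec-codecs=all --log-file=mpv.log youtube720p.mp4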
-
I barely use Armbian, so I can't really say anything about the nature of its implementation; that is probably up to the user's decision. In terms of file name, it could follow a scheme similar to the one I use. Since the use of NOR flash has an influence on the usable eMMC speed, Armbian probably also offers a variant with and without its support. I also use the DTB filename to express which features I've added. E.g. one of my DTBs is meson-g12b-odroid-n2-plus-con1-opp-uart.dtb, with multiple overlays applied. This way I just need to replace the original DTB with the desired one, and this works in any environment, regardless of an overlay framework.
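Such a pre-merged DTB can be produced with fdtoverlay from the dtc package; the overlay file names here are invented to match the naming scheme:

  fdtoverlay -i meson-g12b-odroid-n2-plus.dtb \
             -o meson-g12b-odroid-n2-plus-con1-opp-uart.dtb \
             con1.dtbo opp.dtbo uart.dtbo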
-
ffmpeg -hide_banner -decoders | grep request

also returns NOTHING for me, yet hwdec is working as expected. So your statement seems not to be correct.

From "man mpv":

NOTE: Even if enabled, hardware decoding is still only white-listed for some codecs. See --hwdec-codecs to enable hardware decoding in more cases.

Which method to choose?

If you only want to enable hardware decoding at runtime, don't set the parameter, or put hwdec=no into your mpv.conf (relevant on distros which force-enable it by default, such as on Ubuntu). Use the Ctrl+h default binding to enable it at runtime.

If you're not sure, but want hardware decoding always enabled by default, put hwdec=auto-safe into your mpv.conf, and acknowledge that this may cause problems.

If you want to test available hardware decoding methods, pass --hwdec=auto --hwdec-codecs=all and look at the terminal output.

As the hacked-in request-api codecs are not white-listed in mpv proper by default ==> "--hwdec=auto --hwdec-codecs=all".
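Made permanent in the user config, that corresponds to something like this sketch (~/.config/mpv/mpv.conf is mpv's standard per-user config location):

  # ~/.config/mpv/mpv.conf
  hwdec=auto
  hwdec-codecs=all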
-
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/devicetree/bindings/serial/amlogic,meson-uart.yaml
-
A patchset should be applied to the source for which it was designed; even if it applies cleanly to another source, there is no guarantee that a change landing in a spot that merely happens not to conflict will not affect functionality. The specific ffmpeg version is not decisive here, since the request-api has been patchable in for quite some time; e.g. I am currently still using a 5.x for other reasons. So if you've cloned the ffmpeg git repository, a corresponding version is just a "git checkout <commit>" away. I grabbed the patches from @jernej's GitHub at the time, as his branch names indicate which version they belong to. Of course, the 5.x ones are long gone.
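In practice that boils down to something like the following; the tag assumes ffmpeg's usual n<version> release-tag scheme, and the patch directory is a placeholder:

  git clone https://git.ffmpeg.org/ffmpeg.git
  cd ffmpeg
  git checkout n5.1.2                       # check out the version the patchset targets
  git am ../v4l2-drmprime-n5.1.2/*.patch    # apply the matching patch series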
-
@Kwiboo did a fantastic job. Mainline u-boot becomes usable (console.log) for me. It enumerates my different systems on different storage media and can natively load the boot components from them. The use of compressed kernels does not work yet; it bails out like this:

The u-boot control still has to be done over the serial console, as HDMI support is still missing, but it is an early stage and development is vivid. In the meantime, I have solved the problem with loading compressed kernels for me. Also, I can now use a USB keyboard (usbkbd) for stdin. Both require a correction of the mainline code, but since unfortunately too many in-flight patches have not yet landed, it is still too early to go into detail here.

If you already want to play with the functionality that mainline u-boot will provide out-of-the-box in the future, I have uploaded my firmware build here. The firmware can be put in place in the firmware area of an SD card or an eMMC by:

dd bs=512 seek=64 conv=notrunc,fsync if=u-boot-rockchip.bin of=/dev/${entire-device-to-be-used}

If it is started during power-up by holding the SPI recovery button (RCY), it will scan all connected storage media for a usable bootflow. Supported boot methods are: extlinux (extlinux.conf), efiboot and legacy boot (boot.scr). As soon as video support is available, "U-Boot Standard Boot" will be fully usable, but VOP2 support is still open.
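For the extlinux bootflow, a minimal extlinux.conf is enough. The sketch below assumes an Odroid-M1 (rk3568) with the usual Rockchip serial console settings; kernel path, DTB name and root device are assumptions to adjust:

  # /boot/extlinux/extlinux.conf
  label mainline
      kernel /boot/Image
      fdt /boot/dtbs/rockchip/rk3568-odroid-m1.dtb
      append root=/dev/mmcblk0p2 rw rootwait console=ttyS2,1500000n8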
-
Sorry, I don't know which "mainline" gstreamer you are on. I can't see a line 134 in gstv4l2codec.c, nor does gstreamer have a subprojects/FFmpeg.
-
That report doesn't look that bad. If the decoder is exported correctly, i.e. it is correctly described in the DTB and the driver has been loaded automatically, you only need a corresponding video framework that uses it. Depending on the application you plan to use, this is either ffmpeg or gstreamer. For ffmpeg you need one with a properly patched-in request-api. For gstreamer, it is sufficient to use a current mainline release with these elements built:

gst-inspect-1.0 | grep v4l2
v4l2codecs:  v4l2slh264dec: V4L2 Stateless H.264 Video Decoder
v4l2codecs:  v4l2slmpeg2dec: V4L2 Stateless Mpeg2 Video Decoder
v4l2codecs:  v4l2slvp8alphadecodebin: VP8 Alpha Decoder
v4l2codecs:  v4l2slvp8dec: V4L2 Stateless VP8 Video Decoder

gst-play-1.0 --use-playbin3 bbb_sunflower_1080p_60fps_normal.mp4

will assemble a video pipeline that uses the hardware decoder out-of-the-box.

mpv --hwdec=auto --hwdec-codecs=all bbb_sunflower_1080p_60fps_normal.mp4

will use it via "hwdec: drm" if a properly patched ffmpeg framework is available. See mpv.log for reference.
-
To check if the kernel exposes a suitable decoder, run a script like this:

#!/bin/bash
for I in {0..12} ; do
  printf "################################################################################\n\n"
  v4l2-compliance --device=/dev/video${I}
  EXITSTATUS="${?}"
  [ "${EXITSTATUS}" == "0" ] || printf "Compliance test for device /dev/video${I} failed, exitstatus: ${EXITSTATUS}\n"
  printf "\n"
done

You need v4l2-compliance from the v4l-utils package.
-
When the request-api is patched in properly, you are looking for:

ffmpeg -hide_banner -decoders | grep v4l2
 V..... h263_v4l2m2m         V4L2 mem2mem H.263 decoder wrapper (codec h263)
 V..... h264_v4l2m2m         V4L2 mem2mem H.264 decoder wrapper (codec h264)
 V..... hevc_v4l2m2m         V4L2 mem2mem HEVC decoder wrapper (codec hevc)
 V..... mpeg1_v4l2m2m        V4L2 mem2mem MPEG1 decoder wrapper (codec mpeg1video)
 V..... mpeg2_v4l2m2m        V4L2 mem2mem MPEG2 decoder wrapper (codec mpeg2video)
 V..... mpeg4_v4l2m2m        V4L2 mem2mem MPEG4 decoder wrapper (codec mpeg4)
 V..... vc1_v4l2m2m          V4L2 mem2mem VC1 decoder wrapper (codec vc1)
 V..... vp8_v4l2m2m          V4L2 mem2mem VP8 decoder wrapper (codec vp8)
 V..... vp9_v4l2m2m          V4L2 mem2mem VP9 decoder wrapper (codec vp9)

ffmpeg -hide_banner -hwaccels
Hardware acceleration methods:
vdpau
cuda
vaapi
drm
opencl
vulkan

And mpv (Shift+i) will use it via the drm hwaccel.

Either the gstreamer version you use is too old to have out-of-the-box support at all, or the build of a current mainline version has not activated the necessary components.
-
While exploring my Info Center, I stumbled across this screen: If there are other users who use a Samsung SSD 980 PRO 2TB, they may also be interested in following this link: https://www.pugetsystems.com/support/guides/critical-samsung-ssd-firmware-update/
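Whether a drive still runs an affected firmware revision can be checked without vendor tools; smartctl from smartmontools prints it (assuming the SSD is the first NVMe controller):

  sudo smartctl -i /dev/nvme0 | grep -i firmware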
-
They are floating around. Some of them are in flight to land in mainline, some are WIP in heavy flux. In terms of functionality, mainline would not be able to do more than legacy so far. And since your offered legacy firmware works so far, there is currently no reason to do the work of using the current mainline version now. It is only interesting for someone who wants to add e.g. VOP2 or NVMe driver support and ports the drivers from the Linux kernel. For pure users it is easier to wait until all outstanding components have landed and then just use mainline. Anyway, my SPEC file for the uboot-tools package is now ready to build M1 firmware as soon as needed. However, this will certainly take until 2023.07 at the earliest, because some components are still under discussion and 2023.04 should be available next week.
-
Since @Igor doesn't like my source of knowledge, I'll leave it to him to answer such Armbian user queries now, because in his opinion sharing knowledge and having mainline support doesn't seem to be of any help to Armbian 😉