

usual user
Everything posted by usual user
-
To use mainline kernel hardware video acceleration, an application first needs basic support for V4L2-M2M hardware-accelerated video decode. E.g. Firefox has just landed initial support for the H.264 decoder. The next showstopper would be the lack of V4L2-M2M support in the ffmpeg framework. There are some hack patches in the wild, but no out-of-the-box solution. The next showstopper would be the lack of kernel support for the SOC video decoder IP. The lack of support in the application is a general problem for all devices that provide their decoders via V4L2-M2M. The lack of support in the ffmpeg framework is a problem for all applications that are based on it. Gstreamer-based applications, on the other hand, offer out-of-the-box solutions, as the necessary support is already integrated into the mainline gstreamer framework. When it comes to choosing a desktop environment, Xwindow is a bad choice. It was developed for the x86 architecture and therefore cannot deal efficiently with V4L2-M2M requirements due to its design. Its developers also recognized this at the time and therefore developed Wayland from scratch. So when it comes to efficiently using a video pipeline in combination with a display pipeline, a desktop environment with a Wayland backend is the first choice.
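As a quick check of the GStreamer side described above, one can probe for the stateless V4L2 H.264 decoder element. This is only a sketch, assuming gst-inspect-1.0 from a mainline GStreamer build is on the path:

```shell
# Check whether the installed GStreamer provides the stateless V4L2
# H.264 decoder element (v4l2slh264dec). On a system without the kernel
# driver, or with an old GStreamer, the element is absent.
if ! command -v gst-inspect-1.0 >/dev/null 2>&1; then
    status="gst-inspect-1.0 not installed"
elif gst-inspect-1.0 v4l2slh264dec >/dev/null 2>&1; then
    status="v4l2slh264dec available"
else
    status="v4l2slh264dec missing"
fi
echo "$status"
```

If the element shows up, a GStreamer-based player should be able to use the hardware decoder out-of-the-box.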
-
Since I haven't gotten any feedback on my build so far, I don't see much sense in it. In my experience, Armbian users tend to stick to legacy methods, and my extensions only affect the upcoming U-Boot Standard Boot. In addition, the adoption of reviewed-by patches from the patchwork into the mainline tree seems to have stalled. So it may take some time until the general support is available out-of-the-box. And these are valuable improvements to the Rockchip platform, which also benefit ODROID-M1 support. Since Armbian already has a working boot method, I don't see any reason for an underpaid Armbian developer to invest his valuable time in an unfinished solution that anyone can have for free when everything finally lands. For me, it's a different story because my preferred distro requires different prerequisites. I had made my build available so that others could save themselves the trouble of self-building, but there doesn't seem to be any particular interest.
-
Not a missing package; it is the lack of basic support for V4L2-M2M hardware-accelerated video decode in the application in the first place. The next showstopper would be the lack of V4L2-M2M support in the ffmpeg framework. There are some hack patches in the wild, but no out-of-the-box solution. The next showstopper would be the lack of kernel support for the SOC video decoder IP. The lack of support in the application is a general problem for all devices that provide their decoders via V4L2-M2M. The lack of support in the ffmpeg framework is a problem for all applications that are based on it. Gstreamer-based applications, on the other hand, offer out-of-the-box solutions, as the necessary support is already integrated into the mainline gstreamer framework.
-
dpkg -l | grep libgbm returns "bash: dpkg: command not found..." here. I'm currently running mesa 23.0.3, and since libgbm is an integral part of mesa, the version is of course identical. However, the current version is not really important, as the necessary API support is already very mature. It is only important that it is built with the current headers of its BuildRequires to represent the status quo. Because the API between gbm and mpv does not change, but the one between kernel and gbm (sun4i-drm_dri) possibly does when applying kernel patches, it is probably more appropriate to rebuild mesa with the updated kernel headers. BTW, to get retroarch I would do: dnf install retroarch
-
IMHO, you're using a software stack that's way too outdated. You are missing features and improvements that have already landed. I'm not an expert in analyzing compliance logs, but if you compare your log to the one provided by @robertoj above, you'll see that your kernel at least lacks features that his provides:

Format ioctls:
@robertoj: test VIDIOC_ENUM_FMT/FRAMESIZES/FRAMEINTERVALS: OK
@mrfusion: fail: v4l2-test-formats.cpp(263): fmtdesc.description mismatch: was 'Sunxi Tiled NV12 Format', expected 'Y/CbCr 4:2:0 (32x32 Linear)'
@mrfusion: test VIDIOC_ENUM_FMT/FRAMESIZES/FRAMEINTERVALS: FAIL
@robertoj: test VIDIOC_G_FMT: OK
@robertoj: test VIDIOC_TRY_FMT: OK
@robertoj: test VIDIOC_S_FMT: OK
@mrfusion: fail: v4l2-test-formats.cpp(460): pixelformat 32315453 (ST12) for buftype 1 not reported by ENUM_FMT
@mrfusion: test VIDIOC_G_FMT: FAIL
@mrfusion: fail: v4l2-test-formats.cpp(460): pixelformat 32315453 (ST12) for buftype 1 not reported by ENUM_FMT
@mrfusion: test VIDIOC_TRY_FMT: FAIL
@mrfusion: fail: v4l2-test-formats.cpp(460): pixelformat 32315453 (ST12) for buftype 1 not reported by ENUM_FMT
@mrfusion: test VIDIOC_S_FMT: FAIL
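When comparing such logs, the pass/fail balance can be summarized mechanically. A minimal sketch, with a two-line sample (taken from the quoted logs) standing in for a log captured with something like `v4l2-compliance -d /dev/video0 > compliance.log 2>&1`:

```shell
# Count passing and failing tests in a v4l2-compliance log.
# The here-doc below stands in for a real captured log file.
cat > compliance.log <<'EOF'
test VIDIOC_ENUM_FMT/FRAMESIZES/FRAMEINTERVALS: OK
test VIDIOC_G_FMT: FAIL
EOF
ok=$(grep -c ': OK$' compliance.log)
fail=$(grep -c ': FAIL$' compliance.log)
echo "OK=$ok FAIL=$fail"   # prints: OK=1 FAIL=1
```

Two kernels can then be compared at a glance by diffing the full logs or their summaries.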
-
How does the NanoPC-T4 Armbian system use the onboard PWM fan interface?
usual user replied to laning's topic in Rockchip
The mainline DTB has a proper thermal zone configuration. The kernel can handle its temperature management on its own; there is no need for an error-prone userspace component to be involved. As long as the kernel binary has all the necessary drivers built in, it works out-of-the-box. -
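To illustrate the in-kernel thermal management described in the reply above: with proper thermal-zone nodes in the DTB, the kernel exposes the zones it manages via sysfs, with no userspace daemon involved. A sketch (the SYSFS override exists only so the loop can be exercised off-target):

```shell
# List the thermal zones the kernel manages on its own.
SYSFS="${SYSFS:-/sys/class/thermal}"
found=0
for zone in "$SYSFS"/thermal_zone*; do
    [ -d "$zone" ] || continue
    found=$((found + 1))
    # type = zone name from the DTB, temp = current temperature in m°C
    printf '%s: type=%s temp=%s\n' "${zone##*/}" \
        "$(cat "$zone/type")" "$(cat "$zone/temp")"
done
echo "zones=$found"
```

On a board with a correct DTB this shows at least one zone; the fan trip points are likewise handled by the kernel.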
You want a working GBM as the first step. An application can use the APIs provided by the kernel via GBM. It becomes the master user of drm/KMS, i.e. if e.g. Kodi uses the display subsystem via GBM, it is the only user who has access to it. Other graphics applications will not be able to use the display subsystem at the same time. A desktop environment framework accesses the display subsystem using the same kernel APIs as GBM, and therefore becomes the master of drm/KMS. However, it manages the display subsystem resources so that multiple applications can use them simultaneously. For example, Kodi can be displayed in one window and a web browser in another window at the same time. When it comes to choosing a desktop environment, Xwindow is a bad choice. It was developed for the x86 architecture and therefore cannot deal efficiently with these new requirements due to its design. Its developers also recognized this at the time and therefore developed Wayland from scratch. So when it comes to efficiently using a video pipeline in combination with a display pipeline, a desktop environment with a Wayland backend is the first choice. But the good thing is that nowadays the majority of the support is already available out-of-the-box in current mainline. So in order to support a particular device, it is only necessary that a mainline kernel with the necessary drivers is available; everything else is just generic up-to-date mainline software. For your device you have confirmed that LE works with a mainline kernel, so all the necessary code should be available; it is just not yet integrated out-of-the-box into the mainline kernel.
-
Orange Pi Zero (H2+/H3) TV Out on Mainline, WORKING
usual user replied to Strontium's topic in Allwinner sunxi
https://github.com/robertojguerra/orangepi-zero-full-setup/blob/main/README2.md -
With regard to the hardware decoder, this will not provide any new insights. For the video decoder it is only relevant that the subsequent display pipeline can process the decoder output format, be it GBM, Wayland, Xwindow or /dev/null. See here for comparison a gstreamer video pipeline (gst-play-1.0-pipeline.pdf). It is always identical, regardless of which display pipeline (GstXvImageSink) ultimately comes to fruition. The decoder component (v4l2slh264dec) is backed by the respective hardware and is interchangeable unmodified as long as v4l2-compliance shows sufficient driver conformity.

No, the ultimate goal is to compose a video pipeline that is backed by a hardware decoder and to have a display pipeline that can display the format provided by the video pipeline in a hardware-accelerated manner.

When I see the long dependency lists that are given to install when it comes to building a piece of software, I would guess that it does not exist. But since this is about the kernel package, I don't see the problem here, because it is self-contained and does not require any external dependencies. I see here rather the problem of comfortably using newly built kernels in parallel in Armbian. This is due to the boot method and the way Armbian installs its kernel (a single kernel, not multiple kernels in parallel). But this can be changed, as I have already demonstrated several times in other forum threads.

This works because the subsequent display pipeline (/dev/null) can handle any format. The display driver is only one part of the display pipeline, but probably the component that does not yet provide all the necessary prerequisites. This will certainly be added in LE with their kernel patches so that they can use a hardware-accelerated display pipeline. A display pipeline has no business with video decoding, we're not on x86 architecture; it deals only with already decoded content.
-
Sorry, but I don't quite understand what you're asking for here. When it comes to improving the display subsystem, I would start by rebuilding my kernel package. Since you have confirmed that LibreElec works, my first approach would be to apply all relevant LE patches. But I understand that it is apparently not so easy to implement with your build system.
-
https://docs.mesa3d.org/drivers/lima.html
-
Media support for the rk3399 SOC has been working for a long time with pure mainline code. With a modern software stack and current mainline releases, this even works out-of-the-box. Only the HEVC decoder support in the kernel has not yet landed, but there are WIP patches that make this usable as well. See gst-play-1.0-pipeline.pdf for reference.
-
Hmm, when I take another look at my previously posted ffmpeg.spec addition, I see that I also carry a patchset with a name like that, from a branch of @jernej's github. Furthermore, I see that I also apply other patches from the v4l2-drmprime-n5.1.2 branch. Do you have similar patches in your portfolio?
-
The download link was only included because the file was already stored for another purpose, and it was intended to demonstrate a real-world example. The file could possibly be used for a quick test, but it is not suitable for permanent integration into Armbian as is. Files that should remain available for Armbian in the long term I usually upload to the forum itself. I always have my doubts about binary files, because their validity is not easy to verify, so I think the warning on the download page is just right. Since the file can only be found via the reference link, it is just as trustworthy as if it were deposited directly in the forum, and the user should be aware of what he is getting.
-
If everything is based on mainline code, it is only a matter of reverse engineering the build configuration to replicate it in your own build system. However, you should check if LE is still applying patches to the drm/kms display subsystem in the kernel, as I don't know what the out-of-the-box support is for your device in this regard. This could also be a reason why the VPU cannot be used. And that's not something where userspace code is involved.
-
Wow, great way to get access to all user data. Hope there were no confidential data or passwords on it, because their integrity is no longer guaranteed. And if the NVMe is already "read only", the user wouldn't even be able to remove them beforehand.
-
I don't know the inner structure of ffmpeg, and I don't know which actions trigger the corresponding compiler switches, but since the request-api is only an extension of v4l2, I would guess that the base v4l2 support is also needed. Does this system run on current mainline software, or does it use legacy code?
-
Maybe the firmware gets uploaded to LVFS; then it can be installed in different ways out-of-the-box.
-
I don't know much about the ffmpeg framework and even less about the interpretation of its logs, but IMHO the display subsystem ([vo/gpu/opengl/kms] Selected mode: PAL (720x576@50.00Hz)) does not support the necessary prerequisites to serve the hardware decoder output. That is the reason why the VPU is not selected. I don't know if this is a hardware limitation or just a lack of display driver support for your device. As @jernej has confirmed, drm_prime is necessary for VPU usage, and it is not mentioned in your logs. Xwindow is also only based on drm/kms and therefore cannot bring any improvement. Even if the display subsystem provides all the prerequisites, it is not a good choice, as it does not work efficiently with VPUs by design. Here, a desktop environment based on a Wayland backend delivers better results.

Neither, I haven't even built my own system. Debian-based systems are IMHO way too stable, i.e. outdated. I prefer a system that follows mainline releases more closely. I'm using an RPM-based system, and I just need to recompile the ffmpeg source package supplemented with my extensions. All I have to do is install the ffmpeg-5.1.2-3.fc37.src.rpm, copy the patches into the ~/rpmbuild/SOURCES directory and make these changes to the ~/rpmbuild/SPECS/ffmpeg.spec file: After that, a "dnf builddep ~/rpmbuild/SPECS/ffmpeg.spec" installs all the necessary dependencies, and I can leave everything else to my build system with "rpmbuild -ba ~/rpmbuild/SPECS/ffmpeg.spec". Because this procedure can be applied to any RPM package, I can rebuild any package in my distribution to make changes if necessary. Since only a small proportion of packages are device-dependent, it is absolutely not necessary to compile a distribution completely by myself. I have to build only the kernel and firmware (uboot-tools) packages. In addition, there are mesa, gstreamer1 and ffmpeg to keep up to date, but these are not really device-specific.
I don't have to build mesa and gstreamer1 myself anymore because current releases already contain everything I need. So I have a generic system that works for all devices of an architecture, as long as there is sufficient mainline support in kernel and firmware. I use this system for my CuBox, my HummingBoard, my NanoPC-T4, my Odroid-N2+ and my Odroid-M1. All devices of the same architecture can use the same SD card. And since I can build firmware for Odroid-C4, Odroid-HC4 and ROCK PI 4B, whose functionality is confirmed, my SD card will also work there in the same way. In order to support other devices, only a suitable firmware and a corresponding DTB are necessary. As soon as rkvdec2 driver support for the rk35xx SOCs becomes available in the kernel, my Odroid-M1 will also work with it without further intervention, since all userspace programs are already available and already use the other decoder IPs today. They just need to select a different /dev/videoX. This is the outcome of my build: it is a generic build that serves all devices of the architecture for which it was built.
-
I don't even know what "vainfo" is. I'm running a system that is built from pure mainline, not even specially designed for my devices, but it uses to full capacity what mainline provides. I just have to patch in the request-api, as that is missing in mainline. In a terminal run: mpv -v --hwdec=drm --hwdec-codecs=all youtube720p.mp4 Quit playback and provide the log to see what is going on.
-
I barely use Armbian, so I can't really say anything about the nature of its implementation. It's probably up to the user's decision. In terms of file name, it could follow a similar scheme to the one I use. Since the use of NOR flash has an influence on the usable eMMC speed, Armbian probably also offers variants with and without its support. I also use the DTB filename to express which features I've added. E.g. one of my DTBs is meson-g12b-odroid-n2-plus-con1-opp-uart.dtb, with multiple overlays applied. This way I just need to replace the original DTB with the desired one; this works in any environment regardless of an overlay framework.
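A sketch of how such a feature-named DTB could be produced ahead of time with fdtoverlay from the dtc package; the overlay file names (con1.dtbo, opp.dtbo, uart.dtbo) are hypothetical placeholders, only the output name follows the scheme described above:

```shell
# Bake overlays into a base DTB and encode the applied features in the
# output file name, so no runtime overlay framework is needed.
base=meson-g12b-odroid-n2-plus.dtb
out=meson-g12b-odroid-n2-plus-con1-opp-uart.dtb
if command -v fdtoverlay >/dev/null 2>&1 && [ -f "$base" ]; then
    fdtoverlay -i "$base" -o "$out" con1.dtbo opp.dtbo uart.dtbo
    result="built $out"
else
    result="fdtoverlay or $base not available, skipping"
fi
echo "$result"
```

The resulting file simply replaces the original DTB in the boot partition.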
-
ffmpeg -hide_banner -decoders | grep request returns ... NOTHING for me also, but hwdec is working as expected. So your statement seems not to be correct. From "man mpv":

NOTE: Even if enabled, hardware decoding is still only white-listed for some codecs. See --hwdec-codecs to enable hardware decoding in more cases.

Which method to choose? If you only want to enable hardware decoding at runtime, don't set the parameter, or put hwdec=no into your mpv.conf (relevant on distros which force-enable it by default, such as on Ubuntu). Use the Ctrl+h default binding to enable it at runtime. If you're not sure, but want hardware decoding always enabled by default, put hwdec=auto-safe into your mpv.conf, and acknowledge that this may cause problems. If you want to test available hardware decoding methods, pass --hwdec=auto --hwdec-codecs=all and look at the terminal output.

As the hacked-in request-api codecs are not white-listed in mpv proper by default ==> so "--hwdec=auto --hwdec-codecs=all"
-
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/devicetree/bindings/serial/amlogic,meson-uart.yaml
-
The patch set should be applied to the source for which it was designed; even if it applies cleanly to another source, there is no guarantee that a change in a non-conflicting place will not affect functionality. The specific ffmpeg version is not decisive here, since the request-api has been patchable in for quite some time; e.g. I am currently still using a 5.x for other reasons. So if you've cloned the ffmpeg git repository, it's just a "git checkout <commit>" away to have a corresponding version. I grabbed the patches from @jernej's Github at the time, as his branch names indicate which version they belong to. Of course, the 5.x ones are long gone.
-
@Kwiboo did a fantastic job. Mainline u-boot is becoming usable (console.log) for me. It enumerates my different systems on different storage media and can natively load the boot components from them. The use of compressed kernels does not work yet; it bails out like this: The u-boot control still has to be done over the serial console, as HDMI support is still missing, but it is still an early stage and the development is lively. In the meantime, I have solved the problem with loading compressed kernels for me. Also, I can now use a USB keyboard (usbkbd) for stdin. Both require a correction of the mainline code, but since unfortunately too many in-flight patches have not yet landed, it is still too early to go into detail here. If you already want to play with the functionality that the mainline u-boot will provide out-of-the-box in the future, I have uploaded my firmware build here. The firmware can be put in place by: dd bs=512 seek=64 conv=notrunc,fsync if=u-boot-rockchip.bin of=/dev/${entire-device-to-be-used} into the firmware area of an SD card or an eMMC. If it is started during power-up by holding the SPI recovery button (RCY), it will scan all connected storage media for a usable bootflow. Supported boot methods are: extlinux (extlinux.conf), efiboot and legacy-boot (boot.scr). As soon as video support is available, "U-Boot Standard Boot" will be fully usable, but VOP2 support is still open.
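A safe dry run of the dd invocation above, against a scratch image file instead of a real device, showing that sector 64 puts the firmware at byte offset 64*512 = 32768 (the "UBOOT" payload here is a stand-in for the real u-boot-rockchip.bin):

```shell
# Create a dummy 64 KiB "device" image and a dummy firmware payload.
img=$(mktemp)
payload=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=128 2>/dev/null
printf 'UBOOT' > "$payload"
# Same invocation as for the real install, minus fsync (plain file).
dd bs=512 seek=64 conv=notrunc if="$payload" of="$img" 2>/dev/null
# Read back 5 bytes at offset 64*512 = 32768 to verify placement.
check=$(dd if="$img" bs=1 skip=32768 count=5 2>/dev/null)
echo "$check"   # prints: UBOOT
```

For the real install, replace the image file with /dev/${entire-device-to-be-used} and keep conv=notrunc,fsync.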