Everything posted by usual user

  1. I asked you, as a first step, to upload the rk3399-rock-4c-plus.dtb that is currently provided by the Armbian OS, so that I can check whether the thermal zone is really not wired up. There's no point in putting effort into fixing something that isn't broken.
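
    For reference, a quick way to check a DTB for a thermal zone (a sketch, assuming the dtc device-tree compiler is installed; the DTB path is an assumption and may differ in your install):

        # decompile the DTB back to source and look for a thermal-zones node
        dtc -I dtb -O dts -o /tmp/rock-4c-plus.dts rk3399-rock-4c-plus.dtb
        grep -A 10 'thermal-zones' /tmp/rock-4c-plus.dts

    If the node is present and references the SOC's tsadc sensor, the zone is wired up and the problem lies elsewhere.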
  2. Exactly, either there is no thermal zone wired up in the DT, or the kernel build configuration lacks the necessary drivers. To rule out the DTB, you can upload the one used in Armbian and I can investigate accordingly.
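
    The driver side can be checked like this (assuming the kernel was built with /proc/config.gz support, which not every build enables):

        # both the thermal core and the Rockchip sensor driver must be enabled
        zcat /proc/config.gz | grep -E 'CONFIG_THERMAL=|CONFIG_ROCKCHIP_THERMAL'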
  3. What does tmon tell you about your thermal subsystem?
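
    If tmon is not installed, the kernel's standard thermal sysfs interface gives the same answer:

        cat /sys/class/thermal/thermal_zone*/type    # one line per registered zone
        cat /sys/class/thermal/thermal_zone*/temp    # in millidegrees Celsius

    No output at all means no thermal zone is registered.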
  4. I didn't claim to use Armbian. I'm using an OS that is compiled for an entire architecture (aarch64) and not for a specific single device. It works for any SOC, provided that a kernel can be built that uses mainline APIs. I can use the same storage device that contains the OS with all my devices (NanoPC-T4, ODROID-N2+, ODROID-M1, HoneyComb, ...). Admittedly, I keep a few copies of the media, as flipping a single one back and forth between multiple devices in use at the same time would be very annoying and impractical. The only requirement for this to be possible is the presence of a firmware that can boot the OS.

    And here's where my dilemma begins. The OS is 100% FREE & OPEN SOURCE, i.e. it does not provide firmware based on binary blobs. Firmware with mainline U-Boot as payload fulfills all the necessary requirements to boot my OS from any storage medium, and I have learned over time how to build it on my own. Armbian has always been a great help here, whether with knowledge from their forum or with legacy firmware builds as a workaround for my device bring-up until I was able to switch to mainline firmware. And since I've shared my firmware builds here and elsewhere, I also have confirmation that the builds work for devices I don't own (ODROID-C4, ODROID-HC4, ODROID-N2, Radxa ROCK Pi 4B, Radxa ROCK Pi 4C, ...).

    I would suggest you choose the OS you are most familiar with, or the one that best suits your needs, as long as it uses recent mainline software. IMHO, Debian-based distributions are way too stable ... outdated. Or you can find someone to backport all mainline improvements and deploy them in the appropriate distribution.
  5. It's a good thing that my distribution doesn't know about this rumor, otherwise it would probably have to stop working immediately. There are several Wayland implementations, e.g. the Plasma desktop or GNOME with the Wayland backend. User space is not really device-specific, and therefore no specific solution for a specific device is necessary. As long as there is sufficient mainline kernel support for the components of a device, this even works out-of-the-box, provided you choose suitable software. The operating system I'm using works unchanged on all my devices, in line with their respective kernel support. The only area that is not yet satisfactory is hardware-accelerated video decoding, as development for mainline support is still in full swing. But with a suitable SOC selection, there are solutions that leave almost nothing to be desired for comprehensive desktop operation. For example, my rk3399-based device is feature-complete for generic desktop usage. My rk3568-based device only lacks support for the rkvdec2 video decoder IP; the hantro support is already available. The situation is worst for the S922X, because there is currently no working video hardware acceleration in mainline. But with its CPU processing power, video playback at moderate resolutions is still usable.
  6. To use mainline kernel hardware video acceleration, an application first of all needs basic support for V4L2-M2M hardware-accelerated video decode. Firefox, for example, has just landed initial support for the H.264 decoder. The next showstopper would be the lack of V4L2-M2M support in the ffmpeg framework; there are some hack patches in the wild, but no out-of-the-box solution. The next showstopper would be the lack of kernel support for the SOC's video decoder IP. The lack of support in the application is a general problem for all devices that provide their decoders via V4L2-M2M, and the lack of support in the ffmpeg framework is a problem for all applications based on it. Gstreamer-based applications, on the other hand, offer out-of-the-box solutions, as the necessary support is already integrated into the mainline gstreamer framework (see the sketch below). When it comes to choosing a desktop environment, X11 is a bad choice. It was developed for the x86 architecture and therefore, by design, cannot deal efficiently with V4L2-M2M requirements. Its developers recognized this at the time and therefore developed Wayland from scratch. So when it comes to efficiently using a video pipeline in combination with a display pipeline, a desktop environment with a Wayland backend is the first choice.
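
    For illustration, the gstreamer side can be checked like this (a sketch; element availability depends on the gstreamer version and the kernel drivers, and the file name is a placeholder):

        # list the V4L2 decoder elements gstreamer has auto-registered
        gst-inspect-1.0 | grep v4l2
        # decode through the stateless H.264 decoder; this fails if the kernel driver is missing
        gst-launch-1.0 filesrc location=test.mp4 ! qtdemux ! h264parse ! v4l2slh264dec ! fakesink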
  7. (Re: Odroid M1) Since I haven't gotten any feedback on my build so far, I don't see much sense in it. In my experience, Armbian users tend to stick to legacy methods, and my extensions only affect the upcoming U-Boot Standard Boot. In addition, the adoption of reviewed-by patches from the patchwork into the mainline tree seems to have stalled, so it may take some time until the general support is available out-of-the-box. And these are valuable improvements to the Rockchip platform, which also benefit ODROID-M1 support. Since Armbian already has a working boot method, I don't see any reason for an underpaid Armbian developer to invest his valuable time in an unfinished solution that anyone can have for free when everything finally lands. For me, it's a different story, because my preferred distro has different prerequisites. I had made my build available so that others could save themselves the trouble of self-building, but there doesn't seem to be any particular interest.
  8. It's not a missing package; it is the lack of basic support for V4L2-M2M hardware-accelerated video decode in the application in the first place. The next showstopper would be the lack of V4L2-M2M support in the ffmpeg framework; there are some hack patches in the wild, but no out-of-the-box solution. The next showstopper would be the lack of kernel support for the SOC's video decoder IP. The lack of support in the application is a general problem for all devices that provide their decoders via V4L2-M2M, and the lack of support in the ffmpeg framework is a problem for all applications based on it. Gstreamer-based applications, on the other hand, offer out-of-the-box solutions, as the necessary support is already integrated into the mainline gstreamer framework.
  9. dpkg -l | grep libgbm
     bash: dpkg: command not found...

    I'm currently running mesa 23.0.3, and since libgbm is an integral part of mesa, the version is of course identical. However, the current version is not really important, as the necessary API support is already very mature. It is only important that it is built with the current headers of its BuildRequires to represent the status quo. Because the API between GBM and mpv does not change, but the one between the kernel and GBM (sun4i-drm_dri) possibly does when kernel patches are applied, it is probably more appropriate to rebuild mesa with the updated kernel headers. BTW, to get retroarch I would do: dnf install retroarch
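
    On an RPM-based system, the equivalent query would be (the library path is an assumption and may differ per architecture):

        rpm -qa | grep -i mesa             # installed mesa packages and versions
        rpm -qf /usr/lib64/libgbm.so.1     # which package owns libgbm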
  10. IMHO, you're using a software stack that's way too outdated. You are missing features and improvements that have already landed. I'm not an expert in analyzing compliance logs, but if you compare your log to the one provided by @robertoj above, you'll see that your kernel at least lacks features that his provides:

        Format ioctls:
        @robertoj: test VIDIOC_ENUM_FMT/FRAMESIZES/FRAMEINTERVALS: OK
        @mrfusion: fail: v4l2-test-formats.cpp(263): fmtdesc.description mismatch: was 'Sunxi Tiled NV12 Format', expected 'Y/CbCr 4:2:0 (32x32 Linear)'
        @mrfusion: test VIDIOC_ENUM_FMT/FRAMESIZES/FRAMEINTERVALS: FAIL
        @robertoj: test VIDIOC_G_FMT: OK
        @robertoj: test VIDIOC_TRY_FMT: OK
        @robertoj: test VIDIOC_S_FMT: OK
        @mrfusion: fail: v4l2-test-formats.cpp(460): pixelformat 32315453 (ST12) for buftype 1 not reported by ENUM_FMT
        @mrfusion: test VIDIOC_G_FMT: FAIL
        @mrfusion: fail: v4l2-test-formats.cpp(460): pixelformat 32315453 (ST12) for buftype 1 not reported by ENUM_FMT
        @mrfusion: test VIDIOC_TRY_FMT: FAIL
        @mrfusion: fail: v4l2-test-formats.cpp(460): pixelformat 32315453 (ST12) for buftype 1 not reported by ENUM_FMT
        @mrfusion: test VIDIOC_S_FMT: FAIL
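
    For reference, such a log is produced with v4l2-compliance from v4l-utils (the device node below is an assumption; pick the decoder node of your board):

        v4l2-compliance -d /dev/video0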
  11. The mainline DTB has a proper thermal zone configuration. The kernel can handle temperature management on its own; there is no need for an error-prone userspace component to be involved. As long as the kernel binary has all the necessary drivers built in, it works out-of-the-box.
  12. You want a working GBM as the first step. An application can use the APIs provided by the kernel via GBM. It then becomes the master user of DRM/KMS, i.e. if e.g. Kodi uses the display subsystem via GBM, it is the only user who has access to it; other graphics applications will not be able to use the display subsystem at the same time. A desktop environment framework accesses the display subsystem using the same kernel APIs as GBM, and therefore becomes the DRM/KMS master itself. However, it manages the display subsystem resources so that multiple applications can use them simultaneously. For example, Kodi can be displayed in one window and a web browser in another window at the same time. When it comes to choosing a desktop environment, X11 is a bad choice. It was developed for the x86 architecture and therefore, by design, cannot deal efficiently with these new requirements. Its developers recognized this at the time and therefore developed Wayland from scratch. So when it comes to efficiently using a video pipeline in combination with a display pipeline, a desktop environment with a Wayland backend is the first choice. But the good thing is that nowadays the majority of the support is already available out-of-the-box in current mainline. So in order to support a particular device, it is only necessary that a mainline kernel with the necessary drivers is available; everything else is just generic, up-to-date mainline software. For your device you have confirmed that LE works with a mainline kernel, so all the necessary code should be available; it is just not yet integrated out-of-the-box into the mainline kernel.
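
    A quick way to observe the single-master behavior from a console (a sketch; modetest ships with libdrm-tests, and the display-manager service name as well as the connector ID and mode are placeholders):

        modetest -c                          # list connectors and modes; works anytime
        sudo systemctl stop display-manager  # free the DRM/KMS master
        modetest -s 32:1920x1080             # hypothetical IDs: set a mode and show a test pattern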
  13. https://github.com/robertojguerra/orangepi-zero-full-setup/blob/main/README2.md
  14. With regard to the hardware decoder, this will not provide any new insights. For the video decoder it is only relevant that the subsequent display pipeline can process the decoder output format, be it GBM, Wayland, X11 or /dev/null. See for comparison a gstreamer video pipeline (gst-play-1.0-pipeline.pdf): it is always identical, regardless of which display pipeline (GstXvImageSink) ultimately comes into play. The decoder component (v4l2slh264dec) is backed by the respective hardware and is interchangeable without modification, as long as v4l2-compliance shows sufficient driver conformity.

    No, the ultimate goal is to compose a video pipeline that is backed by a hardware decoder, and to have a display pipeline that can display the format provided by the video pipeline in a hardware-accelerated manner.

    Judging by the long dependency lists to be installed whenever some piece of software has to be built, I would guess that such a solution does not exist. But since this is about the kernel package, I don't see the problem here, because it is self-contained and does not require any external dependencies. I rather see the problem in comfortably using newly built kernels in parallel in Armbian. This is due to the boot method and the way Armbian installs its kernel (a single kernel, not multiple kernels in parallel). But this can be changed, as I have already demonstrated several times in other forum threads.

    This works because the subsequent display pipeline (/dev/null) can handle any format (see the sketch below). The display driver is only one part of the display pipeline, but probably the component that does not yet provide all the necessary prerequisites. This will certainly be added in LE with their kernel patches, so that they can use a hardware-accelerated display pipeline. A display pipeline has no business doing video decoding (we're not on the x86 architecture); it deals only with already decoded content.
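
    To verify the decoder independently of any display pipeline, the pipeline can be terminated in a fakesink, the gstreamer equivalent of /dev/null (the file name is a placeholder):

        # decoding still runs on the VPU; the output is discarded, so no display support is needed
        gst-play-1.0 --videosink=fakesink test.mp4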
  15. Sorry, but I don't quite understand what you're asking for here. When it comes to improving the display subsystem, I would start by rebuilding my kernel package. Since you have confirmed that LibreELEC works, my first approach would be to apply all relevant LE patches. But I understand that this is apparently not so easy to implement with your build system.
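
    A rough outline of that approach (a sketch; the patch directory layout inside the LibreELEC tree is an assumption and should be checked against the current repo):

        git clone https://github.com/LibreELEC/LibreELEC.tv.git
        # kernel patches are kept per platform, e.g. projects/<platform>/patches/linux/
        ls LibreELEC.tv/projects/*/patches/linux/
        # apply the relevant ones on top of the kernel source before building
        cd linux && for p in /path/to/patches/*.patch; do patch -p1 < "$p"; done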
  16. https://docs.mesa3d.org/drivers/lima.html
  17. Media support for the rk3399 SOC has been working for a long time with pure mainline code. With a modern software stack and current mainline releases, this even works out-of-the-box. Only the HEVC decoder support in the kernel has not landed yet, but there are WIP patches that make this usable as well. See gst-play-1.0-pipeline.pdf for reference.
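
    What the running kernel already exposes can be listed with v4l-utils (output naturally varies per kernel build):

        v4l2-ctl --list-devices    # on rk3399 this should show e.g. rkvdec and hantro nodes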
  18. Hmm, when I take another look at my previously posted ffmpeg.spec addition, I see that I also carry a patchset with a name like that, taken from a branch of @jernej's GitHub. Furthermore, I see that I also apply other patches from the v4l2-drmprime-n5.1.2 branch. Do you have similar patches in your portfolio?
  19. The download link was only included because the file was already stored for another purpose and was intended to demonstrate a real-world example. The file could possibly be used for a quick test, but is not suitable for permanent integration into Armbian as is. Files that should remain available for Armbian in the long term I usually upload in the forum. I always have my doubts about binary files, because their validity is not easy to verify, so I think the warning on the download page is just right. Since the file can only be found via the reference link, it is just as trustworthy as if it were deposited directly in the forum, and the user should be aware of what he is getting.
  20. If everything is based on mainline code, it is only a matter of reverse engineering the build configuration to replicate it in your own build system. However, you should check if LE is still applying patches to the drm/kms display subsystem in the kernel, as I don't know what the out-of-the-box support is for your device in this regard. This could also be a reason why the VPU cannot be used. And that's not something where userspace code is involved.
  21. Wow, great way to get access to all user data. I hope there were no confidential data or passwords on it, because their integrity is no longer guaranteed. And if the NVMe is already "read only", the user wouldn't even be able to remove them beforehand.
  22. I don't know the inner structure of ffmpeg and I don't know what actions trigger corresponding compiler switches, but since the request-api is only an extension in v4l2, I would guess that the base v4l2 support is also needed. Does this system run on current mainline software or does it use legacy code?
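
    Whether a given ffmpeg build contains the base V4L2 pieces can be checked without reading its source (standard ffmpeg options):

        ffmpeg -hide_banner -buildconf | grep -i v4l2    # configure switches used for the build
        ffmpeg -hide_banner -decoders | grep v4l2m2m     # stateful V4L2-M2M decoders, if any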
  23. Maybe the firmware will get uploaded to LVFS; then it can be installed in different ways out-of-the-box.
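
    Installation from LVFS then boils down to the standard fwupd workflow (whether a device is covered depends on the vendor actually publishing there):

        fwupdmgr refresh        # fetch current metadata from LVFS
        fwupdmgr get-updates    # list applicable firmware updates
        fwupdmgr update         # download and install them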
  24. I don't know much about the ffmpeg framework, and even less about the interpretation of its logs, but IMHO the display subsystem ([vo/gpu/opengl/kms] Selected mode: PAL (720x576@50.00Hz)) does not support the necessary prerequisites to serve the hardware decoder output. That is the reason why the VPU is not selected. I don't know if this is a hardware limitation or just a lack of display driver support for your device. As @jernej has confirmed, drm_prime is necessary for VPU usage, and it is not mentioned in your logs. X11 is also only based on DRM/KMS and therefore cannot bring any improvement. Even if the display subsystem provided all the prerequisites, it would not be a good choice, as it does not work efficiently with VPUs by design. Here, a desktop environment based on a Wayland backend delivers better results.

    Neither, I haven't even built my own system. Debian-based systems are IMHO way too stable, i.e. outdated; I prefer a system that follows mainline releases more closely. I'm using an RPM-based system, and I just need to recompile the ffmpeg source package, supplemented with my extensions. All I have to do is install the ffmpeg-5.1.2-3.fc37.src.rpm, copy the patches into the ~/rpmbuild/SOURCES directory and make these changes to the ~/rpmbuild/SPECS/ffmpeg.spec file (see the sketch below): After that, a "dnf builddep ~/rpmbuild/SPECS/ffmpeg.spec" installs all the necessary dependencies, and I can leave everything else to my build system with "rpmbuild -ba ~/rpmbuild/SPECS/ffmpeg.spec".

    Because this procedure can be applied to any RPM package, I can rebuild any package in my distribution to make changes if necessary. Since only a small proportion of packages are device-dependent, it is absolutely not necessary to compile a distribution completely by myself. I only have to build the kernel and firmware (uboot-tools) packages. In addition, mesa, gstreamer1 and ffmpeg have to be up to date, but these are not really device-specific. I no longer have to build mesa and gstreamer1 myself, because current releases already contain everything I need.

    So I have a generic system that works for all devices of an architecture, as long as there is sufficient mainline support in kernel and firmware. I use this system for my CuBox, my HummingBoard, my NanoPC-T4, my ODROID-N2+ and my ODROID-M1. All devices of the same architecture can use the same SD card. And since I can build firmware for ODROID-C4, ODROID-HC4 and ROCK Pi 4B, whose functionality is confirmed, my SD card will also work there in the same way. In order to support other devices, only a suitable firmware and a corresponding DTB are necessary. As soon as rkvdec2 driver support for an rk35xx SOC becomes available in the kernel, my ODROID-M1 will also work with it without further intervention, since all userspace programs are already available and already use the other decoder IPs today; they just need to select a different /dev/videoX. This is the outcome of my build: a generic build that serves all devices of the architecture for which it was built.
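
    For illustration, the spec additions for carrying such patches typically look like this (a hypothetical sketch, not the actual diff from that post; the patch file name is made up):

        # in ~/rpmbuild/SPECS/ffmpeg.spec, next to the existing Patch lines:
        Patch100: v4l2-drmprime-n5.1.2.patch    # hypothetical file name
        # %prep applies it automatically if the spec uses %autosetup -p1

    followed by the build steps quoted above:

        dnf builddep ~/rpmbuild/SPECS/ffmpeg.spec
        rpmbuild -ba ~/rpmbuild/SPECS/ffmpeg.spec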
  25. I don't even know what "vainfo" is. I'm running a system that is built from pure mainline, not even specially designed for my devices, but it runs to the full capacity of what mainline provides. I just have to patch in the request-api, as that is missing in mainline. In a terminal, run: mpv -v --hwdec=drm --hwdec-codecs=all youtube720p.mp4 Then quit playback and provide the log so we can see what is going on.
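
    To capture that log for posting (plain shell redirection; the file name is arbitrary):

        mpv -v --hwdec=drm --hwdec-codecs=all youtube720p.mp4 2>&1 | tee mpv.log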