usual user

Members
  • Posts

    533
  • Joined

  • Last visited

1 Follower

Recent Profile Visitors

The recent visitors block is disabled and is not being shown to other users.

  1. According to the circuit diagram, the reset button is connected to the hardware reset lines, so nothing can prevent it from forcing the reset state.
  2. At least that explains the result of the 'nvme scan' command. Now it remains to find out why this is the case. Since improvements to PCIe support are still pending even in the mainline kernel, the question arises whether all of this has already been migrated into the firmware.
  3. u-boot-rockchip-spi.bin is a firmware image. Out of pure curiosity: what is the result of 'pci enum' on the firmware console?
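For context: in U-Boot, 'pci enum' probes and enumerates the PCIe bus, and a plain 'pci' afterwards lists what was found. A session on the firmware console would look roughly like this (the actual output depends on the board and the build):

```text
=> pci enum
=> pci
```

If 'pci' lists no devices after enumeration, the NVMe drive is not visible to the firmware at all.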
  4. IMHO, OP wants to boot the OS from NVMe while the firmware is stored in the SPI flash, but he is using a firmware in which the necessary support is not enabled.
  5. I usually use Falkon in a Plasma environment with Wayland backend.
  6. I haven't looked at this use case for a very long time, and I can no longer remember since when it has worked out-of-the-box for me. Since decoder support has been part of the GStreamer framework for a very long time, hardware-supported video decoding works for all browsers that use this framework with the standard packages of the distribution of my choice. When v4l2-request support was still implemented with the out-of-tree patches using the stateful method, it also worked with Firefox out-of-the-box; only a correspondingly patched FFmpeg framework was required. This is likely no longer going to work with the current patches for the FFmpeg framework and requires an additional implementation in Firefox. I suspect, however, that this will only happen after the official inclusion of v4l2-request support in the FFmpeg framework, as is also the case with MPV. To what extent patches for Firefox are already available is unknown to me. For the distribution of my choice, I have in any case rebuilt the FFmpeg and MPV packages with the corresponding patches. I have to confess that I usually use Firefox, and video decoding works flawlessly for my use cases. However, I cannot say whether it is actually hardware-accelerated, because the SBCs I use with a graphical desktop are powerful enough to cope even with software decoding only. I'm just taking the lazy way here and waiting for it to land in mainline. For SBCs that need hardware acceleration, I simply use a browser that uses the GStreamer framework.
  7. From an OS's point of view, only a Mesa build with Teflon and Rocket driver support is needed (available since Mesa 25.3). Inferences can then be executed with ai-edge-litert. I have been experimenting with this for some time on all my devices equipped with a Rockchip RK3588/RK3588S.
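A minimal sketch of such an inference with ai-edge-litert could look as follows. The model path, the input image, and the delegate library name (libteflon.so, as installed by a Teflon-enabled Mesa build) are assumptions, and the delegate will of course only load on a machine with a supported NPU:

```python
from typing import List


def top_k(scores: List[float], k: int) -> List[int]:
    """Indices of the k highest scores (plain-Python post-processing)."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]


def classify(model_path: str, image) -> List[int]:
    """Run one inference via LiteRT, offloaded to the NPU through Teflon.

    The import is done lazily so the function can be defined on machines
    without the package or the NPU; 'libteflon.so' is the external delegate
    produced by a Mesa build with Teflon enabled (assumed name/location).
    """
    from ai_edge_litert.interpreter import Interpreter, load_delegate

    interpreter = Interpreter(
        model_path=model_path,
        experimental_delegates=[load_delegate("libteflon.so")],
    )
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    # 'image' must already match inp["shape"] and inp["dtype"]
    interpreter.set_tensor(inp["index"], image)
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0].tolist()
    return top_k(scores, 5)


# usage on a supported board (hypothetical file names):
# print(classify("model.tflite", preprocessed_image))
```

The application code itself stays NPU-agnostic; only the delegate library and the model.tflite file are hardware-specific.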
  8. I am currently at 7.0.0-rc1. I can upload my jump-start image so you can check whether my kernel build works with your device. If you like what you see, installing the kernel package alongside your existing system is only a 'prepare-jump-start ${target-mount-point}' away. I know about it, but since it is just another non-mainline solution with yet another dependency mess, I am not particularly interested.
  9. Since the hardware support for Rockchip SoCs in the mainline kernel is generally already very mature and its further development is actively pursued, I only have SBCs with integrated NPUs that are based on them. Among them are the ODROID-M2, NanoPC-T6, and ROCK-5-ITX. But since the NPU is an integral part of the SoC, the board manufacturer and the design of the SBC are not necessarily of importance. As far as I understand, edge-class NPUs are best suited for computer vision tasks. I am therefore engaged in object detection and super-resolution.
  10. This is what my software stack looks like: My kernel is built as a generic one, hence my OS works on any device equipped with a VeriSilicon VIPNano, a Rockchip RK3588, or an Arm Ethos-U65/U85 NPU. The application can be written NPU-agnostically, as long as a model.tflite file suitable for the NPU is used.
  11. The patches that were available out-of-tree for a long time were a kind of hack that used the DRM subsystem for decoding. For inclusion in mainline, they were further developed into a more correct request method. It is hwdec=v4l2request-copy, in fact, because the stateless decoder is an m2m device and scan-out is still carried out via the DRM subsystem. However, the copy is cheap because it is executed via dmabuf as a zero copy.
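On a build whose mpv/FFmpeg carry these patches, selecting the decoder is just a config entry. The option name is taken from the post itself; treat this as a sketch, since the patches are still in flight and the exact spelling may change before mainline inclusion:

```ini
# ~/.config/mpv/mpv.conf (assumes a v4l2-request-patched FFmpeg/mpv build)
hwdec=v4l2request-copy
```

Whether a given build offers the method at all can be checked with 'mpv --hwdec=help'.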
  12. mpv_--hwdec=help.log is what I get, and everything works as expected, but I am on current mainline releases with in-flight patches for mpv and FFmpeg on top. GStreamer-framework-based applications work out-of-the-box. The log entries that contain the 'request' component are the ones that matter. But you're right, it can still take a while before current mainline releases are declared stable by some distributions and adopted. That is not the fault of mainline development, however, which continues to progress and no longer takes outdated versions into account.
  13. Lately, I've been playing around a bit with computer vision detection. I managed to patch together a PoC script with which I conducted some tests, and the results are quite promising. The frame rate is based solely on the round-trip time of my test script, so it only roughly reflects the inference time; the throughput includes all additional overhead but is sufficiently informative for a relative comparison. Inference on a single CPU core delivers an image throughput of about 4 images: Inference on a single NPU core delivers an image throughput of about 17 images: Inference on eight CPU cores delivers an image throughput of about 21 images. But all eight cores run at over 80% load during this, and after a short time the fan kicks in. The headroom is also quite limited, e.g., for performing other tasks concurrently. Running several similar inference tasks concurrently immediately results in a proportional drop in frame rate per task. When six similar inference tasks are executed simultaneously with NPU delegates, they are distributed across the three available NPU cores, and the SoC utilization is moderate enough that the fan doesn't even turn on. The throughput does not degrade, and the CPU cores remain available for other tasks as well: For my tests, I used a random video clip. For the inference, I used a model pre-trained with the COCO dataset. With its 4.1 MB memory size and its 80 object classes, it delivers surprisingly good results. Using the NPU hardware not only reduces the load on the CPU cores but also provides additional acceleration of processing. But the best part is that only current mainline code is required. There are no dependencies on proprietary implementations or outdated software stacks. It just works out-of-the-box; you just need to know how to use it.
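The round-trip measurement described above can be sketched in a few lines of plain Python. The dummy sleep stands in for a real inference call here, so the printed number is illustrative only:

```python
import time


def throughput(infer, frames, warmup=3):
    """Images per second over the full round trip, including all
    per-frame overhead, as in the PoC described above."""
    for frame in frames[:warmup]:        # let caches/delegates settle
        infer(frame)
    start = time.perf_counter()
    for frame in frames:
        infer(frame)
    return len(frames) / (time.perf_counter() - start)


# dummy "inference" that takes at least 1 ms per frame
fps = throughput(lambda frame: time.sleep(0.001), list(range(50)))
print(f"{fps:.1f} images/s")
```

Because the timer wraps the whole loop, pre- and post-processing of each frame count toward the result, which is exactly what makes the numbers comparable across CPU and NPU runs.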
  14. Armbian has been carrying it for quite some time, so there is no excuse not to just use it already.