Walter Zambotti Posted February 2

If anyone is still running an older kernel version with working USB-C, it would be helpful if I could get a copy of your rk3588s-odroid-m2.dtb.
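For anyone willing to share: on an Armbian image the compiled device tree normally lives under /boot/dtb/rockchip/ (path assumed; adjust to your install), so something like this should grab it:

cp /boot/dtb/rockchip/rk3588s-odroid-m2.dtb ~/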
Walter Zambotti Posted Wednesday at 03:10 PM

I managed to get some USB-C functionality back by adding a device tree overlay.

cat odroid-usbc-fix.dts

/dts-v1/;
/plugin/;

/* 1. Force the USB controller into Host mode */
&{/usb@fc000000} {
	dr_mode = "host";
	status = "okay";
	/delete-property/ usb-role-switch;
	/delete-node/ port;
};

/* 2. Target the specific USB3 Type-C regulator by its absolute path */
&{/regulator-5v0-vcc-usb3-typec} {
	regulator-always-on;
	regulator-boot-on;
	status = "okay";
};

/* 3. Target the USB3 Host regulator by its absolute path */
&{/regulator-5v0-vcc-usb3-host} {
	regulator-always-on;
	regulator-boot-on;
	status = "okay";
};

However, this disables OTG, and devices with strict switching timing requirements fail to switch. My QHY CCD camera is correctly detected as a WestBridge device, and udev/fxload starts loading the firmware, but power to the device is toggled and the firmware load fails as a result. If I plug the QHY CCD into the USB-C port via a powered USB-C hub, everything works fine, because the hub provides constant power. So there appear to be significant differences between the Radxa and Odroid USB-C implementations, and sorting them out will need someone better at kernel work than me.
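In case it helps anyone reproduce this: on Armbian, a user overlay like the one above can normally be compiled and enabled in one step (a sketch, assuming an Armbian image; on other setups, compile with dtc and reference the resulting .dtbo from your boot configuration instead):

# Compiles the overlay and adds it as a user_overlays entry in /boot/armbianEnv.txt
sudo armbian-add-overlay odroid-usbc-fix.dts
sudo reboot

# Manual alternative using the device tree compiler:
dtc -@ -I dts -O dtb -o odroid-usbc-fix.dtbo odroid-usbc-fix.dts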
usual user Posted 1 hour ago

Lately, I've been playing around a bit with computer vision detection. I managed to patch together a PoC script and ran some tests with it. The results are quite promising.

The frame rate is based purely on the round-trip time of my test script, so it only roughly reflects the inference time. The throughput includes all the additional overhead, but it is informative enough for a relative comparison.

Inference on a single CPU core delivers a throughput of about 4 images per second.

Inference on a single NPU core delivers a throughput of about 17 images per second.

Inference on eight CPU cores delivers a throughput of about 21 images per second. But all eight cores run at over 80% load while doing so, and after a short time the fan kicks in. The headroom is also quite limited, e.g., for performing other tasks concurrently. Running several similar inference tasks concurrently immediately results in a proportional drop in frame rate per task.

When six similar inference tasks are executed simultaneously with NPU delegates, they are distributed across the three available NPU cores, and the SoC utilization is moderate enough that the fan doesn't even turn on. The throughput does not degrade, and the CPU cores remain available for other tasks as well.

For my tests, I used a random video clip. For the inference, I used a model pre-trained on the COCO dataset. At 4.1 MB and with 80 object classes, it delivers surprisingly good results.

Using the NPU hardware not only reduces the load on the CPU cores but also accelerates the processing itself. But the best part is that only current mainline code is required: no dependencies on proprietary implementations or outdated software stacks. It works out of the box; you just need to know how to use it.
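For anyone who wants to try this, here is a minimal sketch of what such a round-trip test loop can look like with TensorFlow Lite and an external NPU delegate. The delegate path and model file name are assumptions (with Mesa's Teflon delegate for the mainline NPU driver, the library is typically libteflon.so; any small COCO-pretrained .tflite detection model will do):

import time
import numpy as np
import tflite_runtime.interpreter as tflite

# Load the NPU delegate. The path is an assumption -- adjust it to
# wherever your distro installs the TFLite external delegate.
delegate = tflite.load_delegate("/usr/lib/libteflon.so")

# COCO-pretrained detection model; the file name is a placeholder.
interpreter = tflite.Interpreter(
    model_path="coco_detect.tflite",
    experimental_delegates=[delegate],
)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Dummy frame; in the real script each frame comes from the video clip.
_, h, w, _ = inp["shape"]
frame = np.zeros((1, h, w, 3), dtype=inp["dtype"])

# Round-trip timing, as in the throughput figures above.
n = 100
t0 = time.monotonic()
for _ in range(n):
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()
    boxes = interpreter.get_tensor(out["index"])
print(f"{n / (time.monotonic() - t0):.1f} images/s")

Dropping the experimental_delegates argument (and optionally passing num_threads to the Interpreter) gives the CPU-only numbers for comparison.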