Johson Posted 14 hours ago
I've already benchmarked a compute-intensive task (image processing) on the Orange Pi 5 Plus's CPU, and it wrapped up in just 40 minutes, which is impressively fast. Now I'd like to try the NPU. Does anyone know how to tap into it for AI workloads?
robertoj Posted 8 hours ago
I would like to know too. The Orange Pi Zero 3 has a GPU that can be used for SIMD acceleration through the latest OpenGL ES library, but I haven't had time to try that:
https://ai.google.dev/edge/mediapipe/framework/getting_started/gpu_support
Try it on your opi5+, then run the MediaPipe Python examples. After that, look for other neural network tasks that use the same NN engine, i.e. TensorFlow Lite (full TensorFlow and PyTorch will need a different method).
Other examples:
https://forum.armbian.com/topic/28895-efforts-to-develop-firmware-for-h96-max-v56-rk3566-8g64g/#comment-167001
https://opencv.org/blog/working-with-neural-processing-units-npus-using-opencv/
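To make the "run the MediaPipe Python examples" step concrete, here is a minimal sketch of a MediaPipe Tasks image classifier in Python. The model file name (efficientnet_lite0.tflite, from the MediaPipe model zoo) and the test image name are assumptions, not something from the posts above, and as noted further down MediaPipe does not use the NPU delegate, so this runs on the CPU/GPU.

import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

# Load a TFLite classification model (path is an assumption; download one
# from the MediaPipe model zoo, e.g. EfficientNet-Lite0).
base_options = python.BaseOptions(model_asset_path="efficientnet_lite0.tflite")
options = vision.ImageClassifierOptions(base_options=base_options, max_results=3)
classifier = vision.ImageClassifier.create_from_options(options)

# Classify a single image from disk and print the top categories.
image = mp.Image.create_from_file("grace_hopper.bmp")
result = classifier.classify(image)
for category in result.classifications[0].categories:
    print(f"{category.category_name}: {category.score:.3f}")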
usual user Posted 43 minutes ago
FWIW, on my rk3588 devices the NPUs are working with recent mainline releases:

[ 5.967316] [drm] Initialized rocket 0.0.0 for rknn on minor 0
[ 5.975499] rocket fdab0000.npu: Rockchip NPU core 0 version: 1179210309
[ 5.978652] rocket fdac0000.npu: Rockchip NPU core 1 version: 1179210309
[ 5.985602] rocket fdad0000.npu: Rockchip NPU core 2 version: 1179210309

This script runs the Mesa example with the latest available working versions:

#!/bin/bash
IMAGE="grace_hopper.bmp"
WORKBENCH="."
ENVIRONMENT="${WORKBENCH}/python/3.11"
[ "${1}" == "setup" ] || [ ! -f ${ENVIRONMENT}/bin/activate ] && BOOTSTRAP="true"
[ -v BOOTSTRAP ] && python3.11 -m venv ${ENVIRONMENT}
source ${ENVIRONMENT}/bin/activate
[ -v BOOTSTRAP ] && pip install numpy==1.26.4
[ -v BOOTSTRAP ] && pip install pillow==12.0.0
[ -v BOOTSTRAP ] && pip install tflite-runtime==2.14.0
TEFLON_DEBUG=verbose ETNA_MESA_DEBUG=ml_dbgs python ${WORKBENCH}/classification-tflite.py \
    -i ${WORKBENCH}/${IMAGE} \
    -m ${WORKBENCH}/mobilenet_v1_1_224_quant.tflite \
    -l ${WORKBENCH}/labels_mobilenet_quant_v1_224.txt \
    -e /usr/lib64/libteflon.so
deactivate

With a small adjustment, the same Mesa example also runs against LiteRT, the TFLite successor:

#!/bin/bash
IMAGE="grace_hopper.bmp"
WORKBENCH="."
ENVIRONMENT="${WORKBENCH}/python/3.13"
[ "${1}" == "setup" ] || [ ! -f ${ENVIRONMENT}/bin/activate ] && BOOTSTRAP="true"
[ -v BOOTSTRAP ] && python3.13 -m venv ${ENVIRONMENT}
source ${ENVIRONMENT}/bin/activate
[ -v BOOTSTRAP ] && pip install pillow
[ -v BOOTSTRAP ] && pip install ai-edge-litert-nightly
TEFLON_DEBUG=verbose ETNA_MESA_DEBUG=ml_dbgs python ${WORKBENCH}/classification-litert.py \
    -i ${WORKBENCH}/${IMAGE} \
    -m ${WORKBENCH}/mobilenet_v1_1_224_quant.tflite \
    -l ${WORKBENCH}/labels_mobilenet_quant_v1_224.txt \
    -e /usr/lib64/libteflon.so
deactivate

A MediaPipe sample can also be set up easily:

#!/bin/bash
WORKBENCH="."
ENVIRONMENT="${WORKBENCH}/python/3.12"
[ "${1}" == "setup" ] || [ ! -f ${ENVIRONMENT}/bin/activate ] && BOOTSTRAP="true"
[ -v BOOTSTRAP ] && python3.12 -m venv ${ENVIRONMENT}
source ${ENVIRONMENT}/bin/activate
[ -v BOOTSTRAP ] && pip install mediapipe
[ -v BOOTSTRAP ] && pip install pillow
[ -v BOOTSTRAP ] && pip install ai-edge-litert-nightly
python ${WORKBENCH}/detect.py --model efficientdet_lite0.tflite

Unfortunately, the MediaPipe framework does not support the extended delegate functionality of LiteRT (TFLite), so there is no NPU support by that route.

Attachments: classification-3.11-tflite.log, classification-3.13-litert.log, object_detection-3.12-litert.log
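For anyone wondering what the classification-tflite.py step actually does: below is a rough, minimal sketch of loading the quantized MobileNet model through tflite-runtime with the Teflon external delegate, reusing the same file names the first script passes on the command line. It is an illustrative reconstruction under those assumptions, not the actual Mesa example.

import numpy as np
from PIL import Image
import tflite_runtime.interpreter as tflite

# Load the Teflon external delegate so inference is offloaded to the NPU
# (same path as the -e argument in the script above).
delegate = tflite.load_delegate("/usr/lib64/libteflon.so")

interpreter = tflite.Interpreter(
    model_path="mobilenet_v1_1_224_quant.tflite",
    experimental_delegates=[delegate],
)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# The quantized MobileNet expects a 224x224 uint8 RGB image.
image = Image.open("grace_hopper.bmp").convert("RGB").resize((224, 224))
interpreter.set_tensor(input_details[0]["index"],
                       np.expand_dims(np.asarray(image, dtype=np.uint8), axis=0))
interpreter.invoke()

# Report the top-5 labels by score.
scores = interpreter.get_tensor(output_details[0]["index"])[0]
labels = open("labels_mobilenet_quant_v1_224.txt").read().splitlines()
for i in scores.argsort()[-5:][::-1]:
    print(labels[i], scores[i])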