
  1. Rock5 Model B is up for preorder now: pay $5 now and get $50 off when it ships in Q2. It's going to be a beast, with roughly 4x the multi-core performance of the RK3399 and up to 10x the GPU performance in some tests.
  2. ScottP

    Mainline VPU

    Your kernel headers don't match FFmpeg.
  3. The method for installing Home Assistant Supervised in the Armbian script does not work any more. Even if you have a copy of their script, it tries to pull assets from web locations that have been removed, so you have to use their new method. This part of the Armbian script:

```
debconf-apt-progress -- apt-get install -y apparmor-utils apt-transport-https avahi-daemon ca-certificates \
    dbus jq network-manager socat software-properties-common

curl -sL "" | \
    bash -s -- -m ${machine}
```

should be replaced with:

```
apt-get install \
    jq \
    wget \
    curl \
    udisks2 \
    libglib2.0-bin \
    network-manager \
    dbus -y

# next line is for 64-bit platforms; need armv7 instead of aarch64 for 32-bit
wget
dpkg -i os-agent_1.0.0_linux_x86_64.deb

wget
dpkg -i homeassistant-supervised.deb
```

I would not advise using armbian-config to do this: it is very easy to miss error messages, because when the installer finishes the screen instantly refreshes and the errors are lost; then you have to hunt for a log file, if one even exists.
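Since the installer's errors get lost when the screen refreshes, one workaround when running the steps by hand is to mirror everything to a log file. A minimal sketch; the `run_logged` helper and the log filename are my own, not part of the Armbian script:

```shell
# run_logged: run a command while mirroring stdout+stderr to a log file,
# so error messages survive even if the terminal is cleared afterwards.
run_logged() {
    logfile=$1; shift
    "$@" 2>&1 | tee -a "$logfile"
}

# Hypothetical usage with the install steps above:
# run_logged ha-install.log apt-get install -y jq wget curl udisks2 libglib2.0-bin network-manager dbus
# run_logged ha-install.log dpkg -i homeassistant-supervised.deb
```

Note that because of the pipe, the function's exit status is tee's, not the command's; use `set -o pipefail` in shells that support it if you want the command's status.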
  4. ScottP

    Mainline VPU

    Here is a lengthy reply I posted on the Frigate NVR GitHub for someone asking about hardware decoding for Frigate NVR. I would be interested to hear if I am doing anything wrong here or have missed a step.

TL;DR: it does not work reliably for me at the moment, but this is the closest to working I have seen so far. Work is ongoing in the Linux kernel and FFmpeg, so it may work reliably at some point in the future. When the kernel drivers are moved out of staging and the interface to them is stable, I expect to see a pull request on the main FFmpeg git.

This is a long reply with information to test, because I am giving up at this point and moving to a different platform. I would be interested if you find a solution, though, or if I have missed something; hence the detailed reply.

For testing you can try this fork of FFmpeg. It has v4l2-request and libdrm stateless VPU decoding built in, using hantro and rockchip_vdec/rkvdec. Use kernel 5.14.9; Armbian is a convenient way to change kernels (`sudo armbian-config` -> System -> Other kernels). The FFmpeg from the above GitHub has private headers for the kernel interfaces, and they are updated about a month after each release. You must install the correct userspace kernel headers: I just get the kernel source and then do `make -j6 headers_install INSTALL_HDR_PATH=/usr`. Do not use armbian-config to install the kernel headers; it installs the wrong version.
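A quick sanity check that the installed userspace headers actually match the running kernel is to compare `uname -r` against `LINUX_VERSION_CODE` in `/usr/include/linux/version.h` (which `headers_install` writes). This is my own sketch, not part of the FFmpeg fork; the `ver_code` helper is hypothetical:

```shell
# ver_code: turn "major.minor.patch[-suffix]" into the kernel's
# LINUX_VERSION_CODE layout: (major << 16) + (minor << 8) + patch.
ver_code() {
    IFS=. read -r a b c <<EOF
$1
EOF
    c=${c%%[!0-9]*}             # strip suffixes like "-rockchip64"
    echo $(( (a << 16) + (b << 8) + ${c:-0} ))
}

if [ -r /usr/include/linux/version.h ]; then
    headers=$(awk '/LINUX_VERSION_CODE/ {print $3; exit}' /usr/include/linux/version.h)
    running=$(ver_code "$(uname -r)")
    [ "$running" = "$headers" ] && echo "headers match" || echo "headers mismatch ($running vs $headers)"
else
    echo "no userspace headers found at /usr/include/linux/version.h"
fi
```

One caveat: on recent kernels the patch level in `LINUX_VERSION_CODE` saturates at 255, so very high stable patch numbers can legitimately "mismatch".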
Then install the FFmpeg dependencies:

```
sudo apt install libudev-dev libfreetype-dev libmp3lame-dev libvorbis-dev libwebp-dev libx264-dev libx265-dev libssl-dev libdrm2 libdrm-dev pkg-config libfdk-aac-dev libopenjp2-7-dev
```

Run configure. This is a minimal set of options; Frigate includes many more, but I removed many of them to build faster and save memory. (I actually think there are a lot of redundant FFmpeg components in Frigate's default build files, some X11 frame-grabber stuff and codecs nobody uses any more, but that's for a separate discussion.)

```
./configure \
    --enable-libdrm \
    --enable-v4l2-request \
    --enable-libudev \
    --disable-debug \
    --disable-doc \
    --disable-ffplay \
    --enable-shared \
    --enable-libfreetype \
    --enable-gpl \
    --enable-libmp3lame \
    --enable-libvorbis \
    --enable-libwebp \
    --enable-libx265 \
    --enable-libx264 \
    --enable-nonfree \
    --enable-openssl \
    --enable-libfdk_aac \
    --enable-postproc \
    --extra-libs=-ldl \
    --prefix="${PREFIX}" \
    --enable-libopenjpeg \
    --extra-libs=-lpthread \
    --enable-neon
```

Then `make -j6`.

I don't know if this next bit is correct, but it works for me. I don't want to do `make install`, just run the FFmpeg tests from the build directory. To run the tests you must run `sudo ldconfig $PWD $PWD/lib*` first, otherwise the linker will not find the libraries. If you want to try a different kernel version, run `make distclean` in FFmpeg and run `./configure` again. If FFmpeg fails to build, it will be because the private headers do not match the kernel headers; you will see errors like `V4L... undefined` etc.

Then you can do some tests and see if you get valid output. For example, this decodes 15 s from one of my cams:

```
./ffmpeg -benchmark -loglevel debug -hwaccel drm -i rtsp:// -t 15 -pix_fmt yuv420p -f rawvideo out.yuv
```

Checks to make during and after decoding: observe CPU usage. On my system, an RK3399 with a 1.5 GHz little-core and 2 GHz big-core overclock, I get between 17 and 25% CPU on one core; it varies depending on whether it runs on an A53 little core or an A72 big core.
It should be better than that; I think it is down to the way the data is copied around in memory. GStreamer and mpv attempt zero-copy decoding, so they are more efficient. With software decoding, CPU use is about 70% of one core. The RK3328 does not have the two A72 cores and four A53 cores that the RK3399 has, just four A53 cores, so the RK3328 is about half as powerful as the RK3399, since an A72 core is roughly twice as powerful as an A53.

You should see in the FFmpeg debug output where it tries each of the /dev/video interfaces to find the correct codec for decoding. Be warned that FFmpeg will sometimes just fall back to software decode; if that happens you will see much higher CPU usage, and FFmpeg will often spawn a number of threads to use all the cores in your system. Your user should be a member of the "video" group in /etc/group to access the devices without sudo. A log snippet of that section:

```
[h264 @ 0xaaab06cd9070] Format drm_prime chosen by get_format().
[h264 @ 0xaaab06cd9070] Format drm_prime requires hwaccel initialisation.
[h264 @ 0xaaab06cd9070] ff_v4l2_request_init: avctx=0xaaab06cd9070 hw_device_ctx=0xaaab06c549a0 hw_frames_ctx=(nil)
[h264 @ 0xaaab06cd9070] v4l2_request_probe_media_device: avctx=0xaaab06cd9070 ctx=0xffff8804df20 path=/dev/media1 driver=hantro-vpu
[h264 @ 0xaaab06cd9070] v4l2_request_probe_video_device: avctx=0xaaab06cd9070 ctx=0xffff8804df20 path=/dev/video1 capabilities=69222400
[h264 @ 0xaaab06cd9070] v4l2_request_try_format: pixelformat 875967059 not supported for type 10
[h264 @ 0xaaab06cd9070] v4l2_request_probe_video_device: try output format failed
[h264 @ 0xaaab06cd9070] v4l2_request_probe_video_device: avctx=0xaaab06cd9070 ctx=0xffff8804df20 path=/dev/video2 capabilities=69222400
[h264 @ 0xaaab06cd9070] v4l2_request_try_format: pixelformat 875967059 not supported for type 10
[h264 @ 0xaaab06cd9070] v4l2_request_probe_video_device: try output format failed
[h264 @ 0xaaab06cd9070] v4l2_request_probe_media_device: avctx=0xaaab06cd9070 ctx=0xffff8804df20 path=/dev/media0 driver=rkvdec
[h264 @ 0xaaab06cd9070] v4l2_request_probe_video_device: avctx=0xaaab06cd9070 ctx=0xffff8804df20 path=/dev/video0 capabilities=69222400
[h264 @ 0xaaab06cd9070] v4l2_request_init_context: pixelformat=842094158 width=1600 height=912 bytesperline=1600 sizeimage=2918400 num_planes=1
[h264 @ 0xaaab06cd9070] ff_v4l2_request_frame_params: avctx=0xaaab06cd9070 ctx=0xffff8804df20 hw_frames_ctx=0xffff8804faa0 hwfc=0xffff8804e530 pool=0xffff8805e910 width=1600 height=912 initial_pool_size=3
```

Check that the output file contains valid video data by trying to play it with VLC:

```
vlc --rawvid-fps 10 --rawvid-width 1600 --rawvid-height 900 --rawvid-chroma I420 out.yuv
```

Adjust the command to whatever height/width/fps your cameras record at. If all this works, try doing longer decodes in parallel: e.g. if you have 3 cams, run the ffmpeg command for each of them in a separate window and increase the time.
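To script the parallel-decode test rather than juggling windows, something like the sketch below launches one decode per stream and waits for them all. This is my own helper, not from the FFmpeg fork; the camera URLs and ffmpeg path are placeholders:

```shell
# run_decodes: start one background hardware decode per camera URL,
# then block until every decode has exited.
run_decodes() {
    ffmpeg_bin=$1; shift
    for cam in "$@"; do
        "$ffmpeg_bin" -y -benchmark -loglevel debug -hwaccel drm -i "$cam" \
            -t 300 -pix_fmt yuv420p -f rawvideo /dev/null &
    done
    wait    # returns once all background decodes finish
}

# Hypothetical usage with three cameras:
# run_decodes ./ffmpeg rtsp://cam1/stream rtsp://cam2/stream rtsp://cam3/stream
```

Raising `-t` lengthens the soak test; the "resource busy" failure described below tends to show up only after decoders have been up for a while.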
What happens to me is that at some point FFmpeg starts reporting "resource not available/busy" or similar; rebooting makes it work for a while again. You can check which codecs are supported by each of the interfaces /dev/video[012] with `v4l2-ctl --all -d0`; change d0 to d1, d2 etc. to view the other decoders/encoders.

You can also monitor the state of kernel development; most of the work on this is being done by Andrzej Pietrasiewicz. My suggestion is to watch both the FFmpeg GitHub and the kernel commits/patches, find out when they rebase FFmpeg, pull that version, install the matching kernel plus headers, and retest.

I have all the Frigate docker files already created. I basically made a new set of Dockerfiles with an arch of aarch64rockchip and added those to the Makefile. I'll upload them to my GitHub at some point; I see little point in a pull request, since Rockchip is a niche platform with few users in Home Assistant or Frigate, and it does not currently work reliably for me anyway.

I have been trying to get this working for some time now. At kernel 5.4.* there were a bunch of kernel patches you had to apply; nothing worked for me then, and FFmpeg often complained about the pixel format. There were some people on the Armbian forums who claimed to have it working, but I had my doubts; maybe it was wishful thinking and FFmpeg was really using software decode. Most of the effort around this is for video playback, so people can play 1080p and 2/4K video on the desktop and Kodi; there is little information about straight decoding to a pipe as Frigate does, so in your research ignore anything to do with patched libva etc.

For now I am using an old ~2013 i5-4670 four-core/four-thread Haswell with an Nvidia GT640 GPU for Frigate and Home Assistant. For three cams at 1600x900 10 fps, Frigate uses 6% CPU as reported by the Home Assistant supervisor, and it is very stable. With that in mind, and wanting a more power-efficient system, I caved and ordered an Nvidia Jetson 4GB developer kit yesterday.
I am confident I can build Frigate docker containers for that system; it has a hardware decoder similar to the one in their GPUs, and I can also try using CUDA filters and scaling to reduce CPU load for the Frigate detector. A start would be to copy the amd64nvidia Dockerfiles, create an aarch64nvidia arch, and modify from there; it should be mostly the same.
  5. Confirming this patch works for me on a NanoPC-T4 on kernel 5.14.5; applied and built locally.
  6. Edit: this is a duplicate; it was already reported and a pull request is in place. Mods, please delete.

The solution is to revert a patch related to power management. The symptoms are that the Ethernet link comes up, but there is no IP: DHCP cannot assign an address. The following patch resolves the issue on my system. I did not raise this in the bug tracker, since it is not allowed to put things in there that are not supported. I am unsure what else to do with the info; hopefully it will help someone else.

```
From ad63cb5d37f9634fc097249ddda4240c10f041d7 Mon Sep 17 00:00:00 2001
From: Dan Johansen <>
Date: Tue, 7 Sep 2021 16:26:09 +0200
Subject: [PATCH] Revert "net: stmmac: dwmac-rk: fix unbalanced pm_runtime_enable warnings"

This reverts commit 2d26f6e39afb88d32b8f39e76a51b542c3c51674.
---
 drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
index ed817011a94a..280ac0129572 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
@@ -21,6 +21,7 @@
 #include <linux/delay.h>
 #include <linux/mfd/syscon.h>
 #include <linux/regmap.h>
+#include <linux/pm_runtime.h>
 
 #include "stmmac_platform.h"
 
@@ -1528,6 +1529,9 @@ static int rk_gmac_powerup(struct rk_priv_data *bsp_priv)
 		return ret;
 	}
 
+	pm_runtime_enable(dev);
+	pm_runtime_get_sync(dev);
+
 	if (bsp_priv->integrated_phy)
 		rk_gmac_integrated_phy_powerup(bsp_priv);
 
@@ -1536,9 +1540,14 @@ static int rk_gmac_powerup(struct rk_priv_data *bsp_priv)
 
 static void rk_gmac_powerdown(struct rk_priv_data *gmac)
 {
+	struct device *dev = &gmac->pdev->dev;
+
 	if (gmac->integrated_phy)
 		rk_gmac_integrated_phy_powerdown(gmac);
 
+	pm_runtime_put_sync(dev);
+	pm_runtime_disable(dev);
+
 	phy_power_on(gmac, false);
 	gmac_clk_enable(gmac, false);
 }
--
2.33.0
```
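For anyone who has not applied a patch like this before: the post above is in mailbox format, so `git am` can apply it to a kernel tree directly as a commit. A minimal sketch; the tree path and patch filename are hypothetical:

```shell
# apply_kernel_patch: apply an mbox-format patch to a git tree with `git am`,
# which records the revert as a proper commit in that tree.
apply_kernel_patch() {
    tree=$1 patch=$2
    git -C "$tree" am "$patch"
}

# Hypothetical usage, then rebuild/install the kernel as you normally would:
# apply_kernel_patch ~/linux 0001-Revert-net-stmmac-dwmac-rk.patch
# cd ~/linux && make -j6 && sudo make modules_install install
```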
  7. ScottP

    Mainline VPU

    Working for me on kernel 5.14.5 with FFmpeg: an h264 1600x900 15 fps stream from my cams uses 17-22% of one CPU, about 5% of which is pixel-format conversion (usage drops by that amount without the conversion), with no further kernel patches and all FFmpeg dependencies installed from the default repos. This is awesome; I've been wanting this for a while. My use case is Frigate NVR, currently running on an old Intel system with an Nvidia GPU doing the decoding; I can now return my NanoPC-T4 to the task and save some electricity. My FFmpeg command that approximately emulates what Frigate does:

```
ffmpeg -loglevel warning -hwaccel drm -i rtsp:// -pix_fmt yuv420p -f rawvideo pipe:
```

Many thanks @jernej, @Kwiboo and everyone else who made this possible. Now I need to find out why I get block I/O errors and corruption on eMMC on all the 5.13 and 5.14 kernels I have tried; I am testing from an SD card for now.
  8. ScottP

    eMMC errors

    I get similar errors from any 5.13 or 5.14 kernel on the NanoPC-T4, which also has the RK3399. It works OK on 5.10.
  9. Maybe look at GalliumOS for Chromebooks. Being battery-powered devices, you likely want video hardware acceleration and battery-friendly tweaks; for the RK3399, some of the work done for Chromebooks was leveraged for the wider community. The other thing is trackpad support, which is notoriously bad in Linux; GalliumOS uses the original ChromeOS drivers. It has an XFCE (Xubuntu) desktop.
  10. You can disable zram in /etc/default/armbian-zram-config. I don't see the need for it with 4 GB of RAM and some swap on an SSD; the idea is likely to preserve SD cards and eMMC on systems with low memory. On the RK3399 some memory is reserved for the GPU/VPU, I think, so on the NanoPC-T4:

```
U-Boot 2020.10-armbian (Mar 08 2021 - 17:46:43 +0000)
SoC: Rockchip rk3399
Reset cause: RST
Model: FriendlyElec NanoPC-T4
DRAM: 3.9 GiB
```
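Not from the post, but a quick way to confirm zram swap is actually gone after disabling it and rebooting is to list the active swap devices:

```shell
# Active swap devices; zram swap shows up as /dev/zram0, /dev/zram1, ...
# After disabling zram, only the SSD swap partition/file should remain.
cat /proc/swaps
# (equivalently: swapon --show, from util-linux)
```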
  11. ScottP

    Mainline VPU

    I am attempting to decode h264 RTSP streams from cameras using the VPU. For this, FFmpeg just needs to decode and write output to a pipe in the same yuv420p pixel format. I have tried various branches from the @Kwiboo GitHub on Armbian with a 5.10.21 kernel. When I try, for example:

```
./ffmpeg -loglevel debug -hwaccel drm -i ~/Jellyfish_1080_10s_30MB.mp4 -benchmark -f null -
```

I get this, and it falls back to software decoding:

```
[h264 @ 0xaaaac0c432f0] Format drm_prime chosen by get_format().
[h264 @ 0xaaaac0c432f0] Format drm_prime requires hwaccel initialisation.
[h264 @ 0xaaaac0c432f0] ff_v4l2_request_init: avctx=0xaaaac0c432f0 hw_device_ctx=0xaaaac0c3e2e0 hw_frames_ctx=(nil)
[h264 @ 0xaaaac0c432f0] v4l2_request_probe_media_device: avctx=0xaaaac0c432f0 ctx=0xffffa8098030 path=/dev/media0 driver=hantro-vpu
[h264 @ 0xaaaac0c432f0] v4l2_request_probe_video_device: avctx=0xaaaac0c432f0 ctx=0xffffa8098030 path=/dev/video1 capabilities=69222400
[h264 @ 0xaaaac0c432f0] v4l2_request_try_format: pixelformat 875967059 not supported for type 10
[h264 @ 0xaaaac0c432f0] v4l2_request_probe_video_device: try output format failed
[h264 @ 0xaaaac0c432f0] v4l2_request_probe_video_device: avctx=0xaaaac0c432f0 ctx=0xffffa8098030 path=/dev/video2 capabilities=69222400
[h264 @ 0xaaaac0c432f0] v4l2_request_try_format: pixelformat 875967059 not supported for type 10
[h264 @ 0xaaaac0c432f0] v4l2_request_probe_video_device: try output format failed
[h264 @ 0xaaaac0c432f0] v4l2_request_probe_media_device: avctx=0xaaaac0c432f0 ctx=0xffffa8098030 path=/dev/media1 driver=rkvdec
[h264 @ 0xaaaac0c432f0] v4l2_request_probe_video_device: avctx=0xaaaac0c432f0 ctx=0xffffa8098030 path=/dev/video3 capabilities=69222400
[h264 @ 0xaaaac0c432f0] v4l2_request_probe_video_device: set controls failed, Invalid argument (22)
[h264 @ 0xaaaac0c432f0] Failed setup for format drm_prime: hwaccel initialisation returned error.
[h264 @ 0xaaaac0c432f0] Format drm_prime not usable, retrying get_format() without it.
```
```
[h264 @ 0xaaaac0c432f0] Format yuv420p chosen by get_format().
```

Using strace, I can see that the Invalid argument (22) is coming from ioctl on /dev/video[0123] as it cycles through each one. It may be pixel format 875967059 that is the issue here; how can that be resolved, kernel patches? I feel I may be missing something fundamental or not understanding something correctly. For example, are a window manager and X11 required for any of this? Most of the discussions are about rendering, from what I can gather. My requirement is for FFmpeg to decode and output to a pipe while saving segments along the way in the original form, and possibly also serving up the stream via RTMP, again in the original format. From the pipe, the stream is examined for motion; if motion is found, it is passed off to object recognition on a Coral USB device. Would I be better off trying to get this working on the legacy kernel?

Edit: I removed -hwaccel_output_format drm_prime from the ffmpeg command; it's the same error without it.
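Since FFmpeg can silently fall back to software decoding, a quick check I would suggest is to grep the debug output for the telltale lines quoted above. The helper is my own sketch:

```shell
# check_hwdecode: inspect an ffmpeg -loglevel debug capture and report
# whether the drm_prime hardware path stuck or was abandoned.
check_hwdecode() {
    log=$1
    if grep -q "Format drm_prime chosen" "$log" && \
       ! grep -q "Format drm_prime not usable" "$log"; then
        echo "hardware decode path active"
    else
        echo "fell back to software decode"
    fi
}

# Usage: ./ffmpeg -loglevel debug ... 2> decode.log; check_hwdecode decode.log
```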
  12. I have a NanoPC-T4 (RK3399) with the FriendlyARM Ubuntu distro, not Armbian as yet; I plan to migrate, hence why I am reading these forums. In case this provides a data point:

```
scott@hass:~$ sudo lspci -vv | grep -E 'PCI bridge|LnkCap'
00:00.0 PCI bridge: Fuzhou Rockchip Electronics Co., Ltd Device 0100 (prog-if 00 [Normal decode])
	LnkCap:	Port #0, Speed 5GT/s, Width x4, ASPM L1, Exit Latency L0s <256ns, L1 <8us
	LnkCap:	Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L0s unlimited, L1 <64us
scott@hass:~$ sudo hdparm -t /dev/nvme0n1p2
/dev/nvme0n1p2:
 Timing buffered disk reads: 1784 MB in 3.00 seconds = 594.26 MB/sec
```
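As a sanity check on that data point: the first LnkCap line shows a Gen2 (5 GT/s) x4 link, and PCIe Gen2 uses 8b/10b encoding, so the theoretical link bandwidth works out to about 2 GB/s:

```shell
# 5 GT/s per lane, 8b/10b encoding (8 usable bits per 10 transferred),
# 8 bits per byte, 4 lanes -> theoretical MB/s for the x4 Gen2 link.
echo $(( 5 * 8 / 10 * 1000 / 8 * 4 ))   # prints 2000 (MB/s)
```

So the measured 594 MB/s sits well below the link ceiling, suggesting the bottleneck is elsewhere in the read path rather than the PCIe link width or speed.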