
Posted
On 8/21/2020 at 8:45 PM, jock said:

I don't understand exactly what you mean when you say "libreelec dtb" and adding "dtb"...

Newer images (like those on the armbian download page) use device tree overlays, so you don't have to substitute or handle dtb files manually.

 

Did you try to follow my suggestion to run rk322x-config, not reboot yet, but remove the emmc string from the overlays line in /boot/armbianEnv.txt and then reboot?

That's not working either.. I'll stick with:

overlay_prefix=rk322x
fdtfile=rk322x-box.dtb
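
For context, the change I tried was dropping the emmc entry from the overlays line in /boot/armbianEnv.txt, i.e. going from something like this (the overlay names other than emmc are placeholders, your file may list different ones):

overlay_prefix=rk322x
fdtfile=rk322x-box.dtb
overlays=emmc nand

to this:

overlays=nand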

 

Posted
8 hours ago, nokirunner said:

why when we force "modesetting" on card1 we have 2D acceleration but the 3D acceleration disappears?

Reread my log analysis of Xorg.0_driver_as_rockchip.log. You set up two screens there. One is driven via modesetting with the 3D render node (card1) acting as the display subsystem, which has no scan-out hardware, so you can't see the result of any 3D hardware rendering from screen 0 on any monitor. The second screen is driven by fbdev with the real display subsystem (card0), which does have scan-out hardware. Hence my proposal to use a Section "Device" with Driver "fbdev": it delivers the same output without setting up the unusable render node screen. This only means you get 2D hardware acceleration via fbdev emulation, without 3D support. fbdev is like armsoc in that it is also missing a submodule for 3D support; it used to do everything via the fbdev device and is hence deprecated.
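
To be explicit, the stanza I mean is just something like this (the Identifier string is arbitrary and the fbdev DDX package, usually xserver-xorg-video-fbdev, has to be installed):

Section "Device"
        Identifier      "FBDEV-0"
        Driver          "fbdev"
EndSection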

 

8 hours ago, nokirunner said:

wouldn't it be the easiest way to try  change the code by telling it to use the 3d acceleration libraries in these conditions?

Same as for armsoc, you need a submodule. But armsoc is the better choice since you can make use of full KMS/drm acceleration. Alternatively you can rewrite modesetting to not delegate everything to OpenGL and use dma_buf for buffer pass around.

 

4 hours ago, nokirunner said:

hoping these problems with the rk322x socs will be solved

This is not a problem of rk322x SoCs only; it applies to all devices that use render nodes. The less CPU power the device has, the bigger the disadvantage the buffer pass-around causes.

Posted
11 hours ago, usual user said:

This is all about where the memory in which the operations take place is located. In the PC world there is only one "GPU" IP; it implements everything: the display engine for scan-out and the GPU for OpenGL. Once the GPU has rendered directly into the scan-out memory, the hardware of the display subsystem outputs it to the monitor. So offloading everything onto OpenGL is a good idea: it is a device-independent standard, and doing composition for video playback is also a fast path. There is no need to support display subsystem acceleration on the CPU side.

Ok, I think I got it: modern PC video cards/chips have 2D+3D+VPU in the same chip from the same vendor, and they are happy their own way because they decide what kind of buffers they need and share them in their own formats. No need to move buffers between the CPU and the video chip during rendering/decoding.

 

11 hours ago, usual user said:

But we are dealing with SoCs. They have several IPs whose memory is separated, i.e. they need to pass memory buffers around so that they can work on shared data. The buffer format has to be identical between different IPs, otherwise you have to convert. A "dumb buffer" format is always possible, but you lose the acceleration features of special formats, and those require device-dependent knowledge. E.g. a display subsystem may support the NV12 format for scan-out. Uploading NV12 data for compositing on the 3D GPU and then forwarding it via a dumb buffer to the display subsystem will not improve the performance, but forwarding it via dma_buf to the display subsystem will.

The impact of improper buffer pass-around can be seen in the uploaded glmark2 logs: the 3D performance decreases because the GPU cannot be served fast enough.

This looks clear too: sharing buffers between CPU and GPU/VPU in the ARM world is a little more complicated because different IPs are involved (i.e. 2D is from Rockchip, the display subsystem from DesignWare, 3D from ARM, the VPU from Hantro, etc...)

 

About the performance hit, you mean the wayland vs Xwindow logs you posted before?

 

11 hours ago, usual user said:

In the Mali proprietary case that code took care of the proper buffer pass-around via the proprietary kernel interface. But that doesn't belong in the Mesa counterpart, as it only cares about OpenGL and it doesn't matter how IPs interact; it provides only buffer import and export. For mainline Xorg the submodule is the proper place. For Weston it is the drm-backend, which it already has.

I think I understand it a bit better, but I'm still missing what "proprietary kernel interface" means.

The actual armsoc driver that works for rk322x asks the kernel DRM to create buffers; how they are shared, and why that code works and lets the Mali proprietary OpenGL ES libraries work, is still not clear to me.

 

2 hours ago, usual user said:

Same as for armsoc, you need a submodule. But armsoc is the better choice since you can make use of full KMS/drm acceleration. Alternatively you can rewrite modesetting to not delegate everything to OpenGL and use dma_buf for buffer pass around.

Is the Option "Debug" "dmabuf_capable" X.org option for modesetting supposed to help with that? Experimenting with it, as suggested by the Lima Mesa developers too, does not seem to provide real benefits on Lima.

 

Posted
38 minutes ago, jock said:

[...cut...]

I think I understand it a bit better, but I'm still missing what "proprietary kernel interface" means.

The actual armsoc driver that works for rk322x asks the kernel DRM to create buffers; how they are shared, and why that code works and lets the Mali proprietary OpenGL ES libraries work, is still not clear to me.

 

The driver is opensourced, see https://developer.arm.com/tools-and-software/graphics-and-gaming/mali-drivers/utgard-kernel

 

Posted
2 hours ago, jock said:

About the performance hit, you mean the wayland vs Xwindow logs you posted before?

Exactly, they were done on the same device. The buffer pass-around forces the 3D GPU IP to slow down because the required buffers are not available in time. The performance hit on the display output isn't reflected in the log, but by visual inspection it makes a huge difference. In both cases, the 3D rendering power is sufficient to allow a smooth 60Hz display.

 

2 hours ago, jock said:

The actual armsoc driver that works for rk322x asks the kernel DRM to create buffers; how they are shared, and why that code works and lets the Mali proprietary OpenGL ES libraries work, is still not clear to me.

The DRM scan-out buffer is handed to the Mali proprietary OpenGL ES libraries and they do the buffer dance inside the blob via the Mali proprietary kernel interface. When 3D rendering is done the buffer is handed back to DRM and the scan-out takes place. This is what the submodule has to implement with the Mali render node (/dev/dri/renderD128).

 

2 hours ago, jock said:

Is the Option "Debug" "dmabuf_capable" X.org option for modesetting supposed to help with that?

In the early days it was a safeguard to protect stable installations: it made dma_buf support accessible and usable where drivers provided it. I don't know if it is still required or meanwhile obsolete; it is still dangling around in my configurations.
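
For reference, it goes into the modesetting Device section like this (the Identifier string is arbitrary):

Section "Device"
        Identifier      "KMS-1"
        Driver          "modesetting"
        Option          "Debug"         "dmabuf_capable"
EndSection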

Posted
On 8/28/2020 at 8:55 PM, xwiggen said:

This is only the proprietary kernel part, which is already implemented in mainline via /dev/dri/renderD128. The missing functionality is how the binary blob uses it, which must be implemented via the as yet non-existent armsoc submodule. glamor has it already, but using it via modesetting is suboptimal because of the KMS/drm implementation design decisions there.

Posted

@usual user

I'm reading some DRM documentation at the moment, I'd like to understand the whole thing a bit better.

In the meantime I also tried weston on debian bullseye (Mesa is 20.1.5, if I remember correctly) with kernel 5.8; performance with the Mali-400 was not so exciting... glmark2-wayland crashed after the third benchmark, glmark2-es2-wayland worked in windowed mode but froze after the jellyfish test in fullscreen (1080p) mode.

 

I'm not confident the FPS numbers are real either... in windowed mode I got a final score of ~130 FPS, but the animations were clearly not as smooth as you'd expect at that framerate.

 

Firefox in wayland mode is also pretty unusable: very slow and choppy, and the mouse cursor also became choppy under heavy load.

At the moment there is a LibreELEC patch that demotes the DRM cursor plane to an overlay plane, so no hardware cursor for now.

 

Posted

Inspired by you, I have also done some more tests on my side.

18 hours ago, jock said:

froze after the jellyfish test in fullscreen (1080p) mode

For me it is also freezing. Since I am on panfrost, we can rule out lima and panfrost for this. The one we still have in common is rockchipdrm; i.MX6 uses imxdrm and does not suffer from this flaw, so IMHO the display subsystem is responsible for this error.

 

18 hours ago, jock said:

glmark2-wayland crashed after the third benchmark

I don't know how mature the lima GL support in Mesa already is, so IMHO Mesa is to blame here.
But we are dealing with 2D acceleration functions of the display subsystem for Xwindow, so these errors are not relevant for our further investigations.

 

18 hours ago, jock said:

At the moment there is a LibreELEC patch that demotes the DRM cursor plane to an overlay plane

The concept of a dedicated cursor plane is gone in atomic modesetting. The plane is handled as any other plane, but the constraints of the plane are still obeyed. The selection of a suitable cursor plane will most probably select this one, but any other one can be chosen.

Posted
14 hours ago, nokirunner said:

Guys, I had an idea: if we can roughly identify, in a technical way, the problem this rk322x Rockchip driver has, whether it is in the rockchip DRM kernel driver or in Mesa Lima, we could open a bug report on the Mesa Lima issue tracker or on the kernel DRM (lima) issues, or both; maybe we are lucky and they already know how to fix it.

 

If you don't pinpoint the exact problem, your issue will probably not be addressed. Also, this is not a real "bug", but probably a missing feature or set of features that slows things down along the pipeline.

If you remember the comments about the Allwinner A20 performance, developers made some hypotheses, but clearly there is not much time to do experiments on their side.

Posted
On 8/22/2020 at 6:19 PM, nokirunner said:

@jock here I am again, I have some interesting information that I discovered while doing experiments.
If you remember, some time ago I did some experiments with xorg and media acceleration on legacy kernels.
I had "found" that xorg worked with both the "modesetting" and the "armsoc" driver, but that armsoc was definitely much slower than modesetting:
https://forum.armbian.com/topic/12656-csc-armbian-for-rk322x-tv-boxes/?do=findComment&comment=104900

 


So I wanted to experiment with the 5.6 kernel as well; here's what I found:

On the legacy kernel, armsoc and libmali (specifically libmali-rk-utgard-400-r7p0-x11_1.7-1_armhf.deb) conflict: after uninstalling this library and starting xorg with the armsoc driver, X11 goes fast. Obviously the OpenGL ES functionality is lost, but that does not matter, I just wanted to check.
I found relevant information that could also be useful on kernel 5.6.
Investigating, I saw that an armsoc driver was available in the apt repositories; I tried to install it, but nothing, X11 did not start.
So I tried to install xserver-xorg-video-armsoc_1.4-2_armhf.deb from the legacy kernel media pack and it works! X11 starts... well, partially: we lose the window manager, the background and the desktop icons (all black), but we can see that the acceleration is active and all there! So I tried to mix the Lima drivers with armsoc, and it seems to work, or at least glxinfo tells me that the OpenGL Mali 400 acceleration is present.


Section "ServerFlags"
        Option  "AutoAddGPU" "off"
EndSection

Section "Device"
    Identifier  "armsoc"
    Driver      "armsoc"
EndSection


Section "OutputClass"
        Identifier "Lima"
        MatchDriver "armsoc"
        Driver "modesetting"
        Option "PrimaryGPU" "true"
EndSection



Now it is evident that this armsoc driver does not work properly; I assume it is compiled for the legacy kernel.
I also searched the net to understand why this driver works while the one in the repository doesn't,
and that's because this version is compatible with Rockchip SoCs:


https://github.com/paolosabatino/xf86-video-armsoc

 


Now I assume that by compiling this armsoc driver for kernel 5.6 there is a good chance it will work correctly and can be mixed with the Lima 3D driver.

My limitation at the moment is that I don't know how to compile it: I've never done a cross compilation and don't (currently) know how to do one. I also don't know how to create installable .deb files from a build (my other current limitation).

You'll need to recompile libdrm with rockchip support, but it looks like rockchip-linux made some of their repos private.
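
If someone wants to try anyway, the usual Xorg DDX autotools dance should apply to that fork too; this is only a rough sketch (package names are the common Debian/Ubuntu ones and may differ, and I haven't verified the fork keeps autogen.sh):

sudo apt-get install build-essential git automake libtool pkg-config xutils-dev xserver-xorg-dev libdrm-dev
git clone https://github.com/paolosabatino/xf86-video-armsoc
cd xf86-video-armsoc
./autogen.sh --prefix=/usr
make
sudo make install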

 

Posted

Hi. This is my experience with the H96mini (CPU 3228A, no SD slot).

 

Download and compile rkdeveloptool (https://github.com/rockchip-linux/rkdeveloptool)
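
If you have never built it, the steps are roughly these (from the project's README as far as I remember, so double-check there; package names are the Debian/Ubuntu ones):

sudo apt-get install git build-essential dh-autoreconf pkg-config libusb-1.0-0-dev libudev-dev
git clone https://github.com/rockchip-linux/rkdeveloptool
cd rkdeveloptool
autoreconf -i
./configure
make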

Download rk322x_loader.bin and your favorite OS image from the forum (decompress the image from .xz to .img).

 

While keeping the button inside the AV port pressed (use a toothpick), connect the power cable to the device. Release the button inside the AV port, then connect the TV box to the PC using USB port number 1, the one closest to the edge.

The device will boot in maskrom mode.

Run the following commands:

./rkdeveloptool db rk322x_loader.bin

./rkdeveloptool wl 0x0 your_image_os.img

./rkdeveloptool rd

 

Don't power off the device.

Connect the device to a monitor or TV via HDMI.

Wait while the device boots.

 

Good luck!

Posted
19 hours ago, xwiggen said:

You'll need to recompile libdrm with rockchip support, but it looks like rockchip-linux made some of their repos private.

Maybe, but anyway I gave up, I tried for a couple of days to "try my luck by experimenting", but I couldn't get much; the level of skills needed is well beyond my abilities.

Posted
4 hours ago, nokirunner said:

but anyway I gave up, I tried for a couple of days to "try my luck by experimenting"

I gave up xorg and switched to plasma-desktop. Kwin supports a wayland backend, so I get a lightning-fast graphics desktop with all the bells and whistles. OK, the bugs in the panfrost stack still exist, but this environment makes efficient use of everything that is available. Thanks to the configurability of kwin, I can have the same look and feel as my previous desktop.

Posted
56 minutes ago, usual user said:

I gave up xorg and switched to plasma-desktop. Kwin supports a wayland backend, so I get a lightning-fast graphics desktop with all the bells and whistles. OK, the bugs in the panfrost stack still exist, but this environment makes efficient use of everything that is available. Thanks to the configurability of kwin, I can have the same look and feel as my previous desktop.

I think lima still has a way to go before a proper KDE desktop works. I tried GNOME on X.org and it worked, but it was slow and kept crashing. Even weston crashes often with the latest stable Mesa in debian bullseye and kernel 5.8.

Panfrost instead looks much more advanced and is quite usable.

Posted
2 hours ago, Reddwarf said:

Does anyone have the original firmware for the Q96 Home 4k tv-box with RK3228 cpu? Or any other suitable firmware that I can flash in maskrom mode to unbrick the box?

I don't remember: does your box have NAND flash? If the answer is yes, always remember to unplug the power every time you reboot the box.

Posted
On 9/10/2020 at 6:55 PM, jock said:

I don't remember: does your box have NAND flash? If the answer is yes, always remember to unplug the power every time you reboot the box.

No, this box has eMMC. However, I've found that the safest approach (read: "stay out of trouble") is to power-cycle at every reboot anyway ;)

 

Posted

Updated mainline images with kernel 5.8.9

 

Finally the mainline kernel has been pushed further and it looks very promising.

Link to the new images are in the first page.

 

PS: trying the Ubuntu Focal image, I'm happy Firefox is performing decently, but I had to blacklist the lima module to get a good 2D desktop experience and (of course) disable window compositing.
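
For anyone wanting to reproduce the blacklist, the usual Debian/Ubuntu way is roughly this (the file name is arbitrary):

echo "blacklist lima" | sudo tee /etc/modprobe.d/blacklist-lima.conf
sudo update-initramfs -u   # only needed if the module gets loaded from the initramfs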

Posted
On 9/13/2020 at 3:51 PM, jock said:

Updated mainline images with kernel 5.8.9

[...cut...]

PS: trying the Ubuntu Focal image, I'm happy Firefox is performing decently, but I had to blacklist the lima module to get a good 2D desktop experience and (of course) disable window compositing.

Well, the period of despair has passed, now I'm ready to throw myself back into the fray and experiment. I'm curious how Firefox behaves; I'm writing this comment while the image is being written to the SD card...
...so no Lima, no 3D, or can we recover the 3D acceleration from the old proprietary drivers?
I suspect not: no 3D at all.
Funny that by blacklisting Lima you get a bit of decent 2D; it is very clear, then, that the 3D drivers are doing something wrong... despite my having noticed that Lima's 3D worked. When I finally come to understand something, I will be very happy.

Posted
On 9/13/2020 at 3:51 PM, jock said:

had to blacklist the lima module to get a good 2D desktop experience

lima is not the culprit, it is modesetting which uses it improperly.
Disable it for modesetting by:

Section "Device"
        Identifier      "KMS-1"
        Driver          "modesetting"
        Option          "AccelMethod"           "none"
EndSection

and leave lima in place so e.g. kodi-gbm and Wayland compositors can make use of it.

You should also include a stanza like this:

Section "OutputClass"
        Identifier      "dwhdmi-rockchip"
        MatchDriver     "rockchip"
        Option          "PrimaryGPU"    "TRUE"
EndSection
Section "OutputClass"
        Identifier      "Meson-IP"
        MatchDriver     "meson"
        Option          "PrimaryGPU"    "TRUE"
EndSection
Section "OutputClass"
        Identifier      "Exynos-IP"
        MatchDriver     "exynos"
        Option          "PrimaryGPU"    "TRUE"
EndSection

so that modesetting immediately selects the correct /dev/dri/cardX node for the display subsystem, without autoprobing and guessing. Having all the stanzas for the drivers you want to support in place simultaneously does no harm, because any given device is equipped with only one of these IPs and only that one will match.

Posted
1 hour ago, usual user said:

lima is not the culprit, it is modesetting which uses it improperly.

Yeah I remember the discussion about modesetting and buffers bouncing between IPs.

At the moment I'm considering shipping a default configuration of this kind:

 

Section "ServerFlags"
	Option "AutoAddGPU" "off"
	Option "Debug" "dmabuf_capable"
EndSection

Section "OutputClass"
	Identifier "rk322x"
	Driver "modesetting"
	MatchDriver "rockchip"
	Option "AccelMethod" "none"
	Option "PrimaryGPU "true"
EndSection

which, despite resorting to software rasterization for 3D on X.org, gives a decent 2D experience and of course still allows GBM consumers and wayland clients to also work with 3D.

 

I still have two questions:

  1. Is it really so hard to modify modesetting to provide just basic KMS acceleration but using "shareable" buffers?
  2. Trying weston, with Lima correctly working and a DRM plane for the cursor, I still have the annoying issue with the cursor being "stuck" during heavy CPU load. This kind of problem is also shared with weston on panfrost (on rk3288), and I'm really puzzled if there is something misconfigured on my side or there is still something wrong in the drivers.
Posted
4 hours ago, jock said:

At the moment I'm considering shipping a default configuration of this kind

That configuration looks sane to me.

 

4 hours ago, jock said:

just basic KMS acceleration

That is the problem: basic KMS/drm acceleration does not suffice, you need the full set to be efficient. E.g. you also want accelerated video output from the VPUs, and moving an entire frame buffer just to scroll a line somewhere on the screen is very inefficient. But every drm IP has different capabilities. Take i.MX6 for example: its drm IP has very few capabilities but there is a separate 2D GPU, and the 3D GPU renders in a format that can't be scanned out by the drm IP; it has to be converted by the 2D GPU first. This is something modesetting can't cope with, and probably the reason why the armada driver exists.
I don't know why no suitable driver is available for Xorg when code already exists for the Wayland drm backends. Perhaps the structure of Xorg is not suitable to implement it in a similar way, or the relevant developers have already moved to Wayland.

 

4 hours ago, jock said:

I still have the annoying issue with the cursor being "stuck" during heavy CPU load. This kind of problem is also shared with weston on panfrost (on rk3288), and I'm really puzzled if there is something misconfigured on my side or there is still something wrong in the drivers.

I had to switch to kernel 5.9.0-rc5 with some panfrost patches from linux-next on top. For Mesa I'm using the current master branch. With this in place I get flawless operation. The only issue left is some sort of memory pressure under heavy GPU use:

Spoiler

[335437.771871] panfrost_gem_shrinker_scan: 4 callbacks suppressed
[335437.771877] Purging 524288 bytes
[335440.144408] Purging 4194304 bytes
[335440.170420] Purging 4325376 bytes
[335440.206017] Purging 655360 bytes
[335440.224655] Purging 4337664 bytes
[335442.905745] Purging 528384 bytes
[335469.221833] Purging 585728 bytes
[335469.230491] Purging 4337664 bytes
[335469.291804] Purging 4194304 bytes
[335469.314880] Purging 4583424 bytes
[335470.128892] Purging 524288 bytes
[335470.141290] Purging 610304 bytes
[335470.151784] Purging 528384 bytes
[335470.160796] Purging 4194304 bytes
[335470.334194] Purging 720896 bytes
[335471.603038] Purging 675840 bytes
[335475.871009] panfrost_gem_shrinker_scan: 3 callbacks suppressed
[335475.871016] Purging 540672 bytes
[335475.884016] Purging 540672 bytes
[335475.895508] Purging 524288 bytes
[335475.904032] Purging 524288 bytes
[335475.916646] Purging 548864 bytes
[335475.924438] Purging 540672 bytes
[335475.932830] Purging 544768 bytes
[335475.937817] Purging 548864 bytes
[335476.615346] Purging 540672 bytes
[335476.620240] Purging 540672 bytes
[335489.987048] panfrost_gem_shrinker_scan: 13 callbacks suppressed
[335489.987051] Purging 610304 bytes
[335489.995387] Purging 524288 bytes
[335490.005181] Purging 4620288 bytes
[335490.016824] Purging 4493312 bytes
[335490.082926] Purging 659456 bytes
[335490.134630] Purging 528384 bytes
[335512.840731] Purging 524288 bytes

 

But development is a moving target and I see this which looks somehow related.

Posted
21 hours ago, usual user said:

That is the problem: basic KMS/drm acceleration does not suffice, you need the full set to be efficient. E.g. you also want accelerated video output from the VPUs, and moving an entire frame buffer just to scroll a line somewhere on the screen is very inefficient. But every drm IP has different capabilities. Take i.MX6 for example: its drm IP has very few capabilities but there is a separate 2D GPU, and the 3D GPU renders in a format that can't be scanned out by the drm IP; it has to be converted by the 2D GPU first. This is something modesetting can't cope with, and probably the reason why the armada driver exists.
I don't know why no suitable driver is available for Xorg when code already exists for the Wayland drm backends. Perhaps the structure of Xorg is not suitable to implement it in a similar way, or the relevant developers have already moved to Wayland.

I understand. Do you know what the capabilities of the rockchip/designware DRM IP are? How is this linked to the VOPs? Does RGA, which I guess is the 2D GPU for Rockchip, have any role in this, and does it currently do any acceleration for pixel format conversion?

Ah so many questions around the DRM subsystem, way too complex for a newbie :unsure:

 

22 hours ago, usual user said:

I had to switch to kernel 5.9.0-rc5 with some panfrost patches from linux-next on top. For Mesa I'm using the current master branch. With this in place I get flawless operation. The only issue left is some sort of memory pressure under heavy GPU use:

Yeah, this is a longstanding problem; I hope it will be addressed sooner or later.

Posted

btw. how about a wayland compositor and xwayland in it to run x apps? I was surprised recently, when I played around with this in another context on a Raspberry Pi (running 64-bit Ubuntu and using weston as compositor), that an OpenGL app running in xwayland that way was 3D hw accelerated via the vc4 mesa driver ...

Posted
4 hours ago, jock said:

Do you know what the capabilities of the rockchip/designware DRM IP are?

For the details you have to read the TRM (e.g.) of the specific device.

 

4 hours ago, jock said:

VOP

VOP is how Rockchip is calling the display subsystem components.

 

4 hours ago, jock said:

Does RGA, which I guess is the 2D GPU for Rockchip, have any role in this, and does it currently do any acceleration for pixel format conversion?

It is exposed as a mem2mem device and is usable e.g. for hardware-accelerated video scaling in video pipelines.
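
As a quick userspace check, one hedged way to poke at the mem2mem node is via GStreamer's v4l2convert element (assuming it picks up the RGA device; the element ships with gstreamer1.0-plugins-good, and v4l2-ctl is from v4l-utils):

v4l2-ctl --list-devices
gst-launch-1.0 videotestsrc num-buffers=300 ! video/x-raw,width=1920,height=1080 ! v4l2convert ! video/x-raw,width=1280,height=720 ! fakesink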

 

2 hours ago, hexdump said:

how about a wayland compositor and xwayland in it to run x apps

Sorry, you are too late in the game ;)

 

On 9/10/2020 at 5:52 PM, usual user said:

I gave up xorg and switched to plasma-desktop. Kwin supports a wayland backend, so I get a lightning-fast graphics desktop with all the bells and whistles.

With my current setup, as soon as the memory pressure issue is resolved, I am feature complete.

Posted (edited)

Hello

I want to install a tvheadend server (vdr) on my Huawei EC6108 V9E (rk3228b) with a USB DVB stick, a Geniatech T230C (officially supported by the kernel). Unfortunately it doesn't work: it seems some kernel module or driver/firmware related to dvb-usb is not being loaded properly. Here are the dmesg and lsmod logs.

dmesg log

[  149.170716] usb 1-1: new high-speed USB device number 2 using dwc2
[  149.647638] usb 1-1: New USB device found, idVendor=0572, idProduct=c68a, bcdDevice= 8.00
[  149.647663] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[  149.647679] usb 1-1: Product: EyeTV Stick
[  149.647693] usb 1-1: Manufacturer: Geniatech
[  149.647707] usb 1-1: SerialNumber: 161206
[  149.922662] usb 1-1: dvb_usb_v2: found a 'MyGica Mini DVB-T2 USB Stick T230C v2' in warm state
[  149.923165] usb 1-1: dvb_usb_v2: will pass the complete MPEG2 transport stream to the software demuxer
[  149.923197] dvbdev: DVB: registering new adapter (MyGica Mini DVB-T2 USB Stick T230C v2)
[  149.923212] usb 1-1: media controller created
[  149.924595] dvbdev: dvb_create_media_entity: media entity 'dvb-demux' registered.
[  149.938485] usbcore: registered new interface driver dvb_usb_dvbsky
lsmod log

Module                  Size  Used by
dvb_usb_dvbsky         20480  0
zstd                   16384  4
snd_soc_hdmi_codec     16384  1
snd_soc_simple_card    20480  0
snd_soc_simple_card_utils    20480  1 snd_soc_simple_card
gpio_ir_recv           16384  0
cpufreq_dt             16384  0
hantro_vpu             73728  0
rockchip_vdec          77824  0
dw_hdmi_i2s_audio      16384  0
v4l2_h264              16384  2 rockchip_vdec,hantro_vpu
videobuf2_vmalloc      16384  1 hantro_vpu
rockchip_rga           20480  0
videobuf2_dma_sg       20480  1 rockchip_rga
videobuf2_dma_contig    20480  2 rockchip_vdec,hantro_vpu
videobuf2_memops       16384  3 videobuf2_dma_sg,videobuf2_dma_contig,videobuf2_vmalloc
v4l2_mem2mem           24576  3 rockchip_vdec,hantro_vpu,rockchip_rga
lima                   49152  0
videobuf2_v4l2         24576  4 rockchip_vdec,hantro_vpu,v4l2_mem2mem,rockchip_rga
videobuf2_common       40960  5 rockchip_vdec,hantro_vpu,v4l2_mem2mem,videobuf2_v4l2,rockchip_rga
gpu_sched              28672  1 lima
rockchip_thermal       24576  0
snd_soc_rockchip_i2s    16384  2
snd_soc_rockchip_pcm    16384  1 snd_soc_rockchip_i2s
zram                   28672  2
snd_soc_core          155648  5 snd_soc_rockchip_i2s,snd_soc_hdmi_codec,snd_soc_simple_card_utils,snd_soc_rockchip_pcm,snd_soc_simple_card
snd_pcm_dmaengine      16384  1 snd_soc_core
snd_pcm                86016  3 snd_pcm_dmaengine,snd_soc_hdmi_codec,snd_soc_core
snd_timer              28672  1 snd_pcm
rk_crypto              24576  0
dw_wdt                 16384  0
snd                    53248  4 snd_soc_hdmi_codec,snd_timer,snd_soc_core,snd_pcm
soundcore              16384  1 snd
sch_fq_codel           20480  2
ip_tables              24576  0
gpio_keys              20480  0
 

 

Edited by megaduo
Posted
On 9/19/2020 at 8:27 PM, usual user said:

I had to switch to kernel 5.9.0-rc5 with some panfrost patches from linux-next on top. For Mesa I'm using the current master branch. With this in place I get flawless operation. The only issue left is some sort of memory pressure under heavy GPU use:

I tried both weston and gnome-wayland using kernel 5.8.10 and a freshly compiled Mesa 20.3.0-devel, but the choppy mouse is still there :unsure:

Posted
On 9/21/2020 at 2:51 PM, megaduo said:

Hello

I want to install a tvheadend server (vdr) on my Huawei EC6108 V9E (rk3228b) with a USB DVB stick, a Geniatech T230C (officially supported by the kernel). Unfortunately it doesn't work: it seems some kernel module or driver/firmware related to dvb-usb is not being loaded properly. Here are the dmesg and lsmod logs.

 

[...cut...]

I don't see any error in your dmesg log (please put your logs in a "spoiler" section), but if you think there is a missing module I can enable it. Maybe you are missing the firmware for the DVB USB stick.
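
A quick way to check is to look for firmware load messages and for DVB firmware files already present on the filesystem, e.g.:

dmesg | grep -i -E 'dvb|firmware'
ls /lib/firmware | grep -i dvb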
