usual user

Members
  • Content Count: 33
  • Joined

About usual user

  • Rank: Advanced Member

  1. Oh, this reminds me that uboot has a problem supporting keyboards on the OTG USB port. Try the other one; it should be the lower one. The timeout is in 1/10 sec, i.e. timeout 100 == 10 seconds. It is not really good news. It confirms my initial suspicion that the boot.scr magic is the culprit. Looking into the boot.scr even shows a hard-coded quad-core DTB name without a counterpart for the Solo/DualLite. I don't have much interest in the legacy boot.scr, so I've reached a point where I can't help much anymore. The armbian boot.scr maintainer now has to jump in and find out how to fix this. If this doesn't happen, you are already an advanced distro-boot user and can keep using it. At least we have narrowed down what has to be fixed.
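     For reference, that timeout is a single line in extlinux.conf; the value here is only an example:

        # value is in 1/10 s, so 100 gives 10 seconds at the "Enter choice:" prompt
        timeout 100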
  2. OK, this confirms the boot.scr magic is really not working. To rule out that my kernel command line modification did the trick, fire up your favorite editor and modify the parameters in extlinux.conf. Or, alternatively, replace your extlinux.conf with the attached one. There I added a second boot stanza that boots with the same kernel command line parameters as boot.scr. Additionally, in the first stanza I have set the kernel loglevel back to the one used by boot.scr. This will show the black screen a little longer, as no kernel logging will be presented, but the boot time should stay the same. Nobody wants log output, because it could help identify issues. Last but not least, I increased the "Enter choice:" timeout to give you some more time to react; after the timeout the first stanza is chosen automatically.
     Boot the image and at the "Enter choice:" prompt press the "2" and "ENTER" keys to select the boot stanza with the boot.scr kernel command-line parameters. If this boots for you, then the influence of the kernel command line parameters can be excluded. If the boot.scr method had worked, the boot behavior would have been exactly the same as with distro-boot now; it is due to the boot scheme that armbian uses. Generic Fedora uses a similar one; you can see it if you choose the second boot stanza when booting, but you need to be fast because the "Enter choice:" timeout is only two seconds there. For a little more background start reading here.
     This works because the existence of /boot/extlinux/extlinux.conf changes the boot method to distro-boot, and boot.scr is no longer taken into account. For distro-boot only the kernel, the DTB and an optional initramfs are required, and only your preferred editor is needed for administration, as extlinux.conf is just a plain text file; no tinkering with special uboot tools. This isn't a workaround, it is simply a different boot method with uboot. extlinux.conf
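     For readers without the attachment, the structure looks roughly like this. It is only a sketch: the kernel/initrd paths, the DTB directory, the label names and the append lines are placeholders, not the exact content of the attached file.

        # "Enter choice:" timeout in 1/10 s; the default label is chosen automatically afterwards
        timeout 300
        default armbian-distro-boot

        label armbian-distro-boot
          linux /boot/zImage
          initrd /boot/uInitrd
          fdtdir /boot/dtb
          append root=PARTUUID=12345678-01 rootwait console=tty1 loglevel=1

        label armbian-bootscr-cmdline
          linux /boot/zImage
          initrd /boot/uInitrd
          fdtdir /boot/dtb
          append <kernel command line taken over from boot.scr>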
  3. Whoops!! What have I done? I had not expected this to lead to a working system; my intention was only to increase the loglevel. Anyway, let's recap what I have modified:
     - Switched from legacy boot.scr to distro-boot (extlinux.conf), as the boot feedback is more informative
     - Increased the kernel loglevel to 7
     - Used PARTUUID as the root partition identifier (see the blkid sketch below)
     - Enabled instant fbcon activation
     - Revised some boot parameters
     The black periods are probably due to the performance of the single-core SoC while the initial ramdisk is loaded or the kernel is initialized. I don't think the kernel command line modifications make much difference; my main suspicion is that the boot.scr magic is the culprit and the change to distro-boot makes the difference. Did you wait long enough for the unchanged image to boot? The black period would be quite long without log feedback. We can now take back one change after another to identify what the real difference is.
     Offtopic: quote "PS2: with a remote login a lxqt from my Ubuntu laptop, I have a normal QWERTY setup." Out of interest, is the lxqt desktop not working on the local cubox-i display? You'll probably have to wait quite a while, because a single-core SoC takes a while to load everything.
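     If you want to check the PARTUUID part yourself, blkid shows the identifier that goes behind root=; the device name and the value here are just examples:

        blkid /dev/mmcblk0p1
        # /dev/mmcblk0p1: UUID="..." TYPE="ext4" PARTUUID="0ca6cfab-01"
        # extlinux.conf then gets: append root=PARTUUID=0ca6cfab-01 ...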
  4. OK, Armbian_20.02.7_Cubox-i_bionic_current_5.4.28_desktop.7z downloaded, and I played a little with it on my quad-core cubox-i. It is working for me just like it does for igor. As logging is suppressed as much as possible, let's try to increase it in a next step. To do this, create a new directory named "extlinux" in the boot directory and copy the attached extlinux.conf file into it (see the commands below). Boot the image and report the logging. Regarding the "WARNING: CPU: 0 PID: 1 at arch/arm/mm/dump.c:248 note_page+0x160/0x324 arm/mm: Found insecure W+X mapping at address 0xe0839000": I also get it with recent kernels since some time, but I was too lazy to track it down and it seems to do no harm. extlinux.conf
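     A sketch of those two steps, assuming the kernel lives under /boot on this image and the attached file sits in the current directory:

        mkdir /boot/extlinux
        cp extlinux.conf /boot/extlinux/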
  5. This option (# CONFIG_SOC_IMX6SLL is not set) is not set for the kernel in the Fedora image either. To work out what is missing I need the dmesg output I asked for, to see the details of the device.
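     If you want to compare kernel configurations yourself, this is one way to look an option up (assuming the distribution ships the config under /boot, as Fedora and Armbian usually do):

        grep IMX6 /boot/config-$(uname -r)
        # or, if the running kernel exposes its config:
        zcat /proc/config.gz | grep IMX6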
  6. Yes. The image is based on stock Fedora Minimal with some additional packages installed, some cubox-i optimizing configurations applied, and some personal preferences set up. It is showcasing what is possible with pure mainline, to my knowledge. This confirms mainline works as expected if it is configured properly; no workarounds like forcing a specific DTB are required. Provide the exact Armbian image name you want to get working, so I am able to take a look at what may be missing. Also, please start up Fedora again and attach the file produced by "dmesg > dmesg.txt" to your next post.
  7. So, for a quick test I uploaded an image that is configured for the cubox-i here. Put it on a microSD with "xz -dcv Fedora-Minimal-armhfp-31-1.9-trial.raw.xz > /dev/sdX", where "/dev/sdX" has to be replaced by the device the microSD shows up as during the transfer. When it works for you properly, I know pure mainline support is working and we can start to figure out what is missing for armbian.
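     A slightly more complete sketch of the transfer; the device name is an example, double-check it with lsblk first, and the redirect needs root:

        lsblk        # identify the microSD, e.g. /dev/sdX
        sudo sh -c 'xz -dcv Fedora-Minimal-armhfp-31-1.9-trial.raw.xz > /dev/sdX'
        sync         # make sure everything has been written to the card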
  8. I'm a long-time Fedora cubox-i user, and basically all support is available in mainline. Out of curiosity I'm wondering why it is not working for a debian-derived userspace as well. As I do not have a single-core device, are you interested in trying a Fedora image to see if it is working for you? If it is working, we may be able to work out what is missing for armbian. You just need to know how to manage files in the rootfs and modify configuration files with your favorite editor, i.e. basic Linux knowledge.
  9. That was what I was looking for to see if zero copy will work. But the patches
     0001-avutil-add-av_buffer_pool_flush.patch
     0002-Add-common-V4L2-request-API-code.patch
     0003-Add-V4L2-request-API-mpeg2-hwaccel.patch
     0004-Add-V4L2-request-API-h264-hwaccel.patch
     0005-Add-V4L2-request-API-hevc-hwaccel.patch
     0006-Add-V4L2-request-API-vp8-hwaccel.patch
     0007-Add-and-use-private-linux-headers-for-V4L2-request-A.patch
     0008-hwcontext_drm-do-not-require-drm-device.patch
     0009-avcodec-h264-parse-idr_pic_id.patch
     0010-avcodec-h264-parse-ref_pic_marking_size_in_bits-and-.patch
     0011-HACK-add-dpb-flags-for-reference-usage-and-field-pic.patch
     0012-WIP-v4l2-request-rolling-timestamps.patch
     do not apply cleanly against current master, and the build fails later on with an error. I guess at least a rebase is required. Oh, by the way, it is "--enable-libudev"; "--enable-udev" fails as unknown.
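     For reference, roughly how I tried it; "--enable-libudev" comes from the patch set, and the rest of the configure line stands for whatever options you normally build with:

        git am 00*.patch                   # stops at the first patch that no longer applies cleanly
        ./configure --enable-libudev ...   # "--enable-udev" is rejected as unknown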
  10. According to the commit log there should be much more, e.g. see here. Better check with "v4l2-ctl --device=/dev/videoX --all", where X has to be replaced by the proper number. Oh wait, now you're talking about encoding, whereas before it was about decoding.
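      To find the proper number in the first place, something like this helps (the device path is just an example):

        v4l2-ctl --list-devices               # shows which /dev/videoX belongs to which driver
        v4l2-ctl --device=/dev/video0 --all   # then query the one you are interested in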
  11. For all mainline v4l2 supported devices, hardware accelerated video pipelines already work with the gstreamer1 framework, i.e. a gstreamer-based media player will make use of v4l2 codecs and other components like e.g. scalers. Mainline ffmpeg unfortunately does not have full v4l2 support so far. There is this patch set where several patches already landed in ffmpeg master, but some still have review comments and no longer apply cleanly. Thus, all user space programs that rely on ffmpeg codec support will not be able to use the v4l2 hardware acceleration correctly. In the SOC world the GPU is usually not much involved in video pipelines: GPUs are for graphics acceleration, while VPUs are the video accelerators, and those are nowadays exposed by the mainline kernel via v4l2 with dmabuf support for zero copy.
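      A quick way to check what your gstreamer installation offers and to test it; playbin picks a suitable decoder automatically, and the file path is just an example:

        gst-inspect-1.0 | grep v4l2                  # list the available v4l2 elements (decoders, converters, ...)
        gst-launch-1.0 playbin uri=file:///home/user/test.mp4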
  12. OK, now that you have described your network topology in more detail, it is obvious what is going on. Your router does not connect the WiFi and Ethernet segments in bridged mode but handles them as two separate segments, where routing takes place and firewall rules get applied. Ports for service discovery (mDNS) and LPR (printer) seem to be enabled already, but those for IPP Everywhere or AirPrint (ipp) are not. With everything properly set up and cups-browsed from the cups-filters package running, there should be no user intervention required to add the printer: cups-browsed will discover it, set up a suitable queue, and it will show up as a printer to use in applications.
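      To verify from the WiFi side whether those ports make it through the router, a quick check could look like this (the printer address is a placeholder, and nc/avahi-browse need to be installed):

        avahi-browse -rt _ipp._tcp    # should list the printer if mDNS crosses the segments
        nc -zv 192.168.1.50 631       # IPP / AirPrint port
        nc -zv 192.168.1.50 515       # LPR port, which already seems to pass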
  13. Now you know you have a properly working CUPS. As hubs, switches or bridges usually do not apply any firewall rules when the devices are in the same network segment, there is nothing you should blame your router for. Having *no* print spool service on a device is quite common, and rightfully a firewall does not open any ports for any specific print communication by default. Service discovery primarily has nothing to do with print protocols; it is e.g. done by zeroconf (avahi) using different ports, and as it is used by several services it is quite common for a firewall to have the involved ports open by default. To check whether the firewall on the OPi is the culprit, temporarily disable it entirely and try printing via the network again.
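      How to disable it temporarily depends on which firewall the image uses; a sketch for the two common cases:

        # firewalld
        sudo systemctl stop firewalld   # afterwards: sudo systemctl start firewalld
        # ufw
        sudo ufw disable                # afterwards: sudo ufw enable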
  14. As can be seen from the log (Executing backend "/usr/lib/cups/backend/dnssd"...), the cups backend is being started over and over again without succeeding in transferring any data to the printer. Maybe some firewall rules or something else network-related is not set up properly. To verify this, connect the printer via USB and check whether printing works then.
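      Once it is connected via USB, a quick check from the command line; the queue name is whatever CUPS created for the USB connection:

        sudo lpinfo -v                  # the printer should now show up with a usb:// device URI
        lpstat -p                       # list the configured queues and their state
        lp -d <queue> /etc/hostname     # send a trivial test job to that queue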
  15. The first step would be to enable "Save debugging information for troubleshooting" in the administration tab and inspect the log files.
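      The same can be done from the command line if you prefer; the log location is the standard CUPS one:

        sudo cupsctl --debug-logging           # same switch as the web interface checkbox
        sudo tail -f /var/log/cups/error_log   # watch it while submitting a job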