gounthar

Members
  • Posts

    424
  • Joined

  • Last visited

Profile Information

  • Gender
    Male
  • Location
    Seclin, France
  • Interests
    ARM

Contact Methods

  • Website URL
    https://Bruno.verachten.fr
  • Github
    https://github.com/gounthar

Recent Profile Visitors

7987 profile views
  1. Thanks! May I ask you what kind of NVMe drive you have installed?
  2. Hello there, 👋 I've connected an NVMe drive that I used in my PC to the Banana Pi F3, but it's not being detected. I'm using a USB-C PD power supply (Pine64 PinePower Desktop) to ensure sufficient power, yet it's still not recognized. The drive might be faulty. Has anyone successfully used an NVMe drive with this board? If so, could you share which NVMe drives are known to work with it? Thanks!

     (Armbian ASCII banner) v25.5.1 for BananaPi BPI-F3 running Armbian Linux 6.6.87-current-spacemit
     Packages:     Debian rolling (trixie)
     IPv4:         (LAN) 192.168.1.185, 192.168.1.83 (WAN) 82.65.177.146
     IPv6:         2a01:e0a:5ed:6230:fcfe:feff:fef8:f4be, 2a01:e0a:5ed:6230:3718:a97e:8a7b:a2c6 (WAN) 2a01:e0a:5ed:6230:9f8a:b5c1:f849:3f6f
     Performance:  Load: 2%  Up time: 1:31  Memory usage: 2% of 15.51G  CPU temp: 45°C  Usage of /: 12% of 113G
     Commands:     Configuration: armbian-config  Monitoring: htop
     [...]

     # Check PCIe-related kernel messages
     sudo dmesg | grep -E "(pcie|nvme|k1x-dwc)"
     # Check PCIe power management
     cat /sys/module/nvme_core/parameters/default_ps_max_latency_us
     # Monitor PCIe during boot
     sudo journalctl -b | grep -i pcie

     [ 0.404937] k1x-dwc-pcie ca400000.pcie: has no power on gpio.
     [ 0.407218] k1x-dwc-pcie ca400000.pcie: host bridge /soc/pcie@ca400000 ranges:
     [ 0.407260] k1x-dwc-pcie ca400000.pcie: IO 0x009f002000..0x009f101fff -> 0x009f002000
     [ 0.407280] k1x-dwc-pcie ca400000.pcie: MEM 0x0090000000..0x009effffff -> 0x0090000000
     [ 0.507413] k1x-dwc-pcie ca400000.pcie: iATU: unroll T, 8 ob, 8 ib, align 4K, limit 4G
     [ 1.507664] k1x-dwc-pcie ca400000.pcie: Phy link never came up
     [ 1.508176] k1x-dwc-pcie ca400000.pcie: PCI host bridge to bus 0001:00
     [ 1.514835] pcieport 0001:00:00.0: PME: Signaling with IRQ 65
     [ 1.515235] pcieport 0001:00:00.0: AER: enabled with IRQ 65
     [ 1.515884] k1x-dwc-pcie ca800000.pcie: has no power on gpio.
     [ 1.518096] k1x-dwc-pcie ca800000.pcie: host bridge /soc/pcie@ca800000 ranges:
     [ 1.518126] k1x-dwc-pcie ca800000.pcie: IO 0x00b7002000..0x00b7101fff -> 0x00b7002000
     [ 1.518150] k1x-dwc-pcie ca800000.pcie: MEM 0x00a0000000..0x00afffffff -> 0x00a0000000
     [ 1.518164] k1x-dwc-pcie ca800000.pcie: MEM 0x00b0000000..0x00b6ffffff -> 0x00b0000000
     [ 1.618263] k1x-dwc-pcie ca800000.pcie: iATU: unroll T, 8 ob, 8 ib, align 4K, limit 4G
     [ 2.618384] k1x-dwc-pcie ca800000.pcie: Phy link never came up
     [ 2.618513] k1x-dwc-pcie ca800000.pcie: PCI host bridge to bus 0002:00
     [ 2.625055] pcieport 0002:00:00.0: PME: Signaling with IRQ 69
     [ 2.625444] pcieport 0002:00:00.0: AER: enabled with IRQ 69
     [ 8.000939] systemd[1]: Starting modprobe@nvme_fabrics.service - Load Kernel Module nvme_fabrics...
     [ 8.348576] systemd[1]: modprobe@nvme_fabrics.service: Deactivated successfully.
     [ 8.349624] systemd[1]: Finished modprobe@nvme_fabrics.service - Load Kernel Module nvme_fabrics.
     [ 9.227876] systemd[1]: nvmefc-boot-connections.service - Auto-connect to subsystems on FC-NVME devices found during boot was skipped because of an unmet condition check (ConditionPathExists=/sys/class/fc/fc_udev_device/nvme_discovery).

     100000

     juil. 04 19:00:44 bananapif3 kernel: k1x-dwc-pcie ca400000.pcie: has no power on gpio.
     juil. 04 19:00:44 bananapif3 kernel: k1x-dwc-pcie ca400000.pcie: host bridge /soc/pcie@ca400000 ranges:
     juil. 04 19:00:44 bananapif3 kernel: k1x-dwc-pcie ca400000.pcie: IO 0x009f002000..0x009f101fff -> 0x009f002000
     juil. 04 19:00:44 bananapif3 kernel: k1x-dwc-pcie ca400000.pcie: MEM 0x0090000000..0x009effffff -> 0x0090000000
     juil. 04 19:00:44 bananapif3 kernel: k1x-dwc-pcie ca400000.pcie: iATU: unroll T, 8 ob, 8 ib, align 4K, limit 4G
     juil. 04 19:00:44 bananapif3 kernel: k1x-dwc-pcie ca400000.pcie: Phy link never came up
     juil. 04 19:00:44 bananapif3 kernel: k1x-dwc-pcie ca400000.pcie: PCI host bridge to bus 0001:00
     juil. 04 19:00:44 bananapif3 kernel: pcieport 0001:00:00.0: PME: Signaling with IRQ 65
     juil. 04 19:00:44 bananapif3 kernel: pcieport 0001:00:00.0: AER: enabled with IRQ 65
     juil. 04 19:00:44 bananapif3 kernel: k1x-dwc-pcie ca800000.pcie: has no power on gpio.
     juil. 04 19:00:44 bananapif3 kernel: k1x-dwc-pcie ca800000.pcie: host bridge /soc/pcie@ca800000 ranges:
     juil. 04 19:00:44 bananapif3 kernel: k1x-dwc-pcie ca800000.pcie: IO 0x00b7002000..0x00b7101fff -> 0x00b7002000
     juil. 04 19:00:44 bananapif3 kernel: k1x-dwc-pcie ca800000.pcie: MEM 0x00a0000000..0x00afffffff -> 0x00a0000000
     juil. 04 19:00:44 bananapif3 kernel: k1x-dwc-pcie ca800000.pcie: MEM 0x00b0000000..0x00b6ffffff -> 0x00b0000000
     juil. 04 19:00:44 bananapif3 kernel: k1x-dwc-pcie ca800000.pcie: iATU: unroll T, 8 ob, 8 ib, align 4K, limit 4G
     juil. 04 19:00:44 bananapif3 kernel: k1x-dwc-pcie ca800000.pcie: Phy link never came up
     juil. 04 19:00:44 bananapif3 kernel: k1x-dwc-pcie ca800000.pcie: PCI host bridge to bus 0002:00
     juil. 04 19:00:44 bananapif3 kernel: pcieport 0002:00:00.0: PME: Signaling with IRQ 69
     juil. 04 19:00:44 bananapif3 kernel: pcieport 0002:00:00.0: AER: enabled with IRQ 69

     poddingue@bananapif3 ~ $
     # Enable all available kernel modules for NVMe
     sudo modprobe nvme
     sudo modprobe nvme-core
     sudo modprobe nvme-pci
     # Check if modules loaded
     lsmod | grep nvme

     modprobe: FATAL: Module nvme-pci not found in directory /lib/modules/6.6.87-current-spacemit
     nvme_fabrics           118784  0
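     Since both controllers report "Phy link never came up", a few extra checks can help separate a dead drive from a link that never trains. A minimal diagnostic sketch, assuming pciutils is installed and reusing the controller paths from the dmesg output above:

     # Does anything beyond the two root ports enumerate at all?
     lspci -nn
     # Did the kernel bind an NVMe controller? (missing directory or empty output means no)
     ls /sys/class/nvme/ 2>/dev/null || echo "no NVMe controller bound"
     # Are the PCIe nodes enabled in the device tree? (a missing status property usually means enabled)
     for n in /proc/device-tree/soc/pcie@ca400000 /proc/device-tree/soc/pcie@ca800000; do
       if [ -f "$n/status" ]; then
         printf '%s: %s\n' "$n" "$(tr -d '\0' < "$n/status")"
       else
         printf '%s: no status property (enabled by default)\n' "$n"
       fi
     done
     # After reseating the drive, force a rescan instead of rebooting
     echo 1 | sudo tee /sys/bus/pci/rescan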
  3. Thanks a lot for your help, Igor. That did not do the trick, as the available packages are still in the old form (docker.io instead of docker-ce, for example). Nonetheless, I now have a working (but old) Docker binary, which is enough for my use case. Next, I will try to build a more recent Docker binary. The remaining problem to solve will be getting the NVMe drive to work. I'm sorry your F3 died.
  4. I tried to install Docker via armbian-config --CON02 on Trixie, but I got quite a lot of errors during the installation. Here is the board information in case it helps: https://paste.armbian.com/xozimixeqi . Thanks.
  5. Thank you so much. 🤗 Let's buy the board, then.
  6. Hello there, I recently gained remote access to a Banana Pi F3 to test it, and everything went smoothly. The board was running Bianbu Linux, and I was able to install and use Docker without issues. Now, I'm considering purchasing the board and was wondering if the Armbian image would also support running Docker. Do you have any insights on this? Thanks!
  7. Time to find my UART-to-USB adapters, I guess. 😁 I tried the images linked in the downloads section, and not much happens. 🤔
  8. If I were to try a SoQuartz blade/Quartz64 these days, where could I find a recent image? I haven't been able to find any on the downloads page. Or should I use an image for another model that would work on this one? Thanks.
  9. I'm happy to see the support for this board is progressing. 🤗 @tiziano000, where did you find an adapter to connect to the GPIO pins? I have an SPI screen that I can't connect for the time being because of the strange (to me) spacing of the GPIO pins. 🤷‍♂️
  10. Hello there, I'm trying to get an I2C SSD1306 screen to work with my Radxa Zero, without much luck for the time being. https://paste.armbian.com/cirulemeho I found two I2C buses, 3 and 5.

      sudo i2cdetect -y -r 3
           0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
      00:                         -- -- -- -- -- -- -- --
      10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
      20: -- -- UU -- -- -- -- -- -- -- -- -- -- -- -- --
      30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
      40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
      50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
      60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
      70: -- -- -- -- -- -- -- --

      sudo i2cdetect -y -r 5
           0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
      00:                         -- -- -- -- -- -- -- --
      10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
      20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
      30: 30 -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
      40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
      50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
      60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
      70: -- -- -- -- -- -- -- --

      I think bus 5 is for HDMI, right? Would the UU on bus 3 mean my screen is not correctly connected or detected? Thanks.
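      For what it's worth, an SSD1306 normally answers at 0x3c (sometimes 0x3d), and neither address shows up on bus 3 or 5 above; the UU at 0x22 only means that address is already claimed by a kernel driver, not that the screen was found. A small sketch to probe every exposed bus, assuming i2c-tools is installed (the bus number 3 and address 0x3c in the last line are just the common defaults):

      for dev in /dev/i2c-*; do
        bus=${dev#/dev/i2c-}
        echo "=== i2c bus $bus ==="
        sudo i2cdetect -y -r "$bus"
      done
      # Once a bus looks right, a single-address probe confirms the display answers:
      sudo i2cget -y 3 0x3c && echo "something answered at 0x3c on bus 3"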
  11. I also have a cgroup problem, on a more recent version:

      lsb_release -a
      No LSB modules are available.
      Distributor ID: Debian
      Description:    Armbian 23.11.2 bookworm
      Release:        12
      Codename:       bookworm

      uname -a
      Linux k3s-control-plane 6.1.68-current-bcm2711 #1 SMP PREEMPT Sun Dec 17 11:40:38 UTC 2023 aarch64 GNU/Linux

      déc. 29 21:30:12 k3s-control-plane k3s[3689]: I1229 21:30:12.173705 3689 server.go:156] Version: v1.28.4+k3s2
      déc. 29 21:30:12 k3s-control-plane k3s[3689]: I1229 21:30:12.174299 3689 server.go:158] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
      déc. 29 21:30:12 k3s-control-plane k3s[3689]: time="2023-12-29T21:30:12+01:00" level=info msg="To join server node to cluster: k3s server -s https://192.168.1.138:6443 -t ${SERVER_NO>
      déc. 29 21:30:12 k3s-control-plane k3s[3689]: time="2023-12-29T21:30:12+01:00" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token"
      déc. 29 21:30:12 k3s-control-plane k3s[3689]: time="2023-12-29T21:30:12+01:00" level=info msg="To join agent node to cluster: k3s agent -s https://192.168.1.138:6443 -t ${AGENT_NODE_>
      déc. 29 21:30:12 k3s-control-plane k3s[3689]: time="2023-12-29T21:30:12+01:00" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
      déc. 29 21:30:12 k3s-control-plane k3s[3689]: time="2023-12-29T21:30:12+01:00" level=info msg="Run: k3s kubectl"
      déc. 29 21:30:12 k3s-control-plane k3s[3689]: time="2023-12-29T21:30:12+01:00" level=fatal msg="failed to find memory cgroup (v2)"
      déc. 29 21:30:12 k3s-control-plane systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE

      I have modified /boot/firmware/config.txt and rebooted, but it was not enough...

      cat /boot/firmware/config.txt
      # For more options and information see
      # http://rptl.io/configtxt
      # Some settings may impact device functionality. See link above for details
      # For k3s (see https://github.com/k3s-io/k3s/issues/970#issuecomment-548409262)
      cgroup_memory=1
      cgroup_enable=memory
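      The "failed to find memory cgroup (v2)" error usually means the memory controller isn't enabled on the kernel command line; on bcm2711 boards the cgroup_memory=1 cgroup_enable=memory parameters are normally appended to cmdline.txt rather than config.txt, which is what the linked k3s issue points at. A quick check-and-fix sketch, assuming a Raspberry-Pi-style /boot/firmware/cmdline.txt next to the config.txt edited above (adjust the path if your image keeps it elsewhere):

      # Is the memory controller actually available to cgroup v2?
      cat /sys/fs/cgroup/cgroup.controllers   # should list "memory"
      cat /proc/cmdline                       # should contain cgroup_enable=memory cgroup_memory=1
      # If not, append the parameters to the (single-line) kernel command line and reboot:
      sudo sed -i '1 s/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/firmware/cmdline.txt
      sudo reboot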
  12. Hi there, has anyone tried OpenRGB with Armbian? I found a few references to Raspberry Pi and OpenRGB, but nothing really usable. I would like to build a Kubernetes cluster and use an RGB fan to display the status of the cluster (and of course try to cool it down). Thanks.
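      In principle a small loop could map cluster health onto the fan colour. A rough sketch, assuming the OpenRGB command-line client is installed and actually detects the fan's controller, and that device index 0 is the fan (the flags follow OpenRGB's CLI help; adjust to your setup):

      #!/usr/bin/env bash
      # Hypothetical status loop: green when every node is Ready, red otherwise.
      while true; do
        if kubectl get nodes --no-headers | awk '{print $2}' | grep -qv '^Ready$'; then
          openrgb --device 0 --mode static --color FF0000   # at least one node is not Ready
        else
          openrgb --device 0 --mode static --color 00FF00   # all nodes Ready
        fi
        sleep 30
      done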
  13. Where did you find the image? Thanks.