grunlab

Members
  • Posts

    15
  • Joined

  • Last visited

Everything posted by grunlab

  1. Thank you SteeMan, Igor. The new image Armbian_23.02.3_Odroidxu4_bullseye_current_5.4.239_minimal.img.xz is now working perfectly.
  2. There is indeed an issue with this image (it contains a qemu core file at its root): https://mirrors.dotsrc.org/armbian-dl/odroidxu4/archive/Armbian_23.02.1_Odroidxu4_bullseye_current_5.4.232_minimal.img.xz
     I've rebuilt the image myself (https://github.com/armbian/build#getting-started) and the newly built image works like a charm (boot: OK, installing the OS on SATA: OK).
     While waiting for the image to be updated on the Armbian website, people facing the same issue can get the image here: https://dl.grunlab.net/armbian/
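The rebuild mentioned above can be sketched roughly as below. This is a hedged sketch, not the exact invocation used: the parameter names (BOARD, BRANCH, RELEASE, BUILD_MINIMAL, BUILD_DESKTOP, KERNEL_CONFIGURE) follow the Armbian build documentation, but exact flags may differ between build-framework versions. The actual clone-and-build steps are commented out since they need network access and a supported build host; the script only prints the invocation it would run.

```shell
#!/bin/sh
# Sketch of rebuilding the Odroid XU4 Bullseye minimal image with
# https://github.com/armbian/build (parameter names per the Armbian docs;
# verify against the version of the build framework you check out).
set -e
ARGS="BOARD=odroidxu4 BRANCH=current RELEASE=bullseye BUILD_MINIMAL=yes BUILD_DESKTOP=no KERNEL_CONFIGURE=no"

# Print the invocation (and keep a copy for reference):
echo "./compile.sh $ARGS" | tee build-cmd.txt

# Uncomment to actually clone and build (requires a supported host):
# git clone --depth=1 https://github.com/armbian/build
# cd build && ./compile.sh $ARGS
```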
  3. Hi, I have several Odroid HC1 boards (forming a Kubernetes cluster of 8 nodes) currently running the Armbian Buster minimal image. I would like to upgrade the cluster (node by node) to Armbian Bullseye minimal, but I hit the following issue when I tried to reinstall the first node:
     - Image used: https://mirrors.dotsrc.org/armbian-dl/odroidxu4/archive/Armbian_23.02.1_Odroidxu4_bullseye_current_5.4.232_minimal.img.xz
     - Image flashed onto an SD card using BalenaEtcher
     - SD card plugged into the board + power up --> the OS does not boot (and does not respond to ping)
     Here are the different things I tried:
     - Flashing the image several times, even with different SD cards and boards --> same issue, unable to boot
     - Rolling back and flashing the previous Armbian Buster minimal image --> OK, able to boot (so I don't think the issue is at the SD card or board level)
     - Flashing the non-minimal Armbian Bullseye image (https://mirrors.dotsrc.org/armbian-dl/odroidxu4/archive/Armbian_23.02.1_Odroidxu4_bullseye_current_5.4.232.img.xz) --> with this image I can boot, but after installing the OS on the SATA drive (using armbian-config: System / Install / Boot from SD, system on SATA), the OS no longer boots.
     Unfortunately, I don't have a USB UART adapter to provide logs. I suspect an issue at the image level: after flashing the image onto the SD card, I noticed that a file "qemu_fstype_20230226-211008_10215.core" is present at the root of the SD card ... maybe a sign that the image build failed, or something like that!? Thank you for your support. I would be happy to provide additional information if needed. Adrien
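A quick way to rule out a corrupted download before suspecting the build itself is to test the xz stream and compare against the published checksum. This is a sketch with a throwaway stand-in file so the commands are runnable anywhere; in practice, IMG would point at the real .img.xz and the .sha file would be fetched from the Armbian mirror rather than generated locally.

```shell
#!/bin/sh
# Sketch: sanity-check a downloaded image before flashing.
# A throwaway file stands in for the real .img.xz so this runs anywhere;
# in practice, point IMG at the download and fetch the .sha from the mirror.
set -e
IMG=sample.img.xz
printf 'dummy image payload' | xz > "$IMG"   # stand-in for the download
sha256sum "$IMG" > "$IMG.sha"                # normally published next to the image

xz -t "$IMG" && echo "xz archive OK"         # integrity of the compressed stream
sha256sum -c "$IMG.sha"                      # compare against the published checksum
```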
  4. FYI:
     - I've reinstalled one of the cluster nodes using the last image Armbian_20.02.1_Odroidxu4_buster_current_5.4.19_minimal.img:

       adrien@bastion:~ $ kc get node worker-03 -o wide
       NAME        STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION     CONTAINER-RUNTIME
       worker-03   Ready    <none>   11m   v1.17.3   192.168.0.106   <none>        Debian GNU/Linux 10 (buster)   5.4.19-odroidxu4   docker://19.3.8

     - The pids cgroup is enabled on this image:

       adrien@worker-03:~$ cat /proc/cgroups | grep pids
       pids 3 97

     - No more warnings in the Kubernetes logs, and the node looks stable for the moment :-)
     I think this topic can be closed now.
  5. OK, OK ... no problem :-) I've been living without this kernel feature for a while and I didn't notice any real impact at the k8s level, except those warnings in the logs ... I will try to investigate on my side ...
  6. Are you sure the normal stable image I downloaded for the test was built with the pids cgroup enabled in the kernel config file?
     - The pids cgroup was enabled in the kernel config on 2019-12-13: https://github.com/armbian/build/commit/755388147d65a12d39b31898be29434f999700c8#diff-c793c3265bc32ca486ae790ba3c0ca59
     - But the config change was rolled back on 2020-01-04: https://github.com/armbian/build/commit/c1fbf0ab087e1be71a4b8edcf4736f5a6ffe211a#diff-c793c3265bc32ca486ae790ba3c0ca59
     That would certainly explain why the pids cgroup was not enabled in the image I downloaded ... no? FYI, I'm currently trying (a first for me!) to build an image following this doc: https://docs.armbian.com/Developer-Guide_Building-with-Docker/ I will let you know ...
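The config question above can be answered directly by grepping the kernel config for CONFIG_CGROUP_PIDS, which is what those two commits toggle. This sketch inlines a sample config fragment so it is runnable anywhere; against a real tree you would grep the board's config file in the build repo (or /proc/config.gz on a kernel built with IKCONFIG).

```shell
#!/bin/sh
# Sketch: check whether a kernel config enables the pids cgroup controller.
# A sample config fragment is inlined for illustration; replace it with the
# real config file (e.g. the odroidxu4 config in the armbian/build tree).
cat > sample-config <<'EOF'
CONFIG_CGROUPS=y
# CONFIG_CGROUP_PIDS is not set
CONFIG_CGROUP_FREEZER=y
EOF

if grep -q '^CONFIG_CGROUP_PIDS=y' sample-config; then
    echo "pids cgroup enabled"
else
    echo "pids cgroup NOT enabled"
fi
```

With the rolled-back config the second branch fires, matching the "not set" line that kernel configs use for disabled options.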
  7. I've downloaded the latest stable image based on kernel 4.14.y ... this one: https://dl.armbian.com/odroidxu4/archive/Armbian_19.11.6_Odroidxu4_buster_legacy_4.14.161.7z (build date: 2020-01-04) and reinstalled one of the cluster nodes (worker-02). The pids cgroup does not appear to be enabled:

       adrien@worker-02:~$ uname -a
       Linux worker-02 4.14.161-odroidxu4 #40 SMP PREEMPT Sat Jan 4 12:41:15 CET 2020 armv7l GNU/Linux
       adrien@worker-02:~$ cat /proc/cgroups
       #subsys_name hierarchy num_cgroups enabled
       cpuset 2 14 1
       cpu 7 97 1
       cpuacct 7 97 1
       blkio 8 97 1
       memory 6 203 1
       devices 5 97 1
       freezer 3 14 1
       net_cls 4 14 1
       net_prio 4 14 1
       adrien@bilbon:~/git/k8s$ kubectl get events -n default | grep worker-02 | grep pids
       14s   Warning   FailedNodeAllocatableEnforcement   node/worker-02   Failed to update Node Allocatable Limits ["kubepods"]: failed to set supported cgroup subsystems for cgroup [kubepods]: failed to find subsystem mount for required subsystem: pids

     Did I miss something?
  8. Hi Igor, belegdol ... happy new year to both of you! I see that a testing version of Armbian Buster based on kernel 5.4.y is now available. I have a spare node in the cluster that can be used to test this image. Before reinstalling the node, could you check/confirm whether the pids cgroup is enabled in this testing image?
  9. Hi Igor, Belegdol, If I can help at some point (testing something or whatever), don't hesitate to ask me! Regards, Adrien
  10. OK, thank you for the clarification. I prefer not to use the beta repo and to wait a bit longer ... any idea when the update will be pushed to the stable repo?
  11. Hi Igor, The update is still not reflected in the repo ... is that normal?

       adrien@master-01:~$ uname -a
       Linux master-01 4.14.150-odroidxu4 #1 SMP PREEMPT Mon Oct 28 07:56:57 CET 2019 armv7l GNU/Linux
       adrien@master-01:~$ date
       Mon Dec 16 20:50:42 CET 2019
       adrien@master-01:~$ sudo apt update
       Hit:1 http://security.debian.org buster/updates InRelease
       Hit:3 http://repo.zabbix.com/zabbix/4.4/raspbian buster InRelease
       Hit:4 http://httpredir.debian.org/debian buster InRelease
       Hit:5 http://httpredir.debian.org/debian buster-updates InRelease
       Hit:6 http://httpredir.debian.org/debian buster-backports InRelease
       Hit:2 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
       Hit:8 https://download.docker.com/linux/debian buster InRelease
       Hit:7 https://apt.armbian.com buster InRelease
       Reading package lists... Done
       Building dependency tree
       Reading state information... Done
       All packages are up to date.

      Thank you, Regards, Adrien
  12. Hi Igor, Great :-) ... thank you so much! Do you have an idea when the update will be available in the armbian repo? Regards, Adrien
  13. Hi Armbian Support Team, I'm running a Kubernetes cluster on top of 8 Odroid HC1 boards (3 master nodes & 5 worker nodes).
      - The Armbian version used:

        adrien@bilbon:~/git/k8s$ kc get node master-01 -o wide
        NAME        STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION       CONTAINER-RUNTIME
        master-01   Ready    master   44d   v1.17.0   192.168.0.101   <none>        Debian GNU/Linux 10 (buster)   4.14.150-odroidxu4   docker://19.3.5

      - I'm currently seeing the following events at the Kubernetes level:

        adrien@bilbon:~/git/k8s$ kc get events --all-namespaces
        NAMESPACE   LAST SEEN   TYPE      REASON                             OBJECT           MESSAGE
        default     4m55s       Warning   FailedNodeAllocatableEnforcement   node/master-01   Failed to update Node Allocatable Limits ["kubepods"]: failed to set supported cgroup subsystems for cgroup [kubepods]: failed to find subsystem mount for required subsystem: pids
        default     2m5s        Warning   FailedNodeAllocatableEnforcement   node/master-02   Failed to update Node Allocatable Limits ["kubepods"]: failed to set supported cgroup subsystems for cgroup [kubepods]: failed to find subsystem mount for required subsystem: pids
        default     4m35s       Warning   FailedNodeAllocatableEnforcement   node/master-03   Failed to update Node Allocatable Limits ["kubepods"]: failed to set supported cgroup subsystems for cgroup [kubepods]: failed to find subsystem mount for required subsystem: pids
        default     18s         Warning   FailedNodeAllocatableEnforcement   node/worker-01   Failed to update Node Allocatable Limits ["kubepods"]: failed to set supported cgroup subsystems for cgroup [kubepods]: failed to find subsystem mount for required subsystem: pids
        ...

      - After a quick search, it looks like the issue is linked to the fact that the pids cgroup is not enabled in the Armbian kernel. That is indeed the case ... no pids cgroup found:

        adrien@master-01:~$ cat /proc/cgroups
        #subsys_name hierarchy num_cgroups enabled
        cpuset 5 29 1
        cpu 2 69 1
        cpuacct 2 69 1
        blkio 8 69 1
        memory 4 125 1
        devices 3 69 1
        freezer 7 29 1
        net_cls 6 29 1
        net_prio 6 29 1

      - This issue has also been reported on Raspberry Pi in the same Kubernetes context, where it was solved by enabling the pids cgroup at the Raspbian kernel level: https://github.com/raspberrypi/linux/pull/2968
      So, would it be possible to enable the pids cgroup at the kernel level in the next release of Armbian? Thank you for your support, Regards, Adrien
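The runtime check used throughout this thread can be made a little more precise: /proc/cgroups has four columns (subsys_name, hierarchy, num_cgroups, enabled), so it is worth distinguishing "controller missing" from "controller present but disabled". This sketch inlines a sample listing so it is runnable anywhere; on a node you would point it at /proc/cgroups itself.

```shell
#!/bin/sh
# Sketch: distinguish a missing pids controller from a disabled one.
# sample-cgroups mimics the /proc/cgroups format; on a real node,
# run the awk program against /proc/cgroups instead.
cat > sample-cgroups <<'EOF'
#subsys_name	hierarchy	num_cgroups	enabled
cpuset	5	29	1
memory	4	125	1
pids	3	97	1
EOF

awk '$1 == "pids" { print ($4 == 1 ? "pids enabled" : "pids present but disabled"); found = 1 }
     END { if (!found) print "pids missing" }' sample-cgroups
```

With the sample data this prints "pids enabled"; on the 4.14.x images discussed above, the pids row is absent entirely and the END branch would fire.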