
Debootstrap base system second stage failed


initBasti


Hello,

 

I ran into an issue yesterday while trying to create a custom image with Debian Bullseye, a specific 5.11-rc1 kernel tree, and a couple of extra packages. The output looked like this:

[ o.k. ] Installing base system [ Stage 2/2 ]
/bin/bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
W: Failure trying to run:  /sbin/ldconfig
W: See //debootstrap/debootstrap.log for details
[ error ] ERROR in function create_rootfs_cache [ debootstrap.sh:177 ]
[ error ] Debootstrap base system second stage failed 
[ o.k. ] Process terminated 
[ o.k. ] Unmounting [ /home/basti/Kernel/build/.tmp/rootfs-dev-nanopct4-bullseye-no-yes/ ]
[ error ] ERROR in function unmount_on_exit [ image-helpers.sh:66 ]
[ error ] debootstrap-ng was interrupted 
[ o.k. ] Process terminated

 

I was not able to read the mentioned debug file because it didn't exist. So I started testing the same configuration with Docker and on my desktop machine, to rule out that the problem was specific to my new laptop. It wasn't.
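For anyone else hunting for that log: the path in the warning is relative to the chroot, so on the host it would (if it survives the cleanup) sit inside the temporary rootfs directory shown in the "Unmounting" line above. A minimal sketch, assuming that layout:

```
# sketch: debootstrap writes its log inside the target rootfs; the exact
# .tmp/rootfs-* directory name comes from your own build output
less /home/basti/Kernel/build/.tmp/rootfs-dev-nanopct4-bullseye-no-yes/debootstrap/debootstrap.log
```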

Next I played around with the configuration and finally discovered that removing the `PACKAGE_LIST_ADDITIONAL` option from `userpatches/lib.config` fixed the issue.

 

Here is the configuration I use, for reference (it worked without issues around 20 days ago):

lib.config

KERNELBRANCH="branch:master"
KERNELSOURCE="https://git.linuxtv.org/media_tree.git/"


 

 

As for the line I deleted: I also tried it with just a single package, `python3`, which failed as well:

PACKAGE_LIST_ADDITIONAL="$PACKAGE_LIST_ADDITIONAL python3 python3-dev gcc-10 gcc python3-yaml python3-ply python3-jinja2 libgnutls28-dev openssl libboost-dev git ninja-build pkg-config debhelper dh-autoreconf autotools-dev autoconf-archive doxygen graphviz libasound2-dev libtool libjpeg-dev libudev-dev libx11-dev udev make git vim vim-nox libevent-dev python3-pip g++-10 silversearcher-ag g++ cmake"

 

config-example.conf

KERNEL_ONLY="no"
KERNEL_CONFIGURE="yes"
CLEAN_LEVEL="make,debs,oldcache"

DEST_LANG="en_US.UTF-8"

# advanced
EXTERNAL_NEW="prebuilt"
EXPERT="yes"
LIB_TAG="master"
BOARD="nanopct4"
BRANCH="dev"
RELEASE="bullseye"
EXTRAWIFI="no"
WIREGUARD="no"
AUFS="no"
INSTALL_HEADERS="no"
BUILD_MINIMAL="yes"
BUILD_DESKTOP="no"

 

I hope this can be of help to someone and maybe we can find the root of this new issue.

 

Greetings,

Sebastian


qemu: uncaught target signal 11 (Segmentation fault) - core dumped
Segmentation fault (core dumped)


Yet another QEMU bug that crashes the second stage of debootstrap, and only on Bullseye arm64. It used to work and we didn't change anything ... unsupported userland, unsupported kernel ...

 

https://launchpad.net/qemu

https://bugs.debian.org/cgi-bin/pkgreport.cgi?pkg=qemu;dist=unstable

https://bugs.debian.org/cgi-bin/pkgreport.cgi?pkg=debootstrap;dist=unstable


Hey @Igor,

Thank you for the comment. I hope my message doesn't give the impression that I'm demanding support (I should probably have chosen a better forum category ;)); I'm fully aware that my scenario is very special and I don't expect you to spend your time on cases like these. I only wanted to write up how I was able to "fix" my problem so that others don't have to spend multiple hours on it as I did. Those links are helpful, and I think the best course of action is probably to wait for a proper fix from the QEMU team.

For the time being I install the additional packages with an Armbian playbook, which works quite well.
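For illustration, a minimal sketch of the same workaround done by hand on the booted board instead of via a playbook (assumes network access and root; the package list is a subset of the `PACKAGE_LIST_ADDITIONAL` line above):

```
# sketch: install the extra packages on the running board instead of
# baking them into the image (trim the list to what you actually need)
apt-get update
apt-get install -y python3 python3-dev python3-pip python3-yaml python3-ply python3-jinja2 \
    gcc g++ cmake ninja-build pkg-config git vim libevent-dev libjpeg-dev libudev-dev
```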

 

Greetings,

Sebastian


I found your post on Google while looking into why Bullseye rootfs creation is breaking; that was the primary motivation for my comment. The second was to point out that we will not even try to fix anything in this area, even though I tried to find a workaround. And I actually need to remake those cache files by hand ...


1 hour ago, GaryP said:

Is there any workaround for this?

 

3 hours ago, Igor said:

even though I tried to find a workaround.


Not really. I tried to find one ... native compilation on an arm64 build machine might work, but that is under development and also not a supported method (https://armbian.atlassian.net/browse/AR-457). It can and does break for other reasons.


I checked this with Bullseye. A full desktop build on x86 passes without the QEMU error, but the CLI build fails. Perhaps the CLI package list is missing some package that is critical for QEMU. I want to experiment with the package composition for the CLI.


An interesting result: if I add (mark) the browsers group for the Bullseye XFCE desktop (which pulls in one additional Firefox ESR package), the build completes without errors. This means the dependencies of the Firefox package provide whatever QEMU needs to work.  :)
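A quick way to see which packages that pulls in, assuming the package in question is `firefox-esr`:

```
# sketch: list the direct dependencies of the browser package
apt-cache depends firefox-esr
```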

 


For me nothing has changed: the arm64 build still crashes while armhf works.
 

I: Extracting zlib1g...
[ o.k. ] Installing base system [ Stage 2/2 ]
/bin/bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
W: Failure trying to run:  /sbin/ldconfig
W: See //debootstrap/debootstrap.log for details
[ error ] ERROR in function create_rootfs_cache [ debootstrap.sh:180 ]
[ error ] Debootstrap base system for current bananapim64 bullseye browsers xfce no second stage failed 
[ o.k. ] Process terminated 
[ o.k. ] Unmounting [ /home/igorp/Coding/build/.tmp/rootfs-e9f77d28-aa29-441d-9772-732aa8166a12/ ]
[ error ] ERROR in function unmount_on_exit [ image-helpers.sh:65 ]
[ error ] debootstrap-ng was interrupted 
[ o.k. ] Process terminated 

 


1 hour ago, balbes150 said:

Can you try building the rootfs on an ARM server (native build) and then using it on x86 (as a ready-made cache)? For me, this solved all the QEMU errors on x86.


I tried that first and it works, but building the rootfs on an ARM server and then transferring it to the x86 machine for further processing is not exactly a nice solution ;) It's just a workaround.
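For reference, a minimal sketch of that transfer, assuming both machines use the standard build tree layout (the hostname and paths are placeholders):

```
# sketch: copy the rootfs cache tarballs built natively on the ARM server
# into the x86 build host's cache, so debootstrap is skipped there
rsync -av arm-server:build/cache/rootfs/ ~/build/cache/rootfs/
```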


23 minutes ago, Igor said:

It's just a workaround.

Yes, this is a temporary solution. By the way, it is possible to create a rootfs cache (and place it on the mirrors for download during the build process, as was done previously) for the official variants (the XFCE desktop with a full set of groups, plus a CLI version). Everything else is left to users to build themselves. :)


36 minutes ago, balbes150 said:

Yes, this is a temporary solution. By the way, it is possible to create a rootfs cache (and place it on the mirrors for download during the build process, as was done previously) for the official variants (the XFCE desktop with a full set of groups, plus a CLI version). Everything else is left to users to build themselves.


We had some trouble with the torrents, which is why all those caches were deleted - I planned to make new ones, then this bug was found. I am already building caches on an ARM server now.

 

In 900 seconds those were made. A few more to go ... signing, uploading ... some food in between ... soon.
 


cache/rootfs/buster-cli-arm64.9f0d7c0c104e98b0fec20292a6acc959.tar.lz4
cache/rootfs/buster-cli-armhf.9f0d7c0c104e98b0fec20292a6acc959.tar.lz4
cache/rootfs/buster-gnome-arm64.0a21c64bd9a820e9e55790548cb0b4a9.tar.lz4
cache/rootfs/buster-gnome-arm64.7578ebe9c1c22adcb52d94ec96ae8115.tar.lz4
cache/rootfs/buster-gnome-arm64.7d486c44fbaf5f0079f937a1b259d9c9.tar.lz4
cache/rootfs/buster-gnome-arm64.c7dfc09cf4f3357c31e2a5389b320896.tar.lz4
cache/rootfs/buster-gnome-arm64.d529109aca05258f6447ae8604bb3735.tar.lz4
cache/rootfs/buster-minimal-arm64.a3231ab0e2328482ca417a6d764e3c7a.tar.lz4
cache/rootfs/buster-minimal-armhf.a3231ab0e2328482ca417a6d764e3c7a.tar.lz4
cache/rootfs/buster-xfce-arm64.1c514bcdedb4acc00d42346012a55f59.tar.lz4
cache/rootfs/buster-xfce-arm64.2743c5183ac8178f08a9b7f0c13083c7.tar.lz4
cache/rootfs/buster-xfce-arm64.9e724602796d6aa3d0461055f7bb302d.tar.lz4
cache/rootfs/buster-xfce-arm64.cce42b6920c528ef1d67fc34345ab3af.tar.lz4
cache/rootfs/buster-xfce-arm64.d76f39796b1c84abc689e40a5ad4554f.tar.lz4
cache/rootfs/buster-xfce-armhf.1c514bcdedb4acc00d42346012a55f59.tar.lz4
cache/rootfs/buster-xfce-armhf.2743c5183ac8178f08a9b7f0c13083c7.tar.lz4
cache/rootfs/buster-xfce-armhf.9e724602796d6aa3d0461055f7bb302d.tar.lz4
cache/rootfs/buster-xfce-armhf.cce42b6920c528ef1d67fc34345ab3af.tar.lz4
cache/rootfs/buster-xfce-armhf.d76f39796b1c84abc689e40a5ad4554f.tar.lz4
cache/rootfs/focal-budgie-arm64.1f1ae143eb2b098b78c5349e6a11065a.tar.lz4
cache/rootfs/focal-budgie-arm64.41a623e980b4e98c0089c9f8b87ba28f.tar.lz4
cache/rootfs/focal-budgie-arm64.6d41513169418e86fbef551f40f19c73.tar.lz4
cache/rootfs/focal-budgie-arm64.76421ceb65942924af6db583dbefd209.tar.lz4
cache/rootfs/focal-budgie-arm64.bd0a573e5f0c4a8096e941e31937b069.tar.lz4
cache/rootfs/focal-cli-arm64.27e967f3770251816c3aa52e4d52f959.tar.lz4
cache/rootfs/focal-cli-armhf.27e967f3770251816c3aa52e4d52f959.tar.lz4
cache/rootfs/focal-gnome-arm64.1097c98bc8ad497da9d9398b60dd05c5.tar.lz4
cache/rootfs/focal-gnome-arm64.6a6c8303b66fed3b67f3eb7b145d1d1f.tar.lz4
cache/rootfs/focal-mate-arm64.31d689ba44812f3fce297121fbc5b205.tar.lz4
cache/rootfs/focal-mate-arm64.37834350b769624fd5a8cb64f8dfb487.tar.lz4
cache/rootfs/focal-mate-arm64.e7c918d9622059d566cb5c3df973562f.tar.lz4
cache/rootfs/focal-minimal-arm64.a3231ab0e2328482ca417a6d764e3c7a.tar.lz4
cache/rootfs/focal-minimal-armhf.a3231ab0e2328482ca417a6d764e3c7a.tar.lz4
cache/rootfs/focal-xfce-arm64.3fdf8ea439a57dae21852f15ceeb1fc4.tar.lz4
cache/rootfs/focal-xfce-arm64.80509c5765c11b6e4087a02cb9c46ad4.tar.lz4
cache/rootfs/focal-xfce-arm64.c2ac5d68ab45c08fd3831717b71641b4.tar.lz4
cache/rootfs/focal-xfce-arm64.df04ac6f5b26cfbdcebe3118c111f739.tar.lz4
cache/rootfs/focal-xfce-arm64.ed350c867435fe6faa31da34d004391a.tar.lz4
cache/rootfs/focal-xfce-armhf.3fdf8ea439a57dae21852f15ceeb1fc4.tar.lz4
cache/rootfs/focal-xfce-armhf.80509c5765c11b6e4087a02cb9c46ad4.tar.lz4
cache/rootfs/focal-xfce-armhf.c2ac5d68ab45c08fd3831717b71641b4.tar.lz4
cache/rootfs/focal-xfce-armhf.df04ac6f5b26cfbdcebe3118c111f739.tar.lz4
cache/rootfs/focal-xfce-armhf.ed350c867435fe6faa31da34d004391a.tar.lz4
cache/rootfs/hirsute-minimal-arm64.a3231ab0e2328482ca417a6d764e3c7a.tar.lz4
cache/rootfs/hirsute-minimal-armhf.a3231ab0e2328482ca417a6d764e3c7a.tar.lz

 

 



I got a similar error on my x64 builder with Ubuntu Focal while trying to build a rootfs for `hirsute` or `impish`:

```

[ o.k. ] Installing base system [ Stage 2/2 ]
/bin/bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
W: Failure trying to run:  mount -t proc proc /proc
W: See //debootstrap/debootstrap.log for details
W: Failure trying to run:  /sbin/ldconfig
W: See //debootstrap/debootstrap.log for details
[ error ] ERROR in function create_rootfs_cache [ debootstrap.sh:212 ]
[ error ] Debootstrap base system for current rockpi-4a hirsute   no second stage failed  
[ o.k. ] Process terminated
```

The key line is `W: Failure trying to run:  /sbin/ldconfig`.

The failure went away when I refreshed the qemu-user-static setup with `docker run --rm --privileged multiarch/qemu-user-static --reset -p yes`.
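A quick way to check that the reset actually (re)registered the arm64 handler, assuming binfmt_misc is mounted in the usual place:

```
# sketch: the entry should exist and report "enabled" after the reset
cat /proc/sys/fs/binfmt_misc/qemu-aarch64
```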


