  1. Next run worked for me as well; as you say, probably temporary. I realize the internet is overloaded at the moment. I cannot work out whether the request timed out in the build system, or the remote end responded with 504. If it timed out in the build system we should worry, since crappy internet will be with us for some time, and maybe the timeout value is not long enough. If it came from the other end, nothing will help. Harry
  2. Just got this error at the very end of a build:

     [ o.k. ] Checking git sources [ linux-firmware-git master ]
     fatal: unable to access '': The requested URL returned error: 504
     [ .... ] Fetching updates
     fatal: unable to access '': The requested URL returned error: 504
     [ .... ] Checking out
     error: pathspec 'FETCH_HEAD' did not match any file(s) known to git.

     I checked a very few minutes later and the URL did eventually respond with a reasonable page, though with a huge delay. I presume the remote end had a problem and returned the 504, rather than the build system timing out while waiting, but maybe wiser heads than mine can check. If it was just lag, we may see a lot more of this sort of error, given the load on the internet at the moment.... Harry
  3. I have a supported build host running, and it is successfully building and testing kernels for the few targets I have examples of. I'm looking to extend my repertoire to building complete Armbian images. I think I also want to use EXTERNAL and/or EXTERNAL_NEW to configure additional packages into the image. This has led me to this stanza:

     BUILD_MINIMAL (yes|no): set to "yes" to build a bare CLI image suitable for application deployment. This option is not compatible with BUILD_DESKTOP="yes" and BUILD_EXTERNAL="yes"

     I can find no BUILD_EXTERNAL referenced anywhere else; should this simply be EXTERNAL, or have I missed something? Is there documentation on how the EXTERNAL and EXTERNAL_NEW parameters are invoked? Harry
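     If BUILD_EXTERNAL is indeed a slip for EXTERNAL, an image build pulling in extra packages would presumably be invoked along these lines. This is a sketch only: the BOARD/BRANCH/RELEASE values are purely illustrative, and EXTERNAL=yes is my guess at the intended parameter name, not something confirmed by the documentation.

```shell
# Hypothetical compile.sh invocation (only meaningful inside an Armbian
# build tree). BOARD/BRANCH/RELEASE values are illustrative; EXTERNAL=yes
# is assumed to be what the quoted docs mean by BUILD_EXTERNAL.
./compile.sh BOARD=cubietruck BRANCH=current RELEASE=buster \
    BUILD_MINIMAL=no BUILD_DESKTOP=no EXTERNAL=yes
```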
  4. Igor, as far as I can tell your documentation omits mention of the SYNC_CLOCK parameter to the build scripts. If your build host is a system already synced to NTP, for example by systemd-timesyncd, this fragment:

     # sync clock
     if [[ $SYNC_CLOCK != no ]]; then
         display_alert "Syncing clock" "host" "info"
         ntpdate -s ${NTP_SERVER:-}
     fi

     seems simply to mess with the system clock to no purpose. What is the objective: to force an NTP sync, or to force the time into the log? timedatectl seems to give correct answers about NTP status even when chrony is managing the clock. I don't have a system on which to test whether it works reasonably with ntpd handling the clock, but if it does, possibly preceding the ntpdate with a timedatectl check for whether we are already synced to NTP, and skipping the ntpdate if so, would be a good idea? Document the parameter? If we add the check for NTP sync, possibly remove the parameter? Harry
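     One possible shape for that guard, as a sketch only: the function names are mine, not the build script's, and `timedatectl show` needs systemd >= 239 (older hosts would have to grep `timedatectl status` output instead). Any failure of timedatectl is treated as "not synced", so the behaviour falls back to today's unconditional sync.

```shell
# ntp_synced: ask timedatectl whether the host clock is already
# NTP-synchronized; treat any error as "no" (hypothetical helper).
ntp_synced() {
    [ "$(timedatectl show --property=NTPSynchronized --value 2>/dev/null)" = "yes" ]
}

# sync_clock_guarded: honor SYNC_CLOCK, and skip the forced sync when
# the host is already synchronized (hypothetical replacement for the
# fragment quoted above).
sync_clock_guarded() {
    if [ "${SYNC_CLOCK:-yes}" = "no" ]; then
        return 0                          # explicitly disabled by the user
    fi
    if ntp_synced; then
        echo "host clock already NTP-synchronized, skipping ntpdate"
        return 0
    fi
    echo "Syncing clock"
    # ntpdate -s ${NTP_SERVER:-}          # needs root; left disabled here
}
```

     With SYNC_CLOCK=no the function stays silent and does nothing, matching the existing parameter's meaning.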
  5. I encountered this running apt-get update, not during a build. I tried several fixes without success. Eventually I fixed it using info from the bug report for the last time this occurred, in 2018:

     sudo apt-key adv --keyserver --recv-keys ED75B5A4483DA07C

     Harry
  6. Looks like an expired key: EXPKEYSIG ED75B5A4483DA07C Andrey Smirnov <> Harry
  7. Finally built a kernel. Not tested it yet, but it built with no errors. The final total of packages missing from the build host was:

     ccache aria2 libusb-1.0-0-dev

     for which I have already generated a pull request, AND:

     swig python-dev libncurses-dev libstb-dev (not absolutely certain this is required, but I strongly suspect it is) lzma pixz u-boot-tools

     I will add an additional pull request to add those seven packages to the required list. Harry
  8. And another missing component: libncurses-dev. Now I'm struggling with the STB truetype renderer. I can see it's missing, but I can't find a package for it. Its use in u-boot is discussed here. There is a GitHub repository, but I'm not sure how it should be installed by the build script. Harry
  9. I understand the development team's position entirely. I'm only reporting the issues I've come up against because they are rooted in required packages that are not identified in the existing install setup. I've put in a pull request to add the first three omissions to the hostdeps list, since even the supported host configuration is vulnerable to upstream changes to the set of standard packages. I'll put in requests to add these two as well, plus any more I find. So far none of the issues I've found are inherent in the choice of host; I suspect I would have had similar issues with a fresh Ubuntu install. Possibly not, if those packages are installed by default in that release of Ubuntu, but it's dangerous to rely on that.
  10. More missing packages. I'm getting further now that I have swig and python-dev installed, but the build is still failing. Harry
  11. Suggested changes to the hostdeps section:

     export LC_ALL="en_US.UTF-8"

     # packages list for host
     # NOTE: please sync any changes here with the Dockerfile and Vagrantfile
     local hostdeps="wget ca-certificates device-tree-compiler pv bc lzop zip binfmt-support build-essential ccache debootstrap ntpdate \
         gawk gcc-arm-linux-gnueabihf qemu-user-static u-boot-tools uuid-dev zlib1g-dev unzip libusb-1.0-0-dev fakeroot \
         parted pkg-config libncurses5-dev whiptail debian-keyring debian-archive-keyring f2fs-tools libfile-fcntllock-perl rsync libssl-dev \
         nfs-kernel-server btrfs-progs ncurses-term p7zip-full kmod dosfstools libc6-dev-armhf-cross \
         curl patchutils liblz4-tool libpython2.7-dev linux-base swig aptly acl python3-dev \
         locales ncurses-base pixz dialog systemd-container udev lib32stdc++6 libc6-i386 lib32ncurses5 lib32tinfo5 \
         bison libbison-dev flex libfl-dev cryptsetup gpgv1 gnupg1 cpio aria2 pigz dirmngr python3-distutils \
         ccache aria2 libusb-1.0-0-dev"

     local codename=$(lsb_release -sc)

     When I get my build environment working, and can adequately test, I'll submit this as a pull request. Harry
  12. I know I'm not supposed to pester with issues on unsupported build platforms. This is probably due to my choice of build platform, but in case it's not, the build is quitting with this log:

     scripts/dtc/pylibfdt/Makefile:27: recipe for target 'scripts/dtc/pylibfdt/' failed
     scripts/ recipe for target 'scripts/dtc/pylibfdt' failed
     scripts/ recipe for target 'scripts/dtc' failed
     Makefile:528: recipe for target 'scripts' failed
     [ error ] ERROR in function compile_uboot [ ]
     [ error ] U-boot compilation failed
     [ o.k. ] Process terminated

     That first make failure seems to occur in here:

     13 PYLIBFDT_srcs = $(addprefix $(LIBFDT_srcdir)/,$(LIBFDT_SRCS)) \
     14                 $(obj)/libfdt.i
     15
     16 quiet_cmd_pymod = PYMOD $@
     17       cmd_pymod = unset CROSS_COMPILE; unset CFLAGS; \
     18                   CC="$(HOSTCC)" LDSHARED="$(HOSTCC) -shared " \
     19                   LDFLAGS="$(HOSTLDFLAGS)" \
     20                   VERSION="u-boot-$(UBOOTVERSION)" \
     21                   CPPFLAGS="$(HOSTCFLAGS) -I$(LIBFDT_srcdir)" OBJDIR=$(obj) \
     22                   SOURCES="$(PYLIBFDT_srcs)" \
     23                   SWIG_OPTS="-I$(LIBFDT_srcdir) -I$(LIBFDT_srcdir)/.." \
     24                   $(PYTHON2) $< --quiet build_ext --inplace
     25
     26 $(obj)/ $(src)/ $(PYLIBFDT_srcs) FORCE
     27         $(call if_changed,pymod)
     28
     29 always +=
     30
     31 clean-files += libfdt.i libfdt_wrap.c

     I suspect something amiss with the python configuration on my build host, but python is definitely not my strong suit. Can anybody suggest tests to verify the python config? Harry
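     Since the cmd_pymod rule quoted above shells out to swig and to whatever $(PYTHON2) resolves to, a first sanity check is simply whether those pieces exist on the host. A minimal sketch (the helper name is mine; adjust the command names to match your build's interpreter):

```shell
# check_tool: report whether one required command is on PATH
# (hypothetical helper, not part of the build scripts).
check_tool() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1: ok"
    else
        echo "$1: MISSING"
    fi
}

# the pieces the pymod rule invokes:
check_tool swig
check_tool python2           # whatever $(PYTHON2) resolves to in your tree
check_tool python2-config    # ships with the python dev package; its absence
                             # usually means the Python.h headers are missing
```

     If python2-config is missing, build_ext cannot find the Python headers, which would fail in much the way the log shows.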
  13. I'm not inclined to use Docker or Vagrant, and I don't have access to an Ubuntu system these days; all the systems I have to hand are Linux Mint. Linux Mint Tricia is a derivative of Ubuntu 18.04, the supported build host, so I'm being a naughty boy and trying to set up a build environment on an existing Mint system. I'm not expecting support from here on setting this up, but a few of the hurdles I've overcome may reflect bugs or missing documentation in the supported build system. The setup failed initially on the test for supported systems; fixed that, not of interest here. It then failed for lack of ccache, again for lack of aria2, and again for lack of libusb-1.0. I've tried to determine whether those would be installed in a new Ubuntu installation, but I can't seem to locate the package manifest for 18.04. So if someone can check whether they are installed by default, I'll forget about it; but if they are missing from a default Ubuntu install, I'll look at documenting the need for them. Harry
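     Short of finding the manifest, dpkg can at least report what a given host already has. A small sketch (the helper name is mine) that flags the packages the setup tripped over:

```shell
# pkg_status: print whether one Debian/Ubuntu package is installed.
# dpkg-query exits non-zero (and prints nothing) for unknown packages,
# so anything unrecognized falls through to "NOT installed".
pkg_status() {
    if dpkg-query -W -f='${Status}' "$1" 2>/dev/null | grep -q "ok installed"; then
        echo "$1: installed"
    else
        echo "$1: NOT installed"
    fi
}

# the three packages the setup failed on:
for p in ccache aria2 libusb-1.0-0-dev; do
    pkg_status "$p"
done
```

     Running this on a freshly installed Ubuntu 18.04 (or Mint Tricia) box would answer the default-install question directly.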
  14. Looks like it's included by default, compiled in, not a module. Presumably because some users will be using SLIP connections? I understand that the kernel impact of compiling the module in is zero. Marginally irritating: I have one box with a config with two dummies, but that's a Raspbian box, where dummy is a module. I guess I'm going to have to re-learn how to compile my own kernel....

     From config-5.4.20-sunxi:

     CONFIG_NET_CORE=y
     CONFIG_BONDING=m
     CONFIG_DUMMY=y        <<<<<<<<<<< compiled in, not modular
     # CONFIG_EQUALIZER is not set
     CONFIG_IFB=m
     # CONFIG_NET_TEAM is not set

     From Kconfig:

     config DUMMY
         tristate "Dummy net driver support"
         ---help---
           This is essentially a bit-bucket device (i.e. traffic you send to
           this device is consigned into oblivion) with a configurable IP
           address. It is most commonly used in order to make your currently
           inactive SLIP address seem like a real address for local programs.
           If you use SLIP or PPP, you might want to say Y here. Since this
           thing often comes in handy, the default is Y. It won't enlarge your
           kernel either. What a deal. Read about it in the Network
           Administrator's Guide, available from <>.

           To compile this driver as a module, choose M here: the module
           will be called dummy. If you want to use more than one dummy
           device at a time, you need to compile this driver as a module.
           Instead of 'dummy', the devices will then be called 'dummy0',
           'dummy1' etc.
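     For anyone checking their own board, a tiny sketch (the helper name is mine) that classifies a line from a /boot/config-* file the same way the fragment above was read, y for built-in versus m for module:

```shell
# classify_kopt: report whether one kernel config line selects the
# option as built-in (=y), as a module (=m), or leaves it unset
# (hypothetical helper for reading /boot/config-$(uname -r) lines).
classify_kopt() {
    case "$1" in
        *=y) echo "built-in" ;;
        *=m) echo "module"   ;;
        *)   echo "unset"    ;;
    esac
}

# against the fragment quoted above:
classify_kopt "CONFIG_DUMMY=y"      # built-in
classify_kopt "CONFIG_BONDING=m"    # module
```

     On a live board the input line can come from something like grep '^CONFIG_DUMMY=' /boot/config-$(uname -r), assuming the config file is shipped there as it is on the sunxi image quoted.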
  15. Further to that, I've looked high and low for the dummy driver, without success. Is the dummy driver compiled into the Armbian kernel, and if so, why? Harry