
Why is a virtual machine needed for building?


Jens Bauer


As I do not have any Intel-based system and I'd like to build Armbian, my options are fairly limited.

 

I have a PowerBook G4, but I can't build the Linaro toolchain there, because it requires autogen, which needs guile, which won't build due to undefined symbols.

 

However, the Linaro toolchain builds on my Cubieboard2.

I've read a little about how to build Armbian, and to me it looks a bit complicated (I'm used to building stuff myself; I've just never built an entire Linux system before).

 

-So why is it necessary (or recommended) to use a virtual machine for building?

-Is it just to get a "clean system" before building, is it to do with "cleanup" after building, or is there something I've overlooked?

 

Could a virtual machine be "emulated" by netbooting - for instance on the Lamobo R1 - so it's "clean" before starting a build?

 

I'm sorry for sounding lazy, but I think I have enough to think about when it comes to fixing build problems alone. ;)

Link to comment
Share on other sites

The Armbian build script depends on OS-specific and architecture-specific tools and configuration, and the only supported environment is Ubuntu Xenial x64.

A virtual machine is recommended if you don't want to install development tools in your home or work environment, but it is not required if you run Ubuntu Xenial x64 natively or in a container.
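
For illustration, a minimal sketch of what such a native (or container-based) build looks like; the repository URL and BOARD value below are only examples, so check the documentation for the exact ones:

# hedged sketch: building Armbian directly on an Ubuntu Xenial x64 host, no VM involved
sudo apt-get update
git clone --depth 1 https://github.com/armbian/build    # example URL, verify against the docs
cd build
./compile.sh BOARD=cubieboard2 RELEASE=jessie BUILD_DESKTOP=no KERNEL_ONLY=no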

Link to comment
Share on other sites

Is there any reason for the build scripts to require a 64-bit architecture?

 

E.g. any architecture would be able to "convert" a file from format1 into format2. Some architectures may be slower than others, but in the end the job gets done (I'm not talking about emulating a 6502 in order to run Linux on an AVR).

For instance, converting a video from one format to another - or compiling source files to binary files.

 

What I'm trying to do is understand where the current limitation is, so I can see whether I can get those tools working on a 32-bit architecture.

Link to comment
Share on other sites

What I'm trying to do is understand where the current limitation is, so I can see whether I can get those tools working on a 32-bit architecture.

 

The 'limitation' is that Armbian supports too many boards/architectures and that we're not a large team. U-boot and kernel variants for some boards have to be built with 32-bit compilers, but most rely on 64-bit (and some patches exist, and where we are aware of them we've included them to let the stuff build on 32-bit hosts too).

The 'dpkg --add-architecture i386' call is there to let 32-bit software run on the 64-bit build host. The opposite is not possible, and that's the simple reason why 64-bit is a requirement for the build host.
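
As a rough sketch of that multiarch mechanism (the package below is only an example, not the exact set the build script installs):

# hedged sketch: enable and use 32-bit (i386) packages on a 64-bit Ubuntu host
sudo dpkg --add-architecture i386
sudo apt-get update
sudo apt-get install libc6:i386    # example: a 32-bit library installed alongside its 64-bit counterpart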

 

In case you're only after one specific board we support, you might be fine with a 32-bit environment. But we're simply not able to support that. So much real work is already postponed due to a lack of resources and developers to deal with support requests at this level (since everyone can use Docker, VirtualBox, whatever, plus an x64 Xenial installation, there is simply no excuse not to use a 64-bit build host these days).

 

Edit: I hadn't realized before that you were thinking about using a PowerBook for this. Nope, neither PowerPC nor ARM is sufficient; you need x86, and specifically a 64-bit capable host, for our build system (we inherit these requirements from the legacy sources we have to use). So please don't even think about anything 'bit-related'; just get an x64 host (emulation won't help, and VirtualPC can only emulate 32-bit guests).
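
For reference, a quick way to check whether a given x86 machine qualifies (the 'lm' CPU flag means the processor is 64-bit capable, even if a 32-bit OS is currently installed):

uname -m    # prints x86_64 on a 64-bit installation
grep -q -w lm /proc/cpuinfo && echo "CPU is 64-bit capable"    # checks the CPU itself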

 

BTW: Back when I was also using a PowerBook G4, I always carried a second laptop with Linux/x86 around. Fortunately Apple switched to x86 10 years ago, since now all the OSes I need to care about (only OS X, Linux and Solaris any more, and sometimes Windows Server) can run on a MacBook.

Link to comment
Share on other sites

As I do not have any Intel-based system and I'd like to build Armbian, my options are fairly limited.

 

Exactly. Unless you upgrade to a MacBook anytime soon, an 'Armbian appliance' could be of help: https://up-community.org/wiki/Ubuntu

 

I tried to run Armbian on the UP board maybe half a year ago, but back then they only supported their own Ubilinux, and I ran into problems and stopped testing. Then the board did not boot anymore (maybe I destroyed it while doing thermal tests/optimizations); I reported it in the forum back then, and last week they sent me a free 'replacement board' (upgraded to 4 GB RAM and 64 GB eMMC). I haven't succeeded in installing Ubuntu so far, since as usual all my (Apple) peripherals aren't supported, so I first have to borrow a keyboard from a neighbour to continue.

Link to comment
Share on other sites

Almost a week ago, I succeeded in building the Linaro Aarch64 toolchain (gcc-5.4.1) on the Cubieboard2 for the Cubieboard2.

It took half a day, but it appears to work, as I started an Armbian build several hours ago.

(building a console-jessie on a console-jessie).

It just finished building the kernel and packing it.

Getting the kernel to build does not require major changes; just a 'light hack' in general.sh (the architecture check) and beelinkgt1.conf (the required gcc version) ...

Of course I also needed to apt-get a few dependencies and build lzo+lzop.

I've started building a complete image, just to see what errors I'll bump into then. ;)

 

you need x86, and specifically a 64-bit capable host, for our build system (we inherit these requirements from the legacy sources we have to use).

That's sad. Usually such problems start years before they become major.

 

Unless you upgrade to a MacBook anytime soon

Not very likely. I'm moving away from Mac and PC. I've been a Mac user for 11 years, and I kind of grew tired of 'cumbersome', 'no longer working', 'no support' and horrible fan noise.

 

It's a sad story about your UP-board. It sounds like a powerful little thing, though.

A few days ago, I asked Solid-Run about a 4GB version of their ClearFog.

Yes, they can make them, but that requires a minimum order of 60 pcs.

 

Prebuilt Linaro toolchains. If you can provide all required toolchains, you can run on other architectures too, but the Ubuntu Xenial requirement still stands.

Prebuilt Linaro toolchains: No problem, already done a week ago.

Ubuntu Xenial ... I like Armbian better. ;)

Link to comment
Share on other sites

(It seems I have to re-edit all my posts to make them format the way they were intended to be formatted)

 

Alright, I think I ran into some executable format errors now.

For those who really want to do this, I'll save them some time by describing what I did.

For anyone following this recipe, please do not ask the team for support, as the team has plenty of work to do.

 

1: Use apt-get install libpython-dev libncurses-dev to get the Python and ncurses libraries needed for building the Linaro toolchain.

2: Clone ABE from Linaro's repository

    (git clone git://git.linaro.org/toolchain/abe.git abe), then check out the stable branch.

3: Configure ABE, then run abe.sh --tarbin --target aarch64-linux-gnu --build all

4: Extract the binaries to the desired location (for instance /opt)

5: Download and install lzo and lzop (see the sketch after this list). I built lzo in the default location and placed lzop in --prefix=/opt

    (http://www.oberhumer.com/opensource/lzo/download/lzo-2.09.tar.gz)

    (http://www.lzop.org/download/lzop-1.03.tar.gz)

    -You may need to download one or both of these using your web browser; I had some trouble getting them with curl.

6: Clone the experimental Beelink GT1 build script repository

    (git clone --depth 1 git://github.com/chavdar75/lib)

7: The Linaro toolchain will not be found if you didn't install it in the default location.

    Make symlinks to the tools:

    # $linaro is the directory where the toolchain was installed (for instance /opt/<toolchain dir>)
    for file in "$linaro"/bin/aarch64-linux-gnu-*; do echo "$file"; ln -s "$file"; done

8: Lines 443 to 447 of general.sh check whether the script is running on an ARM architecture.

    Those 5 lines need to be disabled (add '#' in front of each).

    I modified line 5 of config/sources/beelinkgt1.conf to read:

    UBOOT_NEEDS_GCC='> 3.9'

    -so any GCC later than 3.9 will work (I used GCC 5.4.1).

9: Use sudo apt-get install ccache debootstrap apt-cacher-ng to install the tools required for building.

10: ./compile.sh BOARD=beelinkgt1 PROGRESS_DISPLAY=plain RELEASE=jessie BUILD_DESKTOP=no KERNEL_ONLY=yes
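
For step 5, this is roughly the usual configure/make routine; treat it as a sketch and adjust versions and paths as needed:

# hedged sketch: build lzo into the default prefix, then lzop into /opt
tar xzf lzo-2.09.tar.gz
cd lzo-2.09 && ./configure && make && sudo make install && cd ..
tar xzf lzop-1.03.tar.gz
cd lzop-1.03 && ./configure --prefix=/opt && make && sudo make install && cd ..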

 

-If you use KERNEL_ONLY=no, you'll probably end up with an error.

'KERNEL_ONLY=yes' seems to work for me.

Again, this is a hack; do not ask for support.

However, if you find a solution to the 'exec' problem and any later problems, please post it.

If you need specific details on what I did, I'll be happy to post that, but I'd like to keep the recipe above simple.

Link to comment
Share on other sites

The above instructions no longer work.

 

A few things need to be done. In order for the build script to recognize the Linaro toolchain, you must create a symlink in toolchains/ to the directory containing the Linaro toolchain's bin/ directory. For instance:

( mkdir -p toolchains; cd toolchains; ln -s /opt )

If using GCC 5, you'll also need a compiler-gcc5.h (it can be found on the net by searching Google for "compiler-gcc5.h").

It can be copied like this:

sudo cp ~/compiler-gcc5.h sources/u-boot-beelinkgt1/bl301/include/linux/

Sadly, it seems there's a show-stopper in line 1323 of sources/u-boot-beelinkgt1/bl301/drivers/nand/phy/phydev.c:

memcpy(&phydev_p->name, &(*dev_para)->name, MAX_DEVICE_NAME_LEN*sizeof(char));

phydev_p->name is const, and it can't be written to.

 

I've attempted to fix this in a few ways, but I've had no luck so far; the scripts like to 'undo' or conflict with my changes.

Link to comment
Share on other sites
