
Posted

This might be close to off-topic, but I think this is the most fitting forum for this thread.

 

Since I read that Igor asked whether we know of any build-farm services (clouds like Amazon's EC2 or the like), I've been thinking about private build farms.

 

I imagine that Igor does not have a large build farm (perhaps some boards contribute via distcc).

Thomas, what is your setup for building Armbian? (I imagine you may have a stronger setup.)

Other developers, please chip in too. :)

 

What's been on my mind for a while is building a low-cost farm which would perform well and make things easier and quicker for the Armbian team. There's a lot of source-code maintenance to do, so I doubt there's time left for expanding compile clusters.

 

So, first of all, I gather that Ubuntu Xenial running on a 64-bit Intel CPU is the minimum requirement for a build master.

That is due to some of the tools that Armbian uses. If those tools could run on an ARM-based architecture, it would be easier for me to find a good build master for Armbian (I'm going to use a MacchiatoBIN for my own builds).

The slaves would most likely be MiQi boards as they perform better than even the S912 competitors (but they also use more electricity).

Finally I plan to use solar power to assist in paying the bill. ;)

 

When all is set up, I think it would be very useful to add external build slaves as well (such as every developer's own build slaves), so that distcc can do the job as quickly as possible. I know that more isn't always better; it depends on connection speeds, the speed of the slaves, latencies and a lot of other things.
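
(Just to illustrate what I mean, a minimal distcc setup could look roughly like the lines below - the host names are placeholders, and every host needs the same compiler for the same target architecture installed:)

# minimal distcc sketch - host names are placeholders
export DISTCC_HOSTS="localhost/4 miqi-1/4 miqi-2/4"
make -j"$(distcc -j)" CC="distcc gcc"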

-Still, the goal is to get an almost 'free' build farm which is able to build Armbian on its own (so if you're on a non-Intel platform such as PowerPC or ARM, you will still be able to make your own builds by using my build master).

On the other hand, if you have a 64-bit Intel machine, it would probably be beneficial to use that as a build master, because local is usually faster than remote. It can use the compile cluster for compiling, while the Intel-only tools run locally.

 

I might not succeed with all of this; it's an expensive project and I have many higher priorities than building a build farm, but if things go my way I might be able to shuffle the priorities around (I certainly want to).

Maybe I'll be able to start out with a few MiQi boards and no build-master and then expand over time.

Posted

Well, my build times won't improve with a 'build farm' or something like that but only with a huge refactoring of Armbian's build system which won't happen anytime soon or at all.

 

Maybe better read through former discussions/approaches first?

 

Posted
4 hours ago, tkaiser said:

Well, my build times won't improve with a 'build farm' or something like that but only with a huge refactoring of Armbian's build system which won't happen anytime soon or at all.

 

Maybe better read through former discussions/approaches first?

I'm only 1/3rd through the reading.

These ideas might not be usable, but I'll dump them here anyway... ;)

While reading the first thread, I kept thinking about the parallel compressors (I've been using them for more than a year).

pigz, pbzip2, pxz, plzip. I've rebuilt tar, so it uses the parallel versions by default.

-So if compression takes a long time and you don't have a parallel tar yet, I recommend building one. It doesn't always produce exactly the same results for the same files; sometimes the archives use a few more bytes, sometimes they even use fewer than the non-parallelized version, but I've tested the archives and they're completely valid even though the sizes differ.

(In fact it should be possible to make a fairly simple "dist"-tar script which copies the image to a randomly picked TV box, lets that box do the actual compression, and then feeds the compressed result back to the script; the script's parent could run in the background and upload the data when it comes back.)
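
(A minimal sketch of that idea, assuming SSH access to a box I'll call tvbox1 with pixz installed; pixz reads stdin and writes stdout, so the compression runs remotely while the result streams back:)

# hypothetical "remote compress" helper - host and file names are placeholders
ssh tvbox1 "pixz -p 4" < Armbian_example.img > Armbian_example.img.xz &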

-I very much second using btrfs (I already use it myself).

I wonder if overlayfs could be of any use (e.g. as a "temporary file system" for a quick clean, or for some sandboxing).
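
(A rough sketch of the quick-clean idea, with placeholder paths: all writes land in the upper directory, so 'cleaning' is simply wiping that directory while the sources below stay untouched.)

# overlayfs quick-clean sketch - paths are placeholders
mkdir -p /tmp/upper /tmp/work /mnt/build
mount -t overlay overlay -o lowerdir=/srv/sources,upperdir=/tmp/upper,workdir=/tmp/work /mnt/build
# ... build inside /mnt/build ...
umount /mnt/build && rm -rf /tmp/upper /tmp/work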

Posted
1 hour ago, Jens Bauer said:

-So if compression takes a long time and you don't have a parallel tar yet, I recommend building one. It doesn't always produce exactly the same results for the same files; sometimes the archives use a few more bytes, sometimes they even use fewer than the non-parallelized version, but I've tested the archives and they're completely valid even though the sizes differ.

"Long time" is relative. Currently we are not using tar compression directly. Kernel sources are compressed with pixz (slow but parallel) and packages are built using dpkg-deb/dpkg-buildpackage, and there parallel compression is not supported on Xenial (at least without hacks or recompilation, but we are trying to keep the setup portable, easily reproducible and non-invasive for the build host OS.

 

1 hour ago, Jens Bauer said:

-I very much second using btrfs (I already use it myself).

Again, making btrfs a hard requirement means affecting (=breaking) existing build system users and complicating initial environment setup, and making it optional means duplicating and rewriting a lot of functionality.

 

1 hour ago, Jens Bauer said:

I wonder if an overlayfs can be of any use (eg. a "temporary file system" for a quick-clean perhaps - or some sandboxing).

overlayfs can already be used for u-boot/kernel compilation to minimize writes to persistent storage, but it's not enabled by default due to possible side effects of using the default /tmp, which is usually limited to 1/2 of the available RAM - and that can be too low, e.g. when compiling the kernel with debug info.
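
(If /tmp is a RAM-backed tmpfs capped at half of the RAM, one workaround is to enlarge it for the duration of a build - a sketch, assuming /tmp really is tmpfs and there is enough RAM:)

# temporarily grow a tmpfs-backed /tmp so builds with debug info fit
mount -o remount,size=16G /tmp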

Posted
9 minutes ago, Jens Bauer said:

I've rebuilt tar, so it uses the parallel versions by default.

 

Not me, since I use GNU tar everywhere, and stuff like the following is all that's needed to speed up unpacking of the Linaro toolchains (the only remaining tar call in Armbian's build system where this would improve speed a little bit, so it's not worth the effort anyway):

macbookpro-tk:build tk$ PXZ=$(which pxz 2>/dev/null) && alias tar="tar --use-compress-program=${PXZ}"
macbookpro-tk:build tk$ type tar
tar is aliased to `tar --use-compress-program=/usr/local/bin/pxz'

Besides that, as Zador already explained, complicating the host setup is not an option, and without a huge refactoring we won't gain much of a speed improvement anyway, since stuff like parallelism is already used where possible.

 

And for experienced users it's always possible to make use of nice FS features (I, for example, have both cache/sources and cache/toolchains symlinked from a btrfs force-compression=zlib partition, allowing me to save a lot of disk space without hurting performance).
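
(A sketch of that setup for anyone curious - device and paths are placeholders, and the mount option is usually spelled compress-force=zlib:)

# compressed btrfs partition holding the caches, then symlinked into the build tree
mount -o compress-force=zlib /dev/sdb1 /mnt/btrfs
ln -s /mnt/btrfs/sources    cache/sources
ln -s /mnt/btrfs/toolchains cache/toolchains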

Posted
8 hours ago, tkaiser said:

 

Not me, since I use GNU tar everywhere, and stuff like the following is all that's needed to speed up unpacking of the Linaro toolchains (the only remaining tar call in Armbian's build system where this would improve speed a little bit, so it's not worth the effort anyway):


macbookpro-tk:build tk$ PXZ=$(which pxz 2>/dev/null) && alias tar="tar --use-compress-program=${PXZ}"
macbookpro-tk:build tk$ type tar
tar is aliased to `tar --use-compress-program=/usr/local/bin/pxz'

Besides that, as Zador already explained, complicating the host setup is not an option, and without a huge refactoring we won't gain much of a speed improvement anyway, since stuff like parallelism is already used where possible.

 

And for experienced users it's always possible to make use of nice FS features (I, for example, have both cache/sources and cache/toolchains symlinked from a btrfs force-compression=zlib partition, allowing me to save a lot of disk space without hurting performance).

 

I know Zador mentioned that, but I find it doesn't hurt much to have the tools optimized by default on your daily-work machine - except that, that way, any need for optimizing the compression in the scripts won't be noticeable. ;)

Posted
9 hours ago, zador.blood.stained said:

"Long time" is relative. Currently we are not using tar compression directly. Kernel sources are compressed with pixz (slow but parallel) and packages are built using dpkg-deb/dpkg-buildpackage, and there parallel compression is not supported on Xenial (at least without hacks or recompilation, but we are trying to keep the setup portable, easily reproducible and non-invasive for the build host OS.

A high compression ratio is a good thing when Igor only has a 10 Mbit line.

I very much think that optimizing the systems of those developers who work with it daily has the highest priority.

-Of course, it's important to have a fast build system in general, but everyone has a different platform anyway, so like you say, 'a long time' is a relative term. But if the developers who contribute to git regularly can get a speedup, I think it would help them avoid stress and (some) frustration.

 

9 hours ago, zador.blood.stained said:

Again, making btrfs a hard requirement means affecting (=breaking) existing build system users and complicating initial environment setup, and making it optional means duplicating and rewriting a lot of functionality.

Does that mean that if I set up my system to use btrfs, it would affect Armbian builds in a negative way?

 

9 hours ago, zador.blood.stained said:

overlayfs can already be used for u-boot/kernel compilation to minimize writes to persistent storage, but it's not enabled by default due to possible side effects of using the default /tmp, which is usually limited to 1/2 of the available RAM - and that can be too low, e.g. when compiling the kernel with debug info.

Answer accepted. :)

If I come up with any other junk-ideas, I'll post them - if just one of them is helpful, it's worth it. ;)

Posted
12 hours ago, Jens Bauer said:

I very much think that optimizing the systems of those developers who work with it daily has the highest priority.

I would say that we have pretty different systems. Running (for example) on an old dual-core Athlon64-based machine vs. (for example) a 20+ core Xeon machine requires a different set of optimizations in order to get any noticeable difference.

 

And again, compression is not the main bottleneck, at least for me, but it may be for other people.

 

12 hours ago, Jens Bauer said:

Does that mean that if I set up my system to use btrfs, it would affect Armbian builds in a negative way?

No, it means that if we required btrfs in order to build Armbian (i.e. the build script would refuse to work on ext4), it would affect Armbian build system users in a negative way.

Posted

I have a VM running on half the resources of a 6-core i7-6800K (32 GB RAM, hardware virtualization enabled) and another on a dedicated machine (i7-3770 and 16 GB RAM). Both are roughly equivalent in performance, and both use 7200 RPM spinning disk drives. I'd jump to an SSD if I wanted to speed things along; as it stands, I'm at 8 minutes for the kernel/u-boot after the initial build.

Posted
12 hours ago, Jens Bauer said:

A high compression ratio is a good thing when Igor only has a 10 Mbit line.


I found out that I can upgrade to 300/50 for no extra money or hardware change, and I did it a week ago. Upload is now 5x faster and could be upgraded further, but I think there is no need. The daily kernel recompilation is on the download/repository server by morning - recompilation starts at 00:01. Hopefully there should be fewer different kernels to build, so this setup should last. Only server memory is getting low at the moment since we keep getting new sources - I tend to cache them to gain some speed and save the SSD. Build utilisation is low during .deb packing and EXTRA package building, as Zador already explained ...

Posted
On 9/17/2017 at 12:43 PM, Igor said:


I found out that I can upgrade to 300/50 for no extra money or hardware change, and I did it a week ago.

That's great. :) - now you have a connection faster than mine. ;)

 

On 9/17/2017 at 12:43 PM, Igor said:

Hopefully there should be fewer different kernels to build

That should help quite a bit. 

 

 

On 9/17/2017 at 12:43 PM, Igor said:

Only server memory is getting low at the moment since we keep getting new sources - I tend to cache them to gain some speed and save the SSD.

That probably means the best way I can contribute is to send a donation (which I've scheduled for the beginning of next month - I wish I could send more, but a small donation can help too).

Is it possible for you to add RAM to your build system - if so, what's the price of the largest module size that can be added?

Posted
5 hours ago, Jens Bauer said:

Is it possible for you to add RAM to your build system - if so, what's the price of the largest module size that can be added?


I can put in another 64GB ... registered ECC, around 200 EUR per 16GB module * 4 = 800 EUR, or double that if I go for a 128GB upgrade with 32GB modules. Not sure if that is supported by the board.

 

The board is one of the smallest ATX-sized dual-CPU boards, with just 8 memory slots.

(photo of the dual-CPU server board attached)

Posted
On 9/19/2017 at 7:27 AM, Igor said:


I can put in another 64GB ... registered ECC, around 200 EUR per 16GB module * 4 = 800 EUR, or double that if I go for a 128GB upgrade with 32GB modules. Not sure if that is supported by the board.

I don't think I'll be able to donate 200 Euro this month, but I'll send an initial donation, so you at least can have an extra cup of coffee or tea. ;)

In around a month I might be able to send a little more, though; in total it should cover one 16GB module. Hopefully a few others will see this and add to the pool.

 

On 9/19/2017 at 7:27 AM, Igor said:

The board is one of the smallest ATX-sized dual-CPU boards, with just 8 memory slots.

Looks like a fairly good purchase. :)

 

Posted

Thank you.

 

I just found out it's a bad time for upgrading memory - the current price is > 250€ :( so my hardware dealer arranged an alternative for me which is "only" 200€. Different brand, and the only technical difference is single vs. dual rank. I will see next week if this works.

 

A year ago the price for this memory was just a little more than 100€, and I was waiting for a better price to upgrade :D

 

Just now, Jens Bauer said:

Hopefully a few others will see this and add to the pool.


Me too.

Posted
9 hours ago, Igor said:

Thank you.

 

I just found out it's a bad time for upgrading memory - the current price is > 250€ :( so my hardware dealer arranged an alternative for me which is "only" 200€. Different brand, and the only technical difference is single vs. dual rank. I will see next week if this works.

Personally I prefer to purchase something that I can use in a later machine if possible - but this all depends on how you look at it. If the board is going to be busy doing a lot of work, then some cheaper but not-so-capable modules might be a better overall purchase, but I'll leave that up to you to decide. ;)

 

9 hours ago, Igor said:

 

A year ago the price for this memory was just a little more than 100€, and I was waiting for a better price to upgrade :D

 

I've been there a couple of times. It's even worse when a product vanishes completely.

E.g. you don't find 2.5" PATA drives everywhere anymore (they're still on eBay).

You can hardly buy them from stores in Denmark anymore (well, you can, but they cost 3 times as much as a SATA drive and there is only one model to choose from).

Posted
On 22. 9. 2017 at 6:28 PM, Jens Bauer said:

then some cheaper but not-so-capable modules might be a better overall purchase


I got the modules today but the server does not like them :angry: That means the upgrade cost jumped to 1k EUR. Buy cheap, buy twice :P

Posted
On 9/25/2017 at 11:59 AM, Igor said:


I got the modules today but the server does not like them :angry: That means the upgrade cost jumped to 1k EUR. Buy cheap, buy twice :P

That's terrible. :(

Is it possible to return them?

If they can't be used in another board / machine, try selling them.

(Personally, I'll be purchasing some old 16GB DDR3/133MHz/ECC RAM modules later, but it'll have to wait till after I've sent the donation(s).)

Posted
Just now, Jens Bauer said:

Is it possible to return them?


I already suspected there was a slim chance that they would work ... Since those were the only ones in stock with an option to return them, I said "why not". After rebooting the server, one of the ethernet ports remained dead :) Strange. Well. I have absolutely no other option than exactly these: https://www.heise.de/preisvergleich/crucial-dimm-16gb-ct16g4rfs424a-a1345631.html Nothing else works. I have tried 3 different ones before.

Posted
4 hours ago, Igor said:

I already suspected there was a slim chance that they would work ... Since those were the only ones in stock with an option to return them, I said "why not". After rebooting the server, one of the ethernet ports remained dead :) Strange. Well. I have absolutely no other option than exactly these: https://www.heise.de/preisvergleich/crucial-dimm-16gb-ct16g4rfs424a-a1345631.html Nothing else works. I have tried 3 different ones before.

I wonder if ESD caused the ethernet port to malfunction; it's not very likely that it's the RAM (unless it caused data corruption, so the configuration got messed up).

Anyway, you certainly need more donations!

(I just noticed that two torrents seemed to be unavailable; the downloads started, but they stayed at 0%. Both are Armbian_5.33_Nanopim3_Ubuntu_xenial_dev_4.11.12; one desktop, one server -- update: they've started and are downloading now.)

Posted
4 hours ago, Igor said:


I already suspected there was a slim chance that they would work ... Since those were the only ones in stock with an option to return them, I said "why not". After rebooting the server, one of the ethernet ports remained dead :) Strange. Well. I have absolutely no other option than exactly these: https://www.heise.de/preisvergleich/crucial-dimm-16gb-ct16g4rfs424a-a1345631.html Nothing else works. I have tried 3 different ones before.

I think I found them on eBay (in the UK): http://www.ebay.com/itm/401409656790

I also searched the best computer store in Denmark, so if everything else fails, there might be a chance: https://www.computersalg.dk/l/0/s?sq=CT16G4RFS424A

Posted

- Build farm memory successfully upgraded to 128GB (2x32GB, leaving space for two more modules). This saves some 30 minutes every day and reduces SSD wear.

- The main download server was also moved to a gigabit line this weekend. The speed increase is a very nice addition, but the problem is that we are limited to 10TB/month in this package. The first day's traffic - RX bytes: 12556064698 (12.5 GB), TX bytes: 309580270956 (309.5 GB) - tells everything :)

 

Infrastructure costs just went up and will go up again next month, since there is no way we can stay within such limits.

Posted

@Igor  I could possibly arrange / donate a couple of regional mirrors for a 12-month period and see how it works out. We have UK, US and German CDNs with a lot of spare capacity. We could probably provide some build resources too if you have a simple way to distribute your build jobs. We have a lot of spare capacity in our Azure contracts during UK / EU evenings, with locally attached PCI-E SSDs for scratch build space. I'm quite happy to support an interesting project that my own company may possibly benefit from in the future.

 

Also worth looking at Cloudflare's free tier. It could save a lot of bandwidth in terms of image and package downloads. It reduced our image-update distribution bandwidth requirements by 65-70% or so.

Posted

Thank you! Additional mirrors should help. Technical requirements: ~500GB of space, rsync, IPv6 ... What else? I have little to no experience with setting up automated mirrors ... We can divert/balance downloads directly on the download page, so we don't need any advanced scheme? Package repository mirroring - manual or automatic? I would add apt.uk.armbian.com, dl.uk.armbian.com, ...
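
(As far as I understand it, a mirror boils down to a cron'd rsync pull plus a web server in front - a rough sketch, assuming the master exposes an rsync module; the module name and local path below are just placeholders:)

# hypothetical hourly mirror sync - module name and local path are placeholders
rsync -a --delete --partial rsync://dl.armbian.com/dl/ /var/www/dl.uk.armbian.com/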

Posted

We could switch to torrent-based image distribution (and leave direct downloads, with throttled speed, as a fallback) if the hosting providers don't have any objections to torrent traffic. If we have multiple servers, we could simply use them to seed the torrents.
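
(For example, each seed box could just run transmission-daemon and have the release torrents pushed to it - a sketch, with credentials, file name and path as placeholders:)

# add a release torrent to a seed box running transmission-daemon - placeholders throughout
transmission-remote -n 'user:password' -a Armbian_5.33_Example.img.torrent -w /srv/seeds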

Packages will mostly be downloaded when we release stable images, so they shouldn't generate much traffic compared to the images.

Posted
12 hours ago, botfap said:

Also worth looking at Cloudflare's free tier. It could save a lot of bandwidth in terms of image and package downloads. It reduced our image-update distribution bandwidth requirements by 65-70% or so.

Igor said some time ago that it's too much already for the free tier.

Posted

@Igor  OK, instead of integrating into our CDN, which could be a pain without an infrastructure plan, I will keep it simple and provision a UK server for 12 months with 100GB system / 500GB data storage and a 2TB/month data allowance (which can be upped if needed). I'll stick a load balancer / cache in front of it for the public access mechanism and security, and it can host as many repos / domains as you like.

 

Do you have a preference for OS / services, or don't you care? Ubuntu 16.04 is our normal platform, but I can provide licences for RHEL 7 or Windows Server 2016 if needed.

I presume you will want SSL-enabled HTTP repos. If you, or whoever helps you, want to pull my personal email from the forum registration or PM me on the forum with any specifics you need, I can deploy, reply with access details etc. and set up a plan to clone the primary repo and test. I will also give you the public IP(s) of the load balancer so that the *.uk.armbian DNS records can be set up.

Posted

@zador.blood.stained  I do like the idea of torrent-based distribution, but in practice most users ignore it and go for the "backup" option of direct download anyway. We tested this for a client in 2015 and the number of users opting for the torrent download was less than 1 in every 1500 downloads - just not worth the effort. Even when they successfully completed a torrent download, pretty much everyone stopped seeding immediately anyway. Attitudes might be different in 2017, but lots of networks that didn't block torrents in 2015 now do.

 

If you have more recent or different experience, I'd be happy to set up the infrastructure and have a crack at it, especially as the Armbian user base is generally a lot more capable and informed than Joe Public. It might be a different result.

 

 

Posted

Hi botfap,

 

Great news - thank you very much.

We had a long discussion 3-5 months ago about the offer, the layout and the torrent option. I am totally in favor of torrents and will always prefer them over direct downloads.

A big advantage of torrents is that the client automatically verifies, via the SHA hashes, that the downloaded file is complete.

 

You are correct that my transmission box is not always online, but whenever it is, it will share what I downloaded before. I noticed that Ubuntu MATE and Solus OS use a different service for distribution than Armbian. Their distribution service seems to work better - I guess these are different providers?

 

If you don't have the torrent client 'transmission', get it from https://transmissionbt.com/ - it's available for almost every platform, and within Armbian you can install it via 'Softy'.


Discussions:

Info transmission package

https://forum.armbian.com/index.php?/topic/4683-info-transmission-package/

 

Seed our torrents

https://forum.armbian.com/index.php?/topic/4198-seed-our-torrents/

 

Posted (edited)
1 hour ago, botfap said:

I do like the idea of torrent-based distribution, but in practice most users ignore it and go for the "backup" option of direct download anyway.

Well, if you have a torrent client already installed I don't see why you would want to find other download options.

 

1 hour ago, botfap said:

Even when they successfully completed a torrent download, pretty much everyone stopped seeding immediately anyway.

I was talking about the torrent option also as a way of distributing the load across different servers. It's relatively easy to add or remove new seed servers compared to setting up a distributed HTTP(s) based solution.

Edited by zador.blood.stained
Removed the 3rd part of the message
Posted
2 minutes ago, zador.blood.stained said:

Well, if you have a torrent client already installed I don't see why you would want to find other download options.

I do, you do, but most of my clients don't. I was thinking from a commercial end-user perspective, which maybe doesn't apply here.

 

7 minutes ago, zador.blood.stained said:

I was talking about the torrent option also as a way of distributing the load across different servers. It's relatively easy to add or remove new seed servers compared to setting up a distributed HTTP(s) based solution.

Sorry, I misunderstood - I thought you meant using the Armbian users / downloaders to provide additional bandwidth. It makes much more sense now, especially when you can go to the likes of Vultr etc. and get bandwidth at 3 EUR per TB for just the hours you need. It would be fairly straightforward to spin up extra nodes / bandwidth very cheaply this way at release time and pay for just a few days a month or peak hours.

 

And yes, many public networks in the UK block torrents. This isn't specific to the UK; I travel quite a lot for work, mainly to Germany, Italy, Spain, Austria and the US, and find that public networks all over are blocking torrents more and more often. Obviously it's easy to bypass with a VPN, but torrent traffic is not welcome on many networks across Europe and the US.

This topic is now closed to further replies.