etrash2000 Posted January 21, 2016 (edited) Hi all, I bought a couple of Banana Pis in order to set up (a NAS +) local backup + remote backup. I'm in the phase of deciding on a distribution. What are the main differences between Armbian and Bananian? Edited January 29, 2016 by wildcat_paris typo fix
Tido Posted January 21, 2016 There are several Bananas available - can you be more specific about which one you have bought?
etrash2000 Posted January 21, 2016 Author It is the M1. http://www.armbian.com/banana-pi/
Tido Posted January 21, 2016 Good choice within the Banana family. Both are based on Debian - for troubleshooting and help I recommend Armbian. Did you already read this: http://www.armbian.com/documentation/
etrash2000 Posted January 21, 2016 Author Yes, I have seen that page. I also know that both of them use Debian repositories, so I assume the difference is in how the kernel is built. I did a very rudimentary test run with both Bananian and Armbian. The throughput with SFTP was similar for both at 10 MB/s over WiFi, mainly limited by the CPU on the board. This is fine with me, since my internet upload to the remote site is 10 Mbit/s. My goal is to build a long-term backup solution for my photos, home videos and some documents, totalling about 300-400 GB. I don't want to touch the system after the first launch (with the exception of updates). My hope is that Armbian, which supports more platforms, will be more stable and secure than Bananian, as it could attract more users. Btw, I'm on Jessie, vanilla image.
tkaiser Posted January 22, 2016 In my eyes it's relatively easy (disclaimer: I used/contributed to Bananian first but switched very quickly to Armbian over a year ago). Simply count
* boards supported
* OS 'variants'
* commits made
* amount of available documentation
* active contributors
and decide then. Nico (the individual behind Bananian) is focused on one secure headless distribution (keep that in mind if you use insufficient tools like SFTP, since 'performance' might be influenced massively by the SSH ciphers negotiated between 'client' and 'server'!) but doesn't spend that much time on further development in the meantime. I wouldn't say Bananian is a bad choice, but the 'speed' of development already scares me a bit. As an example: Bananian uses an ugly hack (provided by me) to read out the SoC temperature. In the meantime better alternatives exist and I made suggestions to Nico to improve this. But no interest - they still just try to maintain these ugly hacks. Unfortunately this is true for other areas as well. I believe we'll see a lot of improvements in and around Armbian this year, therefore the choice is easy.
etrash2000 Posted January 22, 2016 Author Thanks tkaiser, I was hoping you would respond, since I read most of your comments on the Bananian forum and I knew you had switched. The reply is as I expected (which is why I posted the question here instead of at the Bananian forum), but I would like to have it confirmed by someone who has used both systems before I commit any work to this small project. Concerning the layout of the project, I have not decided on any specific protocol yet. I will post about my ideas later, when I have tried a few things and have more time. In the meantime, if any of you have ideas or experience with backups, feel free to keep this thread alive (if the forum rules allow it). The things I want are:
* Local daily backup of my laptop. Revision control and checksum control are required. Monthly checksums will go into a local database.
* Local offline backup to an external USB disk; I will probably commit this manually 2-4 times a year.
* Remote backup, daily or weekly. Probably with rsync to a local server. Checksum control here as well.
* E-mail daily progress reports from both local and remote servers, preferably encrypted.
* Move the rootfs to RAM in order to avoid the SD card once the system is up and running.
* I have not decided on how to obtain my local/remote IP address; probably DynDNS, but e-mails could be an option.
Background info: I have about 300 GB of photos and home videos which I do not want to lose. Those damn cameras just keep producing bigger and bigger image sizes. Thanks again.
tkaiser Posted January 22, 2016 What you're trying to achieve is possible with both Bananian and Armbian. I would still prefer the latter, since development happens faster and it's more flexible (think about when you want to switch to a better platform like Marvell Armada 38x to be used as a NAS). Regarding backup stuff, I would always rely on btrfs these days and have already outlined some of the many advantages you get: http://linux-sunxi.org/Sunxi_devices_as_NAS#New_opportunities_with_mainline_kernel Simply forget about rsync and checksumming above the filesystem layer. Let the FS do all the work. It's 2016 and not 2006 any more.
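As a hedged illustration of the snapshot idea behind that link (the mount point, subvolume names and naming scheme below are made up, and the commands only work as root on an actual btrfs filesystem):

```shell
# Assumption: /mnt/data is a btrfs mount point; all paths are examples only.
# Keep the data in its own subvolume so it can be snapshotted individually:
btrfs subvolume create /mnt/data/photos

# Take a cheap copy-on-write, read-only snapshot (e.g. nightly from cron):
btrfs subvolume snapshot -r /mnt/data/photos \
    "/mnt/data/.snapshots/photos-$(date +%Y%m%d)"

# After an accident (deletion, ransomware, zeroed files), restore instantly
# by creating a writable snapshot of a known-good state:
btrfs subvolume snapshot /mnt/data/.snapshots/photos-20160122 \
    /mnt/data/photos-restored
```

Snapshots share unchanged blocks with the live subvolume, so keeping dozens of them costs little space.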
zador.blood.stained Posted January 22, 2016 Simply forget about rsync and checksumming above the filesystem layer. Let the FS do all the work. It's 2016 and not 2006 any more Let's not forget about rsync just yet. 1. It's still great for client-server incremental backups (Linux->Linux), while for more complex setups something like syncthing would be better (although syncthing is very heavy on CPU usage, which will limit transfer speed to 2-3 MB/s on, for example, A20-based devices). 2. @etrash2000 mentioned a remote server and DynDNS, so for backups over the Internet, checksumming at the application level won't be totally redundant - some ISPs' infrastructure may still be from 2006. @tkaiser, you don't like downloading OS images without checksum files, do you?
tkaiser Posted January 22, 2016 If we're talking about Linux -> Linux, then choosing btrfs on both sides would be the best solution, since you can then create consistent syncs (impossible with rsync!) in less time and more safely. Just create a btrfs snapshot on the client device (and flush open databases before) and send the snapshot to the remote server using 'btrfs send/receive'. I don't know whether btrfs' FAR format (to transport the snapshot) uses the source's checksums and compares them with the destination's in the meantime, but that's something a subsequent rsync (--dry-)run could provide. We have been doing syncing with a similar approach (relying on Solaris and ZFS, but also using snapshots and 'zfs send/receive') through VPNs for quite some time and have never experienced any problems, since the transmitted data is also checked/retransmitted at the TCP/IP layer. Well, I know that saying 'forget about well-known mechanisms and become an early adopter' sounds somewhat weird when it's about backup. But if no one starts implementing/testing new methods, things can't improve and the word won't be spread (that far better alternatives to the well-known tools exist in the meantime).
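The send/receive workflow described above could be sketched like this (host name and paths are placeholders; it requires btrfs on both ends, root rights, and read-only snapshots, so it won't run outside such a setup):

```shell
# Assumptions: /mnt/data/photos is a btrfs subvolume on the client,
# /mnt/backup is a btrfs mount on the remote host; names are examples.

# 1. Create a read-only snapshot on the source (flush databases first):
btrfs subvolume snapshot -r /mnt/data/photos /mnt/data/photos@20160122

# 2. First transfer: stream the whole snapshot to the remote pool:
btrfs send /mnt/data/photos@20160122 | \
    ssh backup@remote.example "btrfs receive /mnt/backup"

# 3. Subsequent transfers: send only the delta against the previous
#    snapshot (-p = parent), which is what makes this faster than rsync:
btrfs subvolume snapshot -r /mnt/data/photos /mnt/data/photos@20160129
btrfs send -p /mnt/data/photos@20160122 /mnt/data/photos@20160129 | \
    ssh backup@remote.example "btrfs receive /mnt/backup"
```

The received snapshots are read-only on the remote side, which also gives the "archive" property discussed later in this thread.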
etrash2000 Posted January 22, 2016 Author It's 2016 and not 2006 any more This is true but unfortunately, my knowledge stops at 2006. I will try to read up and do a test run with btrfs to see if it is simple enough. I'm currently waiting on the hard drives, which are delayed by 2 weeks. Can the btrfs tools do the following:
* re-evaluate the checksum of a file on disk without rewriting the file;
* export the checksums to an external database in order to keep historical data, so I can know when a change/corruption occurred.
Rui Ribeiro Posted January 23, 2016 I do not want to sound rude, however I suspect tkaiser is being kind to Bananian. Within a few minutes of using Bananian I was horrified by it. Armbian seems to have much more work put into it. And the OpenWRT of SinoVoip - I tested it and it was even worse than Bananian. About btrfs, I've never used it... do you consider it production-ready now, tkaiser?
tkaiser Posted January 23, 2016 About btrfs, never used it... do you consider it production-ready now, tkaiser? You must use a mainline kernel, since nearly all the btrfs code lives in the kernel. Never ever try to use it with outdated kernels like 3.4.x or even 3.10 (we had +30 TB of data loss at a customer relying on a broken btrfs feature, and that was with 3.12 or 3.10). And then you have to read through the docs if you want to make use of the more advanced features. Can the btrfs tools do the following: * re-evaluate the checksum of a file on disk without rewriting the file; * export the checksums to an external database in order to keep historical data, so I can know when a change/corruption occurred. You run 'btrfs scrub' regularly and check the output. There's no need to export checksums and store them anywhere above the filesystem layer, since they are an integral part of the filesystem itself: https://btrfs.wiki.kernel.org/index.php/Manpage/btrfs-scrub http://marc.merlins.org/perso/btrfs/post_2014-03-19_Btrfs-Tips_-Btrfs-Scrub-and-Btrfs-Filesystem-Repair.html Important to note: when choosing Debian Wheezy you're a bit in trouble due to horribly outdated btrfs-progs packages (this package contains all the userland stuff for btrfs; Wheezy ships with a v0.19 or v0.17 from 1876 or even older ). So either go with Jessie, use backports, or build btrfs-progs from sources.
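For illustration, the scrub routine described above boils down to two commands (the mount point is an assumption, and scrub needs root on an actual btrfs filesystem, so this is a sketch rather than something to paste blindly):

```shell
# Assumption: /mnt/data is a btrfs mount; run as root, e.g. monthly from cron.
btrfs scrub start /mnt/data    # walks all data and metadata, verifying checksums
btrfs scrub status /mnt/data   # reports bytes scrubbed and any csum/read errors
# On redundant profiles (RAID1/10) bad copies are repaired from the good one;
# on a single device, corruption is at least detected and logged (dmesg/syslog),
# which answers the 'when did the corruption occur' question without any
# external checksum database.
```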
etrash2000 Posted January 23, 2016 Author I'm using Jessie. I don't think checksums in the FS can really do anything about deliberate changes or accidents/viruses/worms, so I need to track which version on my 4-5 HDDs is the correct one. I had an "accident" once where several files were still present but all the content was gone (0 B). From the FS point of view there was no problem, but obviously the photos were gone.
tkaiser Posted January 25, 2016 i dont think chksum in fs can really do anything about deliberate change by accident/virus/worm so i need to track which version on 4-5 hdds is correct. I still don't get it. Please read the link already provided carefully: http://linux-sunxi.org/Sunxi_devices_as_NAS#New_opportunities_with_mainline_kernel You're interested in this snapshot, backup and "send/receive" stuff. If set up correctly you get a really high level of protection. Just inform yourself and be open to new (not really new) concepts. With filesystems that make use of snapshots you don't have to fear any accident/virus/worm, since even if all your data got deleted you simply laugh and restore the latest snapshot instantly. But that's quite normal: we have customers that spend $50k on proprietary backup solutions and don't use the ZFS capabilities where they would get more for free (backup and a failover setup and disaster recovery with minimised downtimes). BTW: If you really care about your data I would choose different hardware (at least capable of using ECC RAM) and different concepts (archive -- data gets imported and isn't changeable by design afterwards).
Skygod Posted January 26, 2016 * I have not decided on how to obtain my local/remote IP address, probably DynDNS but emails could be an option. I registered a domain at GoDaddy and have a Python script that runs every ten minutes and updates my A record if my dynamic IP has changed. I can supply details if this is of interest.
etrash2000 Posted January 26, 2016 Author @tkaiser, I was actually planning to use btrfs as the underlying FS due to the error checking, even though I don't know most of its benefits yet. I will try to read up when I have time and develop the setup over the next few months. I might buy another board or use the old Raspberry Pi with an external USB disk to play around with. Concerning your comment about an archive, could you please explain it, as I don't think you mean TAR (which I never liked). Concerning snapshots, I'm using snapshots with my VirtualBox in order to revert the system to a known state. Admittedly it is a very nice feature, so I'd like to learn more about how it works on btrfs. @Skygod, it is a good idea worth checking. You are more than welcome to give the details and scripts.
Skygod Posted January 26, 2016 The Python script I have is:

```python
#!/usr/bin/env python3
# -*- coding: utf8 -*-
#
# Original Source:
# https://saschpe.wordpress.com/2013/11/12/godaddy-dyndns-for-the-poor/
# https://github.com/observerss/pygodaddy
#
# Modified by Jeremy Sears (https://stackoverflow.com/users/1240482/jsears)
import logging

import pif        # third-party: returns the current public IP
import pygodaddy  # third-party: GoDaddy DNS client

logging.basicConfig(filename='godaddy.log',
                    format='%(asctime)s %(message)s',
                    level=logging.INFO)

GODADDY_USERNAME = "xxxxxx"
GODADDY_PASSWORD = "xxxxxx"

client = pygodaddy.GoDaddyClient()
client.login(GODADDY_USERNAME, GODADDY_PASSWORD)
logging.debug("Started: before 'for'")

for domain in client.find_domains():
    dns_records = client.find_dns_records(domain)
    # Old IP..
    old_ip = list(dns_records)[0].value
    # 'New' IP!
    public_ip = pif.get_public_ip()
    # Some logging and printing
    logging.debug("Domain '{0}' DNS records: {1}".format(domain, old_ip))
    print("Domain '{0}' DNS records: {1}".format(domain, old_ip))
    if public_ip != old_ip:
        if client.update_dns_record(domain, public_ip):
            logging.info("Domain '{0}' public IP set to '{1}'".format(domain, public_ip))
            print("Domain '{0}' public IP set to '{1}'".format(domain, public_ip))
        else:
            pass  # What to do here? ..
    else:
        logging.info("No update needed.")
        print("No update needed.")
```

(Can't remember where I got it from though. Sorry about that) EDIT: Found the original author and updater - see the header comments in the script above. This updates the @ A record, and I have a CNAME record for the hostname directing to @.
Skygod Posted January 26, 2016 What I've been doing for remote storage is to connect to an nbd-server; I then mount the device attached by nbd-client, copy files to the filesystem, and disconnect the client. Not sure if this is a particularly good solution, but it's been working for me.
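A sketch of that nbd workflow (host name, export name and device node are placeholders, and exact nbd-client flags vary between versions, so treat this as illustrative only):

```shell
# Server side: /etc/nbd-server/config exports a file or partition, e.g.
#   [backup]
#   exportname = /srv/backup.img

# Client side (as root): attach the remote export, mount it, copy, detach.
modprobe nbd
nbd-client -N backup nas.example /dev/nbd0
mount /dev/nbd0 /mnt/remote
rsync -a ~/photos/ /mnt/remote/photos/
umount /mnt/remote
nbd-client -d /dev/nbd0
```

One caveat with this design: the filesystem on the exported device lives on the client side, so only one client should have it mounted at a time.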
tkaiser Posted January 26, 2016 Concerning your comment about an archive, could you please explain it, as I don't think you mean TAR (which I never liked). Nope, it's just a concept based on the purpose of an 'archive' (contents shouldn't be altered!): use the filesystem to protect the archived files. There's no need to expose everything world-writeable. Once you put files in an archive they're read-only by design. Combine this with regular snapshots, let these be pulled from a second host, and you're pretty safe. But not on insufficient hardware (ECC RAM is a prerequisite when you care about data integrity - it's that simple, but I refrain from wasting my time explaining that over and over again).
UnixOutlaw Posted November 8, 2016 I've got Armbian running on a Banana Pi - it cr@ps all over Bananian... I've got Armbian running on an OPi+2E - love it... I've got Armbian running on one of my Pine64+ boards (got DietPi on the other one)... As for "backups" - early on I was frustrated by the lack of Dropbox support (in the ARM world, but mostly for the NTC CHIP). Initially I had some rsync scripts - but it got a bit unwieldy... So now I use "unison" - I push stuff (not everything) from my Dropbox folder on an x86 Linux machine to an NTC CHIP accessible from the worldwide interwebs (and via DynDNS) - I can then use unison to sync the copies I have on ARM devices (running Linux) and even my ghastly Surface Pro 3 at work running the Microsoft Bash shell (the canonical thing you get from the Insiders Fast Track build). Mostly all I'm replicating is shell scripts and SSH keys... I don't think I'd use it for lots of large files (e.g. music)... However, I have noticed a few niggly things with Unison - the versions on either side need to be fairly close. E.g. I enabled Sid on my OPi+2E, and "apt-get dist-upgrade" replaced Unison 2.40 with 2.48 (and they're incompatible) - so I have to disable Sid, update, then apt-get install - but it's workable...
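For reference, a minimal unison invocation for this kind of two-way sync might look like this (host and paths are placeholders; unison must be installed on both ends, at closely matching versions as noted above):

```shell
# -batch: no interactive prompts (suitable for cron);
# -times: propagate modification times as well as contents.
# The ssh:// root runs unison on the remote side over SSH.
unison -batch -times ~/Dropbox ssh://chip.example//home/user/Dropbox
```

Unlike rsync, this is bidirectional: changes made on either side since the last run are propagated, and genuine conflicts are flagged rather than silently overwritten.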
sooperior Posted November 8, 2016 Just +1 for Armbian. I used Bananian for over a year and it had very few releases with very few new features. And if you have a look at their bug tracker right now, it looks dead. On the other hand, the guys at Armbian are doing a huge job - frequent releases with bugfixes (this is of paramount importance to me) and new features. My hardware is also a Banana Pi M1 and the decision for Armbian is absolutely clear.
zgoda_j Posted November 8, 2016 I always keep a Bananian image handy for quick debugging on the LeMaker BananaPro. Sometimes something does not work in Armbian but works OK in Bananian, so I can boot Bananian to find a working configuration. The one big difference at the OS level is that Bananian does not use systemd - this makes it a bit simpler, and some things like the usbmuxd daemon work better with the legacy kernel if launched directly by udevd rather than systemd-udevd. But it's a matter of preference and what fits your use case better. For some devices I'm working on Bananian fits better, for others Armbian is at an advantage.
arox Posted November 8, 2016 Well, I already use Armbian, Debian, Ubuntu, Raspbian and Gentoo (and dropped Crux, Bananian and many others in the past - including LFS)... I use multiple boards because distribution maintainers tend to make choices that prevent me from getting some parts working. I am not happy with that - having to work with a different package manager or init system on every node - but I don't know how to maintain my tools when maintainers decide to change important parts of the system (and choose unstable new ones) and drop support for old versions and configurations...
FrankM Posted May 7, 2017 If needed, Bananian has new kernel updates (3.4.113 & 4.4.66)
Igor Posted May 7, 2017 35 minutes ago, FrankM said: If needed, Bananian has got new kernel updates (3.4.113 & 4.4.66) https://github.com/armbian/build/issues/648