CUBIETRUCK: Best backup strategy for the system (rootfs) on SD card & SSD?


martin-by


Hello,

 

now my system is running well on my Cubietruck:

(ARMBIAN Ubuntu 16.04.1 LTS 4.8.1-sunxi)

  • Booting from SD card
  • rootfs on SSD (installed via nand-sata-install script)

While an image backup of the SD card is easy, how do I back up the rootfs on the SSD for a system-fallback scenario?

 

 

Is there any good script to back up the rootfs on the SSD, with a shrinking function to keep the backup files small? Or should I:

  • simply tar "/" of the SSD (a rough sketch of this is below)?
  • use the Webmin backup functionality (and how would I store that backup on a network drive outside my Cubietruck)?
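
For the tar variant, this is roughly what I have in mind (just a rough sketch; the mount point /mnt/backup and the exclude list are my own assumptions, nothing tested yet):

#!/bin/bash
# Rough sketch: tar the SSD rootfs ("/") to an already mounted network share.
# /mnt/backup is only an example mount point for a CIFS/NFS share.

BACKUPFILE="/mnt/backup/rootfs_$(date +%Y%m%d).tar.gz"

# --one-file-system keeps tar on the rootfs; pseudo filesystems and the
# backup target itself are excluded so the archive stays small.
tar --one-file-system -czpf "$BACKUPFILE" \
    --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run \
    --exclude=/tmp --exclude=/mnt --exclude=/media \
    /
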

Best Regards,

 

Martin

 


Hi Martin,

 

I wouldn't say the following is the "best" backup strategy, but I've been using it for quite some time and it hasn't failed me (yet).

 

I'm using the following script to create a dump of my NAND. You will probably need to adapt it, but it should serve as a good starting point. Note that the script below is prepared for a second device and for compressing the backup, but both are disabled in this version. It should be easy enough to enable them if required:

#!/bin/bash

NETWORKPATH="//server/share" # some CIFS share
NETWORKUSER="username"
NETWORKPASS="password"
MOUNTPATH="/mnt/backup"

# Setting up directories
SUBDIR=cubie_backups
DIR=$MOUNTPATH/$SUBDIR

echo "Starting backup process!"

# First check if pv package is installed, if not, install it first
PACKAGESTATUS=$(dpkg -s pv | grep Status);

if [[ $PACKAGESTATUS == S* ]]
	then
		echo "Package 'pv' is installed."
	else
		echo "Package 'pv' is NOT installed."
		echo "Installing package 'pv'. Please wait..."
		apt-get -y install pv
fi

# Mount network share
mount -t cifs $NETWORKPATH $MOUNTPATH -o user=$NETWORKUSER,password=$NETWORKPASS
if [ $? != 0 ]; then
	echo "Could not mount network share";
	exit 1;
fi

# Check if backup directory exists
if [ ! -d "$DIR" ];
	then
		echo "Backup directory $DIR doesn't exist, creating it now!"
		mkdir $DIR
fi

# Create a filename with datestamp for our current backup (without .img suffix)
OFILE="$DIR/backup_$(date +%Y%m%d_%H%M%S)"
OFILE2="$DIR/backup_mmc_$(date +%Y%m%d_%H%M%S)"

# Create final filename, with suffix
OFILEFINAL=$OFILE.img
OFILEFINAL2=$OFILE2.img

# First sync disks
sync; sync

# Shut down some services before starting backup process
#echo "Stopping some services before backup."
#service apache2 stop
#service mysql stop
#service cron stop

# Begin the backup process; dumping the 8 GB NAND to the fileserver takes roughly an hour
echo "Backing up internal NAND to fileserver"
echo "This will take some time depending on the NAND size and read performance. Please wait..."

SDSIZE=$(blockdev --getsize64 /dev/nand);
pv -tpreba /dev/nand -s $SDSIZE | dd of=$OFILE bs=1M conv=sync,noerror iflag=fullblock

# Wait for DD to finish and catch result
RESULT=$?

#SDSIZE2=$(blockdev --getsize64 /dev/mmcblk0);
#pv -tpreba /dev/mmcblk0 -s $SDSIZE2 | dd of=$OFILE2 bs=1M conv=sync,noerror iflag=fullblock

# Wait for DD to finish and catch result
#RESULT=$(( $? | $RESULT ))

# Start services again that where shutdown before backup process
#echo "Start the stopped services again."
#service apache2 start
#service mysql start
#service cron start

# If command has completed successfully, delete previous backups and exit
if [ $RESULT = 0 ];
	then
		echo "Successful backup."
		#echo "Successful backup, previous backup files will be deleted."
		#rm -f $DIR/backup_*.tar.gz
		mv $OFILE $OFILEFINAL
		#mv $OFILE2 $OFILEFINAL2
		#echo "Backup is being tarred. Please wait..."
		#tar zcf $OFILEFINAL.tar.gz $OFILEFINAL
		#rm -rf $OFILEFINAL
		echo "Backup process completed! FILE: $OFILEFINAL.tar.gz"
		umount $MOUNTPATH
		exit 0
# Else remove attempted backup file
	else
		echo "Backup failed! Previous backup files untouched."
		echo "Please check there is sufficient space on the HDD."
		rm -f $OFILE
		#rm -f $OFILE2
		echo "Backup process failed!"
		umount $MOUNTPATH
		exit 1
fi

I can't take all the credit for the above, since I started from someone else's backup script; I can't recall whose it was, though.
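
For completeness, restoring such a dump is basically the reverse direction. A minimal sketch (the share, file name and paths are only examples and need to be adapted; you should be booted from the SD card, not from NAND, when writing the image back):

# Mount the share that holds the backup (example credentials/paths)
mount -t cifs //server/share /mnt/backup -o user=username,password=password

# Write the image back to NAND -- double-check the target device first!
dd if=/mnt/backup/backup_20161001_120000.img of=/dev/nand bs=1M
sync

umount /mnt/backup
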

 

Greetings

Hanjo


This topic is now closed to further replies.