
[TUTORIAL] First steps with Helios64 (ZFS install & config)


michabbs


Focal / root on eMMC / ZFS on hdd / LXD / Docker

 

I received my Helios64 yesterday, installed the system, and decided to write down my steps before I forget them. Maybe someone will be interested. :-)

 

Preparation:

  • Assemble your box as described here.
  • Download the Armbian Focal image from here and flash it to an SD card. You may use Etcher.
  • Insert the SD card into your Helios64 and boot.
  • After 15-20 s the box should be accessible via ssh. (Of course, you have to find out its IP address somehow: check your router logs, use this, or scan the network as shown below.)
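If the router logs do not help, a quick ping scan of your LAN usually finds the box. A minimal sketch, assuming your network is 192.168.1.0/24 (adjust to your subnet), run from another machine on the same LAN:

sudo apt install nmap
sudo nmap -sn 192.168.1.0/24   # ping scan; the Helios64 should show up as a new host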

 

First login:

ssh root@IP
Password: 1234

 

After the first login, change the root password and create your daily user. You should never log in as root again; just use sudo from now on. :-)
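If the first-boot wizard did not already do this for you, it can be done manually; a sketch, with "youruser" as a placeholder name:

passwd                       # change the root password
adduser youruser             # create your daily user
usermod -aG sudo youruser    # let the new user use sudo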

Note: The auto-generated user is a member of the "disk" group. I do not like that. You can remove it like this: "gpasswd -d user disk".

 

Now move your system to eMMC:

apt update
apt upgrade
armbian-config
# Go to: System -> Install -> Install to/update boot loader -> Install/Update the bootloader on SD/eMMC -> Boot from eMMC / system on eMMC

 

You can choose the root filesystem. I chose ext4. Possibly f2fs would be a better choice, but I have not tested it.

When finished, power off, eject the SD card, and power on.

Your system should now boot from eMMC.

 

If you want to change the network configuration (for example, to set a static IP), use "sudo nmtui".
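The same can be done non-interactively with nmcli. A sketch, assuming the connection is named "Wired connection 1" and you want 192.168.1.10/24 with the gateway at 192.168.1.1:

sudo nmcli con mod "Wired connection 1" ipv4.method manual ipv4.addresses 192.168.1.10/24 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1
sudo nmcli con up "Wired connection 1"   # re-activate the connection with the new settings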

You should also change the hostname:

sudo armbian-config
# Go to: Personal -> Hostname

 

 

ZFS on hard disk:

sudo armbian-config
# Go to Software and install headers.
sudo apt install zfs-dkms zfsutils-linux
# Optional:
sudo apt install zfs-auto-snapshot
# reboot
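After the reboot you can check that the ZFS module was built and loaded correctly:

zfs version        # should print the ZFS userland and kernel module versions
lsmod | grep zfs   # the zfs kernel module should be listed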

 

Prepare the necessary partitions, for example using fdisk or gdisk (a sketch is shown below).
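A minimal non-interactive sketch with sgdisk (part of the gdisk package), assuming the disk is /dev/sda and you want a single ZFS partition covering the whole disk; repeat for the second disk:

sudo sgdisk -n 1:0:0 -t 1:BF01 -c 1:"ZFS" /dev/sda   # one partition, type BF01, spanning the disk
sudo sgdisk -p /dev/sda                              # print the resulting partition table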

Create your zfs pool. More or less this way:

sudo zpool create -o ashift=12 -m /mypool mypool mirror /dev/disk/by-partuuid/abc123 /dev/disk/by-partuuid/xyz789

 

Reboot and make sure the pool is imported automatically. (For example by typing "zpool status".)

You should now have a working system with root on eMMC and a ZFS pool on the HDDs.

 

Docker with ZFS:

Prepare the filesystem:

sudo zfs create -o mountpoint=/var/lib/docker mypool/docker-root
sudo zfs create -o mountpoint=/var/lib/docker/volumes mypool/docker-volumes
sudo chmod 700 /var/lib/docker/volumes

# Option: If you use zfs-auto-snapshot, you might want to consider this:
sudo zfs set com.sun:auto-snapshot=false mypool/docker-root
sudo zfs set com.sun:auto-snapshot=true mypool/docker-volumes
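You can verify the datasets, their mountpoints, and the snapshot settings like this:

zfs list -r -o name,mountpoint,com.sun:auto-snapshot mypool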

 

Create /etc/docker/daemon.json with the following content:

{
  "storage-driver": "zfs"
}

 

Add /etc/apt/sources.list.d/docker.list with the following content:

deb [arch=arm64] https://download.docker.com/linux/ubuntu focal stable
# deb-src [arch=arm64] https://download.docker.com/linux/ubuntu focal stable

 

Install Docker:

sudo apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io

#You might want this:
sudo usermod -aG docker your-user

 

 

Voila! Your Docker should be ready! Test it: "docker run hello-world".
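You can also confirm that Docker picked up the ZFS storage driver:

docker info --format '{{.Driver}}'   # should print: zfs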

 

Option: Install Portainer:

sudo zfs create mypool/docker-volumes/portainer_data
# You may omit the above line if you do not want a separate dataset for the Docker volume (not recommended).

docker volume create portainer_data
docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce
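A quick check that the container is up:

docker ps --filter name=portainer   # should show the portainer container as Up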

 

Go to http://yourip:9000 and configure.

 

 

LXD with ZFS:

 

sudo zfs create -o mountpoint=none mypool/lxd-pool
sudo apt install lxd
sudo lxd init

# Configure ZFS this way:
Do you want to configure a new storage pool (yes/no) [default=yes]?  yes
Name of the new storage pool [default=default]: 
Name of the storage backend to use (dir, btrfs, ceph, lvm, zfs) [default=zfs]: zfs
Create a new ZFS pool (yes/no) [default=yes]? no
Name of the existing ZFS pool or dataset: mypool/lxd-pool
[...]

#You might want this:
sudo usermod -aG lxd your-user

# Option: If you use zfs-auto-snapshot, you might want to consider this:
sudo zfs set com.sun:auto-snapshot=false mypool/lxd-pool
sudo zfs set com.sun:auto-snapshot=true mypool/lxd-pool/containers
sudo zfs set com.sun:auto-snapshot=true mypool/lxd-pool/custom
sudo zfs set com.sun:auto-snapshot=true mypool/lxd-pool/virtual-machines

 

That's it. LXD should now work on ZFS. :-)
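As a quick smoke test, you can launch a throw-away container and check that its dataset lands in the pool; a sketch, assuming an Ubuntu 20.04 image and a container called "test":

lxc launch ubuntu:20.04 test
lxc list                       # the container should be RUNNING
zfs list -r mypool/lxd-pool    # its dataset should appear under the LXD pool
lxc delete --force test        # clean up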

 


Hello @michabbs,

Thanks for that wonderful tutorial. It makes me realize what a noob I am, haha.
I have a hard time figuring out what this command does and what partitions I would want to create beforehand:

Quote

Prepare necessary partitions - for example using fdisk or gdisk.

Create your zfs pool. More or less this way:


sudo zpool create -o ashift=12 -m /mypool mypool mirror /dev/disk/by-partuuid/abc123 /dev/disk/by-partuuid/xyz789

 


I'm creating my ZFS using only

zpool create tank raidz2 sda sdb ...

and then the datasets with

zfs create tank/<stuff>

I guess you are mirroring instead of creating a raidz. And also you are specifying the drives by id instead of path. What are the ups and downs of both methods?

Second question: I see you are not using the docker packages available by default. What is wrong with the original? Outdated?


13 hours ago, tionebrr said:

I have a hard time figuring out what this command does and what partition I would want to create beforehand:

Quote

Prepare necessary partitions - for example using fdisk or gdisk.

 

Well, generally it is a good idea to partition your disk before you use it. :-)

For example like this:

# gdisk -l /dev/sda
GPT fdisk (gdisk) version 1.0.5

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 3907029168 sectors, 1.8 TiB
Model: WDC XXX
Sector size (logical/physical): 512/4096 bytes
Disk identifier (GUID): XXX
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 3907029134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         1050623   512.0 MiB   EF00  EFI System
   2         1050624      3907029134   1.8 TiB     BF01  ZFS draalpool

 

I use only one partition (here: sda2) for ZFS and may use the other ones (here: sda1) for other things. It is usually a bad idea to use the whole disk for ZFS, because when you take the disk out and put it into another computer, it is not guaranteed that it will not try to "repair" the partition table (which you do not have...).

In the above example I created the EFI partition just in case I need to move this disk to another computer (at some unknown point in the future...) and do something with it. Most likely it will never be used for anything, but it is there as an option. :-)

Anyway, if you have already created ZFS on the whole disk, it should work just fine.

 

13 hours ago, tionebrr said:

I guess you are mirroring instead of creating a raidz.

Yes, it was just an example. There are plenty of ways to create a ZFS pool. Generally a 3-disk RAIDZ and a 2-disk mirror are the most common.

 

13 hours ago, tionebrr said:

And also you are specifying the drives by id instead of path. What are the ups and downs of both methods?

In ZFS you should never specify disks (or partitions...) by their /dev/sdXY path. The path is not guaranteed to be consistent between reboots; actually, it is guaranteed to be inconsistent when you remove or swap disks. Your pool might fail to import after a reboot if the disks have different paths assigned. Always use UUIDs!
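To see which stable identifiers are available for your disks:

ls -l /dev/disk/by-partuuid/   # partition UUIDs (used in the example above)
ls -l /dev/disk/by-id/         # ata-... and wwn-... names for whole disks and partitions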

 

14 hours ago, tionebrr said:

I see you are not using the docker packages available by default. What is wrong with the original? Outdated?

Possibly nothing is wrong with them. I always use the official Docker repository because it supports Ubuntu and provides the newest stable version.

 



Thanks a lot @michabbs. Great information there. I'm learning a lot.

I think I've found a simpler way to create the pool without having to copy/paste UUIDs.
You can create it using /dev/sdX, then export it and import it back while forcing zpool import to look in the by-partuuid folder.
One more thing: it looks like it can even be done such that the disks are identified by ata-MANUFACTURER_REF_SERIALNUMBER.
To do this I had to sudo rm /dev/disk/by-id/wwn-* so that zpool only finds ata-* devices when reimporting the pool.

I have made the transition several times without problems now; the only issue I stumbled upon is at export time. If some services are already using directories in the pool, you need to shut them down in order to export the pool cleanly.
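A sketch of that procedure, assuming the pool is called "tank" and nothing is holding it open:

sudo zpool export tank
sudo zpool import -d /dev/disk/by-id tank   # re-import using the stable by-id names
zpool status tank                           # devices should now be listed as ata-...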

There is a PR in the Kobol wiki.
Cheers


@tionebrr

Well... Yes, of course you may use by-id instead of by-partuuid. The key point is to use any "permanent" identifier instead of /dev/sdX. :-)

By the way: it never occurred to me to rm anything in /dev... It sounds dangerous.


Yeah, I agree... it looks scary. But those files are actually only symlinks; the true block devices are located at /dev/sdX#. I wonder what happens if you rm them... I guess they are just references, but would Linux actually let you remove one?
Edit: I tried, and yes, you can also rm the /dev/sdX entries. It doesn't look like it affects the ZFS pool at all. I'm doing a scrub before rebooting and will edit again later.
Edit edit: Wait, I didn't actually rm the ZFS partitions from /dev. I just removed sda.
Edit edit edit: Yup, I can still write to my pool even after doing rm /dev/sda*. So I don't really know. I guess those files are just handles created at boot time or when a block device gets enumerated. And everything is back after a reboot.


I don't know why there is not a way to do `zpool import -d /dev/disk/by-id/ata-*`. That would be perfect.
