The 768MB arc_max was for a vmalloc of 1024MB, and I was using ZFS 0.7.12 when I wrote that; since then I have switched to 0.8 (with Thumb2 patches from GitHub). I have looked at this again, now with 0.8, and the situation has indeed changed. Watching the vmalloc usage of ZFS (SPL, actually) via /proc/vmallocinfo, it seems to hover around 100-150MB, with about 220MB being the highest I've seen, and the total address space occupied by all allocations has always stayed under 300MB, usually far below that. So it seems rather compact, with only a little fragmentation happening. ZFS appears to use less vmalloc than I assumed and more lowmem, so I will have to revise my settings, because they clearly don't make sense anymore.
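For reference, here is roughly how I total up the in-use vmalloc allocations. In /proc/vmallocinfo the second field is the allocation size in bytes (note it includes a guard page per allocation, so it slightly overstates usage). The real command needs root; the sketch below runs the same awk over a made-up two-line excerpt just to show the computation:

```shell
# On a real system (as root):
#   awk '{ sum += $2 } END { printf "%.1f MiB\n", sum/1048576 }' /proc/vmallocinfo
# Demonstrated on a fabricated excerpt:
awk '{ sum += $2 } END { printf "%.1f MiB\n", sum/1048576 }' <<'EOF'
0xf0900000-0xf0902000     8192 bpf_prog_alloc+0x3c/0xb8 pages=1 vmalloc
0xf0902000-0xf0a03000  1052672 spl_kmem_alloc+0x80/0x100 pages=256 vmalloc
EOF
```

To see only the SPL/ZFS share, you can grep for spl_ entries before summing.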
The problem with ZFS on the Helios4 is not so much the memory (2GB is plenty) but the limited address space, in particular the in-kernel virtual address space which ZFS uses via vmalloc (the Solaris way of doing it, not the Linux way). Through normal usage and the resulting address space fragmentation, that address space can get exhausted, and the system will simply lock up. This is not an issue on 64-bit systems, but on 32-bit systems with vanilla settings it happens rather quickly, so you want to configure things to fight it.

The absolute bare minimum is to increase the size of the kernel address space allocated to vmalloc via the kernel command line. With the default 1G/3G split between kernel and user space (CONFIG_VMSPLIT_3G) you can pass vmalloc=512M or even vmalloc=768M to the kernel. Even with this setting and nightly reboots I was experiencing the occasional lock-up. Better still, build your own kernel with CONFIG_VMSPLIT_2G (or even CONFIG_VMSPLIT_1G if you really want to dedicate the whole device to storage) to increase the kernel address space, at which point you can increase vmalloc further, to 50-75% of the kernel space. This is probably the setting that helps the most: my Helios4 has been running stable for a couple of months without scheduled reboots.

For configuring the zfs module I put the following into /etc/modprobe.d/zfs.conf:

options zfs zfs_arc_max=805306368
options zfs zfs_arc_meta_limit_percent=90
options zfs zfs_arc_meta_min=33554432
options zfs zfs_arc_min=134217728
options zfs zfs_dirty_data_max_percent=20
options zfs zfs_dirty_data_max_max_percent=30
options zfs zfs_dirty_data_sync=16777216
options zfs zfs_txg_timeout=2

I don't quite remember my rationale behind these, but you can read up on them here: https://github.com/zfsonlinux/zfs/wiki/ZFS-on-Linux-Module-Parameters . You probably also don't want to go crazy on the recsize of your datasets.
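The byte values in that zfs.conf are all powers-of-two MiB counts; a quick loop makes them readable (this is just arithmetic over the values above, nothing ZFS-specific):

```shell
# Convert the raw byte values from zfs.conf into MiB (1 MiB = 1048576 bytes)
for p in zfs_arc_max=805306368 zfs_arc_min=134217728 \
         zfs_arc_meta_min=33554432 zfs_dirty_data_sync=16777216; do
  name=${p%%=*}
  bytes=${p##*=}
  echo "$name = $((bytes / 1048576)) MiB"
done
```

So the ARC is capped at 768 MiB with a 128 MiB floor, 32 MiB of that reserved for metadata at minimum, and dirty data is synced out at 16 MiB. After loading the module you can confirm the live values under /sys/module/zfs/parameters/.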
Disclaimer: I'm not an expert, so take my explanation and the settings with a grain of salt.