When testing the new, natively shipped ZFS on Linux support in Ubuntu 16.04 while following some older tutorials, you might get stuck on boot at the following point:
a start job is running for import zfs pools by cache file
The reason for this is the parallelized boot sequence of systemd. ZFS on Linux has its own configuration in which you can decide whether ZFS pools are mounted on boot, and that is exactly what kicks in here. Fixing this is easy once you know how...
- Boot up a rescue system, a live image, or a kernel config with upstart
Now you can at least edit config files in the system. Luckily the default grub config for xenial offers options to boot the same kernel either with systemd or upstart. What you should do is disable the "zfs-mount.target" target of systemd by typing
# systemctl disable zfs-mount.target
as root. Alternatively, you can remove the corresponding symlink manually.
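If you prefer the manual way, a sketch of how to locate and remove that symlink could look like this (the exact path under /etc/systemd/system can differ between releases, so use whatever find actually reports):

# find /etc/systemd/system -name '*zfs*'
# rm <symlink reported by find>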
- Reboot into the default kernel
If the above problem applied to you, you should be able to boot up normally now. If you are close to a heart attack because "zpool status" shows no known pools, don't worry: since the zfs kernel module is not loaded right now, no auto-import of the pool has happened either, but the pool is of course still there. We shall complete our task first.
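If you want to convince yourself, a quick check might look like this (a sketch: lsmod shows whether the zfs module is loaded, and zpool status simply reports no pools as long as it is not):

# lsmod | grep zfs
# zpool status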
- Adjust config files
First, re-enable the zfs-mount systemd target by typing
# systemctl enable zfs-mount.target
After that, have a look at /etc/default/zfs. You should disable ZFS_MOUNT and ZFS_SHARE there. Even if you do want mounting and sharing, disable them in this file, because it uses its own mechanics to get the mounts done. Leave them disabled here and let systemd handle the mount.
The properly edited config file should look like this in the end:
server:~# cat /etc/default/zfs
# ZoL userland configuration.

# Wait this many seconds during system start for pool member devices to appear
# before attempting import and starting mountall.
ZFS_AUTOIMPORT_TIMEOUT='30'

ZFS_MOUNT='no'

# Run `zfs share -a` during system start?
# nb: The shareiscsi, sharenfs, and sharesmb dataset properties.
ZFS_SHARE='no'

# Run `zfs unshare -a` during system stop?
ZFS_UNSHARE='no'

# Build kernel modules with the --enable-debug switch?
ZFS_DKMS_ENABLE_DEBUG='no'

# Build kernel modules with the --enable-debug-dmu-tx switch?
ZFS_DKMS_ENABLE_DEBUG_DMU_TX='no'

# Keep debugging symbols in kernel modules?
ZFS_DKMS_DISABLE_STRIP='no'
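Before rebooting, it does not hurt to double-check that the systemd target is enabled again; a quick sanity check might be (using the unit name from above):

# systemctl is-enabled zfs-mount.target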
- Reboot
Congratulations, everything should be fine now, and the pool with its vdevs and zvols should have been auto-imported and mounted. If not (e.g. because you plugged in a USB device and the device numbering has changed), try zpool import <poolname>. As far as I have observed, the pool does not auto-import when devices (or their numbering) have changed, so remove your USB devices, reboot, do a zpool import and reboot again to see if it gets imported this time.
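As a hint, running zpool import without any arguments only lists the pools that are available for import, so you can check the name before actually importing (tank is just an example name here):

# zpool import
# zpool import tank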
Sometimes the zpool is still not mounted on boot. If you encounter this problem as well, you can add something like this
sleep 10; zpool import tank
before the "exit 0" line in your /etc/rc.local.
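For reference, a minimal /etc/rc.local with that workaround could look roughly like this (the pool name tank and the 10 second delay are just examples, adjust them to your setup):

#!/bin/sh -e
#
# rc.local - executed at the end of the multiuser boot.
# Give slow devices a moment to settle, then import the pool manually.
sleep 10; zpool import tank

exit 0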
A last tip in the end: if your pool looks degraded after importing it, do not try fiddling with removing and re-adding your devices. Better consider an immediate reboot and see if the pool is healthy again. This problem seems to occur only when you have added the devices by their path like sdc or /dev/sdc instead of by their UUID. You cannot be entirely sure in which order the Linux kernel initializes the devices, so maybe it has just swapped devices, or a device came up with some delay. A reboot helps in most of these cases, because unlike in other RAID systems, only the few bits that have changed until then have to be re-synchronized ("resilvered") instead of re-syncing the whole device. But when you have manually removed a disk, it also has to be resilvered completely.
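To quickly check after the reboot whether the pool has become healthy again, and to see which stable device names exist for future reference, something like this can help (zpool status -x only prints pools that have problems):

# zpool status -x
# ls -l /dev/disk/by-id/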
-zeus