[zfs-discuss] ZFS 0.7.4 not mounting on Ubuntu 16.04.3

Hetz Ben Hamo hetz at hetz.biz
Thu Dec 14 15:20:32 EST 2017


Then run: systemctl enable zfs-mount
and it should auto-mount on boot.
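[Archive note] A minimal sketch of the full sequence, assuming the cachefile-based import unit applies here (zfs-import-scan.service is the alternative; the unit names are taken from the tab-completion output quoted below):

```shell
# Load the module and import the pool once by hand
# (assumes zfs-import-cache.service; use zfs-import-scan.service
# if your system imports by scanning devices instead).
modprobe zfs
systemctl start zfs-import-cache.service
systemctl start zfs-mount.service

# Then make the import and mount persistent across reboots.
systemctl enable zfs-import-cache.service
systemctl enable zfs-mount.service
systemctl enable zfs.target
```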

Hetz

On Thu, Dec 14, 2017 at 10:18 PM, Nathan Fish via zfs-discuss <
zfs-discuss at list.zfsonlinux.org> wrote:

> zfs-import doesn't seem to exist here:
>
> root at mc-3015-202:~# systemctl start zfs-import
> Failed to start zfs-import.service: Unit zfs-import.service not found.
>
> But looking around with tab-complete:
> root at mc-3015-202:~# systemctl status zfs<TAB>
> zfs-import-cache.service  zfs-import-scan.service  zfs-import.target
> zfs-mount.service         zfs-share.service        zfs.target
> zfs-zed.service
>
> root at mc-3015-202:~# systemctl status zfs-mount.service
> ● zfs-mount.service - Mount ZFS filesystems
>    Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled;
> vendor preset: enabled)
>    Active: failed (Result: exit-code) since Thu 2017-12-14 14:01:19
> EST; 59min ago
>      Docs: man:zfs(8)
>  Main PID: 1045 (code=exited, status=1/FAILURE)
>
> Dec 14 14:01:19 mc-3015-202 systemd[1]: Starting Mount ZFS filesystems...
> Dec 14 14:01:19 mc-3015-202 zfs[1045]: The ZFS modules are not loaded.
> Dec 14 14:01:19 mc-3015-202 zfs[1045]: Try running '/sbin/modprobe
> zfs' as root to load them.
> Dec 14 14:01:19 mc-3015-202 systemd[1]: zfs-mount.service: Main
> process exited, code=exited, status=1/FAILURE
> Dec 14 14:01:19 mc-3015-202 systemd[1]: Failed to start Mount ZFS
> filesystems.
> Dec 14 14:01:19 mc-3015-202 systemd[1]: zfs-mount.service: Unit
> entered failed state.
> Dec 14 14:01:19 mc-3015-202 systemd[1]: zfs-mount.service: Failed with
> result 'exit-code'.
> root at mc-3015-202:~# systemctl start zfs-mount.service
> root at mc-3015-202:~# systemctl status zfs-mount.service
> ● zfs-mount.service - Mount ZFS filesystems
>    Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled;
> vendor preset: enabled)
>    Active: active (exited) since Thu 2017-12-14 15:01:16 EST; 1s ago
>      Docs: man:zfs(8)
>   Process: 8821 ExecStart=/sbin/zfs mount -a (code=exited,
> status=0/SUCCESS)
>  Main PID: 8821 (code=exited, status=0/SUCCESS)
>
> Dec 14 15:01:16 mc-3015-202 systemd[1]: Starting Mount ZFS filesystems...
> Dec 14 15:01:16 mc-3015-202 systemd[1]: Started Mount ZFS filesystems.
>
> The filesystems are mounted and Ceph is recovering.
>
> root at mc-3015-202:~# df -h | grep pool
> Filesystem               1K-blocks      Used   Available Use% Mounted on
> mc-3015-202-pool        27815707392       256  27815707136   1% /mc-3015-202-pool
> mc-3015-202-pool/osd-4   6442450944 648947840   5793503104  11% /var/lib/ceph/osd/ceph-4
> mc-3015-202-pool/osd-5   6442450944 581824640   5860626304  10% /var/lib/ceph/osd/ceph-5
> mc-3015-202-pool/osd-6   6442450944 663198720   5779252224  11% /var/lib/ceph/osd/ceph-6
> mc-3015-202-pool/osd-7   6442450944 424798080   6017652864   7% /var/lib/ceph/osd/ceph-7
>
> So thanks, that fixed it for now.  Do you know why the modules don't
> load on boot?  Some sort of systemd race condition?
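>
> [Archive note] A hedged sketch of how one might diagnose that suspected
> ordering problem (the symptom above is consistent with zfs-mount.service
> running before the module is loaded and the pool imported):

```shell
# Inspect the ordering and dependencies of zfs-mount.service.
systemctl cat zfs-mount.service        # look for After=/Requires= lines
systemctl list-dependencies --after zfs-mount.service

# Check whether an import unit is actually enabled; if neither is,
# zfs-mount can start with no pool imported and fail as shown above.
systemctl is-enabled zfs-import-cache.service zfs-import-scan.service

# Compare the two units' timing in the boot journal.
journalctl -b -u zfs-mount.service -u zfs-import-cache.service
```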
>
> On Thu, Dec 14, 2017 at 2:52 PM, Hetz Ben Hamo <hetz at hetz.biz> wrote:
> > Run: systemctl start zfs-import
> >
> > If this works, run: systemctl enable zfs-import
> > This should auto-mount the pool on reboot.
> >
> > Hetz
> >
> > On Dec 14, 2017 21:49, "Nathan Fish via zfs-discuss"
> > <zfs-discuss at list.zfsonlinux.org> wrote:
> >
> > After updating a storage server to ZFS 0.7.4 from the PPA and rebooting,
> > the ZFS filesystems do not mount.  'zfs list' and 'zpool status' appear fine.
> >
> > System information
> >
> > Type                 | Version/Name
> > Distribution Name    | Ubuntu
> > Distribution Version | 16.04.3
> > Linux Kernel         | 4.4.0-103-generic
> > Architecture         | x86_64
> > ZFS Version          | 0.7.4-0york1~16.04
> > SPL Version          | 0.7.4-0york1~16.04
> >
> > root at mc-3015-202:~# zfs list
> > NAME                     USED  AVAIL  REFER  MOUNTPOINT
> > mc-3015-202-pool        2.16T  25.9T   192K  /mc-3015-202-pool
> > mc-3015-202-pool/osd-4   619G  5.40T   619G  /var/lib/ceph/osd/ceph-4
> > mc-3015-202-pool/osd-5   555G  5.46T   555G  /var/lib/ceph/osd/ceph-5
> > mc-3015-202-pool/osd-6   632G  5.38T   632G  /var/lib/ceph/osd/ceph-6
> > mc-3015-202-pool/osd-7   405G  5.60T   405G  /var/lib/ceph/osd/ceph-7
> > root at mc-3015-202:~# zpool status
> >   pool: mc-3015-202-pool
> >  state: ONLINE
> >   scan: scrub repaired 0B in 3h44m with 0 errors on Sun Dec 10 04:08:18 2017
> > config:
> >
> >         NAME                                                               STATE     READ WRITE CKSUM
> >         mc-3015-202-pool                                                   ONLINE       0     0     0
> >           raidz2-0                                                         ONLINE       0     0     0
> >             pci-0000:04:00.0-sas-exp0x500065b36789abff-phy2-lun-0-encrypt  ONLINE       0     0     0
> >             pci-0000:04:00.0-sas-exp0x500065b36789abff-phy3-lun-0-encrypt  ONLINE       0     0     0
> >             pci-0000:04:00.0-sas-exp0x500065b36789abff-phy4-lun-0-encrypt  ONLINE       0     0     0
> >             pci-0000:04:00.0-sas-exp0x500065b36789abff-phy5-lun-0-encrypt  ONLINE       0     0     0
> >             pci-0000:04:00.0-sas-exp0x500065b36789abff-phy6-lun-0-encrypt  ONLINE       0     0     0
> >             pci-0000:04:00.0-sas-exp0x500065b36789abff-phy7-lun-0-encrypt  ONLINE       0     0     0
> >           raidz2-1                                                         ONLINE       0     0     0
> >             pci-0000:04:00.0-sas-exp0x500065b36789abff-phy8-lun-0-encrypt  ONLINE       0     0     0
> >             pci-0000:04:00.0-sas-exp0x500065b36789abff-phy9-lun-0-encrypt  ONLINE       0     0     0
> >             pci-0000:04:00.0-sas-exp0x500065b36789abff-phy10-lun-0-encrypt ONLINE       0     0     0
> >             pci-0000:04:00.0-sas-exp0x500065b36789abff-phy11-lun-0-encrypt ONLINE       0     0     0
> >             pci-0000:04:00.0-sas-exp0x500065b36789abff-phy16-lun-0-encrypt ONLINE       0     0     0
> >             pci-0000:04:00.0-sas-exp0x500065b36789abff-phy17-lun-0-encrypt ONLINE       0     0     0
> >
> > errors: No known data errors
> > root at mc-3015-202:~#
> >
> >
> > root at mc-3015-202:~# df -h
> > Filesystem      Size  Used Avail Use% Mounted on
> > udev             32G     0   32G   0% /dev
> > tmpfs           6.3G  9.1M  6.3G   1% /run
> > /dev/md0        235G  4.6G  218G   3% /
> > tmpfs            32G   12K   32G   1% /dev/shm
> > tmpfs           5.0M     0  5.0M   0% /run/lock
> > tmpfs            32G     0   32G   0% /sys/fs/cgroup
> > root at mc-3015-202:~#
> >
> > root at mc-3015-202:~# mount | grep zfs
> > root at mc-3015-202:~#
> > root at mc-3015-202:~# zpool import
> > no pools available to import
> >
> > root at mc-3015-202:~# cat
> > /etc/apt/sources.list.d/jonathonf-ubuntu-zfs-xenial.list
> > deb http://ppa.launchpad.net/jonathonf/zfs/ubuntu xenial main
> > # deb-src http://ppa.launchpad.net/jonathonf/zfs/ubuntu xenial main
> >
> > The only entry in dmesg about ZFS is:
> > [   26.345795] ZFS: Loaded module v0.7.4-0york1~16.04, ZFS pool
> > version 5000, ZFS filesystem version 5
> >
> > /var/log/syslog:
> > Dec 14 10:40:16 mc-3015-202 zfs[1140]: The ZFS modules are not loaded.
> > Dec 14 10:40:16 mc-3015-202 zfs[1140]: Try running '/sbin/modprobe
> > zfs' as root to load them.
> > Dec 14 10:40:16 mc-3015-202 systemd[1]: zfs-mount.service: Main
> > process exited, code=exited, status=1/FAILURE
> > Dec 14 10:40:16 mc-3015-202 systemd[1]: Failed to start Mount ZFS
> > filesystems.
> > Dec 14 10:40:16 mc-3015-202 systemd[1]: zfs-mount.service: Unit
> > entered failed state.
> > Dec 14 10:40:16 mc-3015-202 systemd[1]: zfs-mount.service: Failed with
> > result 'exit-code'.
> >
> > But it is loaded:
> > root at mc-3015-202:~# lsmod | grep zfs
> > zfs                  3518464  3
> > zunicode              331776  1 zfs
> > icp                   258048  1 zfs
> > zcommon                73728  1 zfs
> > znvpair                90112  2 zfs,zcommon
> > spl                   106496  4 icp,zfs,zcommon,znvpair
> > zavl                   16384  1 zfs
> >
> >
> > Rebooting to kernel 4.4.0-101-generic did not help.  Should I attempt
> > to roll back to ZFS 0.7.3? It doesn't seem to be in the PPA anymore.
> > By the way, we are running from PPA because 0.7 is needed in order for
> > Ceph to work properly on top of ZFS.  The storage devices in this case
> > are dmcrypt devices in /dev/mapper, thus the odd 'zpool status'
> > output, but are 6TB SAS HDDs underneath.
> >
> > Thank you for your time.
> >
> > Nathan Fish
> > _______________________________________________
> > zfs-discuss mailing list
> > zfs-discuss at list.zfsonlinux.org
> > http://list.zfsonlinux.org/cgi-bin/mailman/listinfo/zfs-discuss
> >
> >
>