[zfs-discuss] I allowed Ubuntu 14.04 to install some updates and now for some reason my zfs pool won't mount.

Joe Goldthwaite joe at goldthwaites.com
Wed Dec 17 10:01:12 EST 2014


Hi Michael,

Thanks for the help.  I'm not really comfortable with the way *nix systems
mount drives.  It would make more sense to me if mounting a device at a
specific point just made it show up there and go away when unmounted.
Instead you've got to create an empty folder first and then mount the
device over it, which means the directory exists whether anything is
mounted or not.  I really don't like that.  I guess I'm old and crotchety.
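
For example, the dance that bugs me goes something like this (device
name made up):

# mkdir /mnt/usbdrive             # the empty directory has to exist first
# mount /dev/sdk1 /mnt/usbdrive   # now the device's contents appear there
# umount /mnt/usbdrive            # the empty directory stays behind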

Anyway, in this case I just mounted my pool, mediapool, at the root.  I
then created the two file systems. I didn't explicitly tell them to be
mounted.  I assumed that mounting the pool would make any file systems in
the pool visible and available.
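
For reference, what I ran was something along these lines (from memory,
so the exact device list may be off):

# zpool create mediapool raidz2 sdf sda sdb sdc sdd sdg sdh sdi sdj
# zfs create mediapool/backups
# zfs create mediapool/media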

>does the system have ECC RAM?

Yes.  That was one of the reasons for upgrading my old system.

>Are the "used" and "refer" figures for the file systems in the ballpark,
>for a start?

Yes.  I've got nine 3TB drives in a raid-Z2 layout, which I thought would
give me 3*7 or 21 TB.  It says I've got 18TB.  I'm guessing that the
difference between marketing TB (10^12 bytes) and the binary TiB that
zfs list actually reports accounts for the missing 2.67 TB.
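
Working through the arithmetic as I understand it:

  9 drives - 2 parity (raid-Z2) = 7 data drives
  7 * 3 TB = 21 * 10^12 bytes
  21 * 10^12 bytes / 2^40 bytes per TiB = roughly 19.1 TiB

zfs list shows 11.0T used plus 7.33T available, or 18.33 TiB total, so
the units explain most of the gap; I assume the rest goes to metadata
and internal reservations.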

>And all three file systems are set to mount at /mediapool, which isn't
>going to work very well.

Like I mentioned, I didn't explicitly tell the file systems where to mount,
I just created them and they showed up under the mediapool which is what I
would expect them to do.  Why would this not work?  More importantly, why
did it work before I applied the updates?

>I would like to see the output of "zfs get mounted,mountpoint mediapool
>-r".

The server is at home but I'll run the command and post the results this
evening.

Thanks for your other suggestions.  I had run into the issue of the device
names changing before.  On my old system I had done the import by-id.  That
system had a combination of motherboard ports, PCI adapters and external
SATA ports, and it seemed like it would arbitrarily assign different device
letters even when nothing had changed.  On my new system, all the ports are
on the main board and there's only a single expansion slot, so I'm not as
worried about the device names.  I do intend to switch over at some point,
but I've been trying to set up a mirror with a FreeNAS box and I didn't
want to do the export/import until everything was backed up.

I'll try your other suggestions tonight and let you know how it worked
tomorrow.

Thanks again!


On Wed, Dec 17, 2014 at 1:49 AM, Michael Kjörling <michael at kjorling.se>
wrote:
>
> First off, please do be careful with terminology. You "import" and
> "export" a pool, but "mount" and "unmount" file systems within that
> pool. Many times the two operations occur simultaneously, but there is
> no requirement that they do. The pool itself doesn't have any mount
> point associated with it, but the root file system on the pool (which
> confusingly enough has exactly the same name as the pool it resides
> on) does have a mountpoint and can be mounted.
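>
> To make the distinction concrete, you can import a pool without
> mounting anything, then mount its file systems as a separate step:
>
> # zpool import -N mediapool   # -N imports the pool without mounting
> # zfs mount -a                # now mount all mountable file systems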
>
> And, the obligatory question in case of ZFS weirdness: does the system
> have ECC RAM? (I don't think that's the problem in your case, but it's
> always good to cover all bases, and I didn't see you mentioning ECC or
> non-ECC in your question.)
>
>
> On 16 Dec 2014 23:10 -0700, from joe at goldthwaites.com (Joe Goldthwaite):
> > I've tried rebooting multiple times but it doesn't seem like it's going
> to
> > start working. Anyone have any ideas of what I can try?
> >
> > root at mediaserver2:/# zfs list
> > NAME                USED  AVAIL  REFER  MOUNTPOINT
> > mediapool          11.0T  7.33T  11.0T  /mediapool
> > mediapool/backups   311K  7.33T   311K  /mediapool
> > mediapool/media     311K  7.33T   311K  /mediapool
>
> There is some major weirdness going on here. Are the "used" and
> "refer" figures for the file systems in the ballpark, for a start? And
> all three file systems are set to mount at /mediapool, which isn't
> going to work very well.
>
> I would like to see the output of "zfs get mounted,mountpoint
> mediapool -r". Without that, I'm going to make a somewhat educated
> guess as to what's going on.
>
> If "mediapool/backups" and "mediapool/media" indeed does have their
> mountpoint set to /mediapool explicitly (which it looks like from the
> output you have provided, and which the "zfs get" output would
> confirm), try clearing that and see if that clears up the problem:
>
> # zfs inherit mountpoint mediapool/backups mediapool/media
>
> The above should put the filesystem mountpoints at /mediapool/backups
> and /mediapool/media, respectively, by clearing any explicitly set
> mountpoint properties on the two and allowing the parent mount point
> setting to be inherited.
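>
> A quick check afterwards should show the SOURCE column for the two
> children as something other than "local":
>
> # zfs get -r mountpoint mediapool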
>
> Because of the mount point directory weirdness, you may need to do a
> "zfs umount -a" followed by "zfs mount -a" after the above (both as
> root). If this clears up the problem, the fix should persist normally
> across reboots. If it still doesn't quite work out, check that the
> underlying /mediapool directory on the root file system is empty.
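>
> Spelled out, that remount sequence would be:
>
> # zfs umount -a
> # zfs mount -a
> # ls /mediapool   # backups and media should appear here if all went well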
>
>
> > This is the zpool configuration;
> >
> > root at mediaserver2:/# zpool status
> >   pool: mediapool
> >  state: ONLINE
>
> So the pool itself is just fine. That's a good sign. Don't do anything
> hasty and we should be able to recover from this just fine with no
> data loss.
>
> >   scan: resilvered 18.0G in 0h5m with 0 errors on Fri Nov 14 22:07:00 2014
>
> If you haven't already, once this has been cleared up, I would suggest
> that you schedule regular scrubs of your pool to make sure it remains
> healthy. Consider the ZFS Best Practices Guide
> (http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Storage_Pools):
>
> "Run zpool scrub on a regular basis to identify data integrity
> problems. If you have consumer-quality drives, consider a weekly
> scrubbing schedule. If you have datacenter-quality drives, consider a
> monthly scrubbing schedule. You should also run a scrub prior to
> replacing devices or temporarily reducing a pool's redundancy to
> ensure that all devices are currently operational."
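>
> On Linux, a simple way to do that is a cron entry; a hypothetical
> weekly example as a crontab line (Sunday at 02:00; adjust the zpool
> path for your distribution):
>
> 0 2 * * 0 /sbin/zpool scrub mediapool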
>
> > config:
> >
> >         NAME        STATE     READ WRITE CKSUM
> >         mediapool   ONLINE       0     0     0
> >           raidz2-0  ONLINE       0     0     0
> >             sdf     ONLINE       0     0     0
> >             sda     ONLINE       0     0     0
> >             sdb     ONLINE       0     0     0
> >             sdc     ONLINE       0     0     0
> >             sdd     ONLINE       0     0     0
> >             sdg     ONLINE       0     0     0
> >             sdh     ONLINE       0     0     0
> >             sdi     ONLINE       0     0     0
> >             sdj     ONLINE       0     0     0
>
> Consider migrating your pool to persistent device identifiers. It
> should be enough to just "zpool export mediapool" followed by "zpool
> import -d /dev/disk/by-id mediapool" (or whichever /dev/disk/by-* you
> prefer). /dev/sd* names depend on detection order, which can change
> when disks are connected or disconnected while the system is running
> (think USB or eSATA mass storage devices).
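>
> In other words:
>
> # zpool export mediapool
> # zpool import -d /dev/disk/by-id mediapool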
>
>
> > errors: No known data errors
> > root at mediaserver2:/#
>
> --
> Michael Kjörling • https://michael.kjorling.se • michael at kjorling.se
> OpenPGP B501AC6429EF4514 https://michael.kjorling.se/public-keys/pgp
>                  “People who think they know everything really annoy
>                  those of us who know we don’t.” (Bjarne Stroustrup)
>
