[zfs-discuss] ZFS pool won't automount, but can destroy & import it OK

LosingMyZFS zfs at thekorn.net
Wed May 8 04:16:55 EDT 2013


New ZFS user here, so it's quite likely I'm doing something wrong, but for 
the life of me I can't figure out what.



Brand-new install of (k)Ubuntu 12.04.2 LTS.  The system boots off a ZFS root 
pool called root_pool, which despite its name lives on a single SSD.  (That 
pool was made by following the instructions verbatim at 
https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-to-a-Native-ZFS-Root-Filesystem . 
Good stuff!)

I'm not having any problems whatsoever with root_pool.  It's fine and dandy, 
system boots up every time, happy as a clam.  (yay!)


My problem is with my *other* ZFS pool.  It's a raidz1 pool made of 5X 4TB 
drives.   (/dev/sda through /dev/sde, though I created the pool specifying 
disks with the /dev/disk/by-id/scsiXXXXX nomenclature.  I'm using entire 
disks with this pool.)
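For completeness, the pool was created with something along these lines 
(reconstructed from memory, so the exact invocation and mountpoint option 
may not be word-for-word):

    zpool create -m /mythtv2 data_pool raidz1 \
        /dev/disk/by-id/scsi-SATA_ST4000DM000-1F2_Z3006G8C \
        /dev/disk/by-id/scsi-SATA_ST4000DM000-1F2_Z3006LQ8 \
        /dev/disk/by-id/scsi-SATA_ST4000DM000-1F2_Z3006LWV \
        /dev/disk/by-id/scsi-SATA_ST4000DM000-1F2_Z3006MDX \
        /dev/disk/by-id/scsi-SATA_ST4000DM000-1F2_Z3006R54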

Whenever I boot the machine, that second pool refuses to automount:


root at kubuntu:/home/vince# zpool status
  pool: data_pool
 state: UNAVAIL
status: One or more devices could not be used because the label is missing
        or invalid.  There are insufficient replicas for the pool to continue
        functioning.
action: Destroy and re-create the pool from a backup source.
   see: http://zfsonlinux.org/msg/ZFS-8000-5E
  scan: none requested
config:

        NAME                                    STATE     READ WRITE CKSUM
        data_pool                               UNAVAIL      0     0     0  insufficient replicas
          raidz1-0                              UNAVAIL      0     0     0  insufficient replicas
            scsi-SATA_ST4000DM000-1F2_Z3006LWV  FAULTED      0     0     0  corrupted data
            scsi-SATA_ST4000DM000-1F2_Z3006R54  FAULTED      0     0     0  corrupted data
            scsi-SATA_ST4000DM000-1F2_Z3006G8C  FAULTED      0     0     0  corrupted data
            scsi-SATA_ST4000DM000-1F2_Z3006LQ8  FAULTED      0     0     0  corrupted data
            scsi-SATA_ST4000DM000-1F2_Z3006MDX  FAULTED      0     0     0  corrupted data

  pool: root_pool
 state: ONLINE
  scan: scrub repaired 0 in 0h1m with 0 errors on Mon May  6 11:07:30 2013
config:

        NAME                                           STATE     READ WRITE CKSUM
        root_pool                                      ONLINE       0     0     0
          scsi-SATA_SanDisk_SDSSDP0130802400920-part3  ONLINE       0     0     0

errors: No known data errors

...but if I destroy data_pool then turn around and immediately import it, 
it imports just fine!



root at kubuntu:/home/vince# zpool destroy data_pool
root at kubuntu:/home/vince# zpool import data_pool
root at kubuntu:/home/vince# zpool status
  pool: data_pool
 state: ONLINE
  scan: none requested
config:

        NAME                                    STATE     READ WRITE CKSUM
        data_pool                               ONLINE       0     0     0
          raidz1-0                              ONLINE       0     0     0
            scsi-SATA_ST4000DM000-1F2_Z3006G8C  ONLINE       0     0     0
            scsi-SATA_ST4000DM000-1F2_Z3006LQ8  ONLINE       0     0     0
            scsi-SATA_ST4000DM000-1F2_Z3006LWV  ONLINE       0     0     0
            scsi-SATA_ST4000DM000-1F2_Z3006MDX  ONLINE       0     0     0
            scsi-SATA_ST4000DM000-1F2_Z3006R54  ONLINE       0     0     0

errors: No known data errors

  pool: root_pool
 state: ONLINE
  scan: scrub repaired 0 in 0h1m with 0 errors on Mon May  6 11:07:30 2013
config:

        NAME                                           STATE     READ WRITE CKSUM
        root_pool                                      ONLINE       0     0     0
          scsi-SATA_SanDisk_SDSSDP0130802400920-part3  ONLINE       0     0     0

errors: No known data errors


And just to make sure, it really *is* mounting (cutting out the non-zfs 
related mounts for brevity):

root at kubuntu:/home/vince# mount
root_pool/ROOT/ubuntu-1 on / type zfs (rw,relatime,xattr)
root_pool on /root_pool type zfs (rw,noatime,xattr)
/dev/sdf1 on /boot/grub type ext3 (rw,relatime,errors=continue,user_xattr,acl,barrier=1,data=ordered)
root_pool/ROOT on /root_pool/ROOT type zfs (rw,noatime,xattr)
data_pool on /mythtv2 type zfs (rw,relatime,xattr)


But on next reboot we're back to non-auto-mounting...

root at kubuntu:/home/vince# zpool status
  pool: data_pool
 state: UNAVAIL

(rest snipped; it's the same as before.)

Though interestingly, the destroy/import gives a slightly different message 
sometimes:

root at kubuntu:/home/vince# zpool destroy data_pool
root at kubuntu:/home/vince# zpool import data_pool
cannot import 'data_pool': pool may be in use from other system
use '-f' to import anyway

root at kubuntu:/home/vince# zpool import data_pool -f
root at kubuntu:/home/vince# zpool status
  pool: data_pool
 state: ONLINE
  scan: none requested


I've tried scrubbing and then exporting data_pool, but that has no effect; 
it still will not automount, and a destroy/import cycle brings it back online 
after a reboot.
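
For reference, that step was nothing fancier than roughly:

    zpool scrub data_pool
    zpool export data_pool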

root at kubuntu:/home/vince# cat /mythtv2/testing.txt
test file to see if old data was retained.  if you're reading this now, it 
was!


I checked dmesg, and there is nothing even remotely interesting going on 
there...  worst error message is the nvidia driver complaining about not 
having VGA configured for the console or some such.  Everything else is 
"thumbs up, system A-OK!" type messages.


Note that data_pool doesn't contain any data I care about (I'm just testing 
right now), so I have no issues with destroying data_pool and re-creating 
it.  I've done this several times, even going so far as to blank the first 
~300MB of each disk in the pool via "dd if=/dev/zero of=/dev/sda" (..sdb 
...sdc ...etc.) after destroying data_pool, rebooting, then re-creating 
data_pool.  Nothing seems to help; I keep winding up back in this same spot 
with a non-automounting data_pool.
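
(To keep that zeroing to roughly the first 300MB rather than the whole 4TB 
disk, each dd run was bounded with bs/count; something like the following, 
though the exact bs/count values here are from memory:

    dd if=/dev/zero of=/dev/sda bs=1M count=300
    dd if=/dev/zero of=/dev/sdb bs=1M count=300
    ...and so on through /dev/sde.)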


What am I doing wrong here?  I feel like I'm missing something simple, but 
my google-fu hasn't turned up anything.  Thanks in advance!
 



