[zfs-discuss] Import pool with missing devices

turbo at bayour.com
Thu Oct 20 05:41:36 EDT 2011


 [This is more of a general ZFS question, but since I'm running it on
 Linux...]


 I'm learning how to administer ZFS, and trying out a couple of
 use-cases before I commit my data to ZFS for real...

 I'm running Debian GNU/Linux lenny (kernel 2.6.35.6) - which is
 almost identical to the physical host I'll be running ZFS on later -
 in a VirtualBox session.


 I've created and added 17 * 1.5TB disks (sparse files :) in this
 vmachine. I created a zpool (named 'share') with 5 * RAIDZ1 vdevs,
 3 disks in each.
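
 For reference, I created the pool with something like this (the exact
 device names below are from memory, so treat them as approximate):

 ----- s n i p -----
 (device names approximate / from memory)
 debianzfs:~# zpool create share \
                raidz1 sdb sdc sdd \
                raidz1 sde sdf sdg \
                raidz1 sdh sdi sdj \
                raidz1 sdk sdl sdm \
                raidz1 sdn sdo sdp
 ----- s n i p -----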

 To test disk failures, I exported the pool, shut down the vmachine,
 removed two disks (one in raidz1-0 and one in raidz1-4, corresponding
 to 'physical' device numbers 2 and 13; see below) and then started
 the vmachine again.
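
 In commands, the test went roughly like this (the disk removal was
 done in the VirtualBox GUI):

 ----- s n i p -----
 debianzfs:~# zpool export share
 debianzfs:~# shutdown -h now
 (detach two of the virtual disks, then boot the vmachine again)
 debianzfs:~# zpool import
 ----- s n i p -----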


 But ZFS refuses to import the pool for me!

 ----- s n i p -----
 debianzfs:~# zpool import
   pool: share
     id: 8447518891555881437
  state: UNAVAIL
 status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
    see: http://www.sun.com/msg/ZFS-8000-5E
 config:

         share       UNAVAIL  insufficient replicas
           raidz1-0  UNAVAIL  insufficient replicas
             sdb     FAULTED  corrupted data
             sdb     UNAVAIL
             sdc     ONLINE
           raidz1-1  ONLINE
             sdd     ONLINE
             sde     ONLINE
             sdf     ONLINE
           raidz1-2  ONLINE
             sdg     ONLINE
             sdh     ONLINE
             sdi     ONLINE
           raidz1-3  ONLINE
             sdj     ONLINE
             sdk     ONLINE
             sdl     ONLINE
           raidz1-4  UNAVAIL  insufficient replicas
             sdn     FAULTED  corrupted data
             sdm     ONLINE
             sdn     UNAVAIL
 ----- s n i p -----


 Obvious, really, since I removed 'sdb' and 'sdn' (those device names
 still exist, but they now point at the disks that were really 'sdc'
 and 'sdo' before).

 Ok, tried using the by-id path:

 ----- s n i p -----
 debianzfs:~# zpool import -d /dev/disk/by-id
   pool: share
     id: 8447518891555881437
  state: UNAVAIL
 status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
    see: http://www.sun.com/msg/ZFS-8000-5E
 config:

         share                                      UNAVAIL  insufficient replicas
           raidz1-0                                 UNAVAIL  insufficient replicas
             sdb                                    FAULTED  corrupted data
             ata-VBOX_HARDDISK_VBcfd20cfe-1cf5f680  UNAVAIL
             ata-VBOX_HARDDISK_VB0a0290c1-595c3b0b  ONLINE
           raidz1-1                                 ONLINE
             ata-VBOX_HARDDISK_VB89d5e3e0-ff3c1d97  ONLINE
             ata-VBOX_HARDDISK_VB4eab7181-d815b423  ONLINE
             ata-VBOX_HARDDISK_VB350418a7-868fdd49  ONLINE
           raidz1-2                                 ONLINE
             ata-VBOX_HARDDISK_VB65a5d88b-d8a5c60a  ONLINE
             ata-VBOX_HARDDISK_VB06f5b515-3d75e75c  ONLINE
             ata-VBOX_HARDDISK_VB676d72e4-84f322b0  ONLINE
           raidz1-3                                 ONLINE
             ata-VBOX_HARDDISK_VB05eeb659-6c29050f  ONLINE
             ata-VBOX_HARDDISK_VB7fe84e0b-72fb612e  ONLINE
             ata-VBOX_HARDDISK_VBee450898-e0375d19  ONLINE
           raidz1-4                                 UNAVAIL  insufficient replicas
             sdn                                    FAULTED  corrupted data
             ata-VBOX_HARDDISK_VB2316600f-6cf03cb3  ONLINE
             ata-VBOX_HARDDISK_VB98798bc1-8f7ae7a3  UNAVAIL
 ----- s n i p -----

 Ok, better! It recognizes that 'sdb' and 'sdn' are removed... BUT,
 the disks it claims are UNAVAIL really aren't! It's pointing at the
 wrong ones (wrong order?):


 ----- s n i p -----
 debianzfs:~# ll /dev/disk/by-id/{ata-VBOX_HARDDISK_VBcfd20cfe-1cf5f680,ata-VBOX_HARDDISK_VB98798bc1-8f7ae7a3}
 lrwxrwxrwx 1 root root 9 2011-10-19 22:11 /dev/disk/by-id/ata-VBOX_HARDDISK_VB98798bc1-8f7ae7a3 -> ../../sdn
 lrwxrwxrwx 1 root root 9 2011-10-19 22:11 /dev/disk/by-id/ata-VBOX_HARDDISK_VBcfd20cfe-1cf5f680 -> ../../sdb
 ----- s n i p -----



 So the big question now is: how do I import this pool? The disks
 that 'failed' shouldn't matter - there ARE enough disks left to keep
 the data intact. (It HAS happened to me a couple of times that disks
 were so broken/crashed that the OS didn't find them at all, and it
 might take me a while to replace them.) Granted, I'm screwed if any
 more disks fail in those two vdevs, but still...
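
 I assume the obvious things to try would be along these lines -
 forcing the import, or importing by the pool id - but I'm guessing
 here, and this is exactly what I'm unsure about:

 ----- s n i p -----
 (pool id taken from the 'zpool import' output above)
 debianzfs:~# zpool import -f share
 debianzfs:~# zpool import -d /dev/disk/by-id -f 8447518891555881437
 ----- s n i p -----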

 I guess I _could_ 'replace' them (i.e., add two new virtual disks to
 the vmachine and then do a zpool replace), but the whole purpose of
 this experiment/learning is: what if I can't (for whatever reason -
 time and money come to mind :).
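
 That is, something like this, assuming the new virtual disks show up
 as 'sdq' and 'sdr' (made-up names for the two new disks):

 ----- s n i p -----
 (sdq/sdr are hypothetical)
 debianzfs:~# zpool replace share sdb sdq
 debianzfs:~# zpool replace share sdn sdr
 ----- s n i p -----

 (Though as far as I understand, 'zpool replace' needs the pool to be
 imported first, so it doesn't solve the import problem itself.)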



 PS. The first time I tried this, I did it without the export/import,
     but that went even worse - preferably I'd like to do all of this
     WITHOUT an export/import procedure...




