[zfs-discuss] Re: Pool got partially duplicated - how to remove dup ?

Igor Hjelmstrom Vinhas Ribeiro igorhvr at iasylum.net
Tue May 8 20:11:01 EDT 2012


Based on this, I would try to export all ZFS filesystems, remove the zfs
module, rm /etc/zfs/zpool.cache, reboot, and try the import again.
AFAIK zpool.cache is the only place where any such information is stored.
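In shell form, the sequence I have in mind would look roughly like this (pool name and numeric id taken from earlier in this thread; run as root, and treat it as a sketch rather than a tested recipe):

```shell
# Export the pool so nothing holds the zfs module busy
zpool export stuff

# Unload the module and delete the stale cache file
rmmod zfs
rm -f /etc/zfs/zpool.cache

# Reboot, or just reload the module, then rescan using stable names;
# the numeric id selects the good pool explicitly
modprobe zfs
zpool import -d /dev/disk/by-id 14829601920544515899
```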

On Tue, May 8, 2012 at 4:16 PM, DC <halfwalker at gmail.com> wrote:

> Looks like it's only in /etc/zfs
>
> root at bondi:~# l /etc/zfs
> total 12
> drwxr-xr-x   2 root root 4096 2012-05-08 13:57 .
> drwxr-xr-x 103 root root 4096 2012-05-06 11:47 ..
> -rw-r--r--   1 root root  183 2011-10-11 17:45 zdev.conf
>
> zpool import -f -d /dev/disk/by-id 14829601920544515899
>
> root at bondi:~# l /etc/zfs
> total 16
> drwxr-xr-x   2 root root 4096 2012-05-08 15:14 .
> drwxr-xr-x 103 root root 4096 2012-05-06 11:47 ..
> -rw-r--r--   1 root root  183 2011-10-11 17:45 zdev.conf
> -rw-r--r--   1 root root 2772 2012-05-08 15:14 zpool.cache
>
> Can't find it anywhere else - nothing named *.cache that is.
>
> What happens when a pool of disks is introduced to a completely new,
> clean system?  ZFS has to read the details from the drives themselves,
> right?  Could there be corrupted data there - could the disks think
> they're part of two pools?
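One way to check that directly is to dump the on-disk labels with zdb: each member disk stores redundant copies of the pool configuration, and a disk claimed by two pools would show the stale pool name/GUID there. The device names below are only this thread's examples:

```shell
# Dump the ZFS labels stored on a device: pool name, pool guid,
# and the vdev tree that device believes it belongs to
zdb -l /dev/sdh

# Works with by-id paths too
zdb -l /dev/disk/by-id/ata-WDC_WD20EARS-00J2GB0_WD-WCAYY0289805
```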
>
> On May 8, 2:25 pm, Igor Hjelmstrom Vinhas Ribeiro
> <igor... at iasylum.net> wrote:
> > Maybe the file name is different?
> >
> > As a last resort, try to export again and then do a
> >
> > strace -e trace=file zpool import
> >
> > (you might need to apt-get install strace or something similar prior to
> > that)
> >
> > The strace output will show every filesystem operation; with luck you
> > should be able to find whatever zpool is using as a cache (where your
> > ghost pool lies). If you are not so lucky, it might be that the cache
> > is accessed from somewhere else (if you were running zfs-fuse you
> > would need to strace the zfs-fuse process instead of zpool)...
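To make that output easier to mine, filtering the file syscalls for anything cache-like usually suffices; a sketch using standard Linux strace flags:

```shell
# -f follows forked helpers; -e trace=file logs only file syscalls.
# strace writes to stderr, so merge it before grepping.
strace -f -e trace=file zpool import 2>&1 | grep -i cache
```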
> >
> > igorhvr
> >
> >
> >
> >
> >
> >
> >
> > On Tue, May 8, 2012 at 3:08 PM, DC <halfwal... at gmail.com> wrote:
> > > No luck.  zpool.cache doesn't exist anywhere else ...  This time I
> > > rmmod'd zfs and reloaded it, and I still see the ghost.
> >
> > > root at bondi:/# find / -name zpool.cache
> > > root at bondi:/# rmmod zfs
> > > root at bondi:/# modprobe zfs
> > > root at bondi:/# zpool import
> > >  pool: stuff
> > >     id: 14829601920544515899
> > >  state: ONLINE
> > > action: The pool can be imported using its name or numeric identifier.
> > > config:
> >
> > >        stuff       ONLINE
> > >          raidz1-0  ONLINE
> > >            sda     ONLINE
> > >            sdc     ONLINE
> > >            sdb     ONLINE
> > >            sdf     ONLINE
> > >            sde     ONLINE
> > >            sdd     ONLINE
> >
> > >  pool: stuff
> > >    id: 10724103071123258924
> > >  state: UNAVAIL
> > > status: One or more devices contains corrupted data.
> > > action: The pool cannot be imported due to damaged devices or data.
> > >   see: http://zfsonlinux.org/msg/ZFS-8000-5E
> > > config:
> >
> > >        stuff       UNAVAIL  insufficient replicas
> > >          raidz1-0  UNAVAIL  insufficient replicas
> > >            sdh     UNAVAIL
> > >            sdc     FAULTED  corrupted data
> > >            sdd     FAULTED  corrupted data
> >
> > > Hrm - as you see in the post above, the stuff pool was loaded using
> > > /dev/disk/by-id names.  I exported it, did the search for zpool.cache,
> > > rmmod/modprobe zfs, and zpool import still shows /dev/sdX names.
> >
> > > Next test (if someone tells me it's safe :) ) is to import the good
> > > pool definition under a temp name, then export it.  A zpool import
> > > should then show both the ghost stuff pool and the newly named pool?
> > > Then zpool destroy stuff, although I worry that will hit the 3 drives
> > > in some way.  Finally, import the temp name back to stuff.
> >
> > > Scary.
> >
> > > D.
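A sketch of that rename-and-destroy plan, for the record. One caveat worth flagging: zpool destroy only works on a pool that can be imported, so if the ghost pool stays UNAVAIL the usual fallback is clearing the stale labels on just its member disks (`zpool labelclear` in newer ZoL builds; it is destructive for those labels, so double-check device names first):

```shell
# 1. Import the healthy pool by numeric id under a temporary name
zpool import -d /dev/disk/by-id 14829601920544515899 stufftmp
zpool export stufftmp

# 2. The ghost pool cannot be destroyed while it is unimportable;
#    instead, wipe the stale label on each ghost member device
zpool labelclear -f /dev/sdh

# 3. Re-import the good pool under its real name
zpool import -d /dev/disk/by-id stufftmp stuff
```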
> >
> > > On May 8, 12:26 pm, Igor Hjelmstrom Vinhas Ribeiro
> > > <igor... at iasylum.net> wrote:
> > > > Your zpool.cache is probably stored somewhere else, but it surely
> > > > does exist somewhere - this is where your ghost pool information
> > > > currently lies.
> >
> > > > Search the disk for this file and erase it following the procedure
> > > > described previously, and you will get rid of the ghost pool.
> >
> > > > On Tue, May 8, 2012 at 12:24 PM, DC <halfwal... at gmail.com> wrote:
> > > > > OK, that works.  Exported the stuff pool, zpool.cache didn't exist
> > > > > when that was done.  The pool is now using the correct by-id names.
> > > > > But I still have the ghost pool there, same name.
> >
> > > > > root at bondi:~# zpool status
> > > > >  pool: stuff
> > > > >  state: ONLINE
> > > > >  scan: scrub repaired 0 in 45h50m with 0 errors on Fri May  4 19:53:30 2012
> > > > > config:
> >
> > > > >        NAME                                            STATE     READ WRITE CKSUM
> > > > >        stuff                                           ONLINE       0     0     0
> > > > >          raidz1-0                                      ONLINE       0     0     0
> > > > >            ata-WDC_WD20EARS-00J2GB0_WD-WCAYY0289805    ONLINE       0     0     0
> > > > >            ata-WDC_WD20EARS-00J2GB0_WD-WCAYY0289676    ONLINE       0     0     0
> > > > >            ata-WDC_WD20EARS-00MVWB0_WD-WMAZA0101673    ONLINE       0     0     0
> > > > >            ata-Hitachi_HDS722020ALA330_JK1131YAG9W9AV  ONLINE       0     0     0
> > > > >            ata-Hitachi_HDS722020ALA330_JK1131YAG9PLTV  ONLINE       0     0     0
> > > > >            ata-WDC_WD20EARS-00MVWB0_WD-WCAZA1256093    ONLINE       0     0     0
> >
> > > > > errors: No known data errors
> > > > > root at bondi:~# zpool import
> > > > >   pool: stuff
> > > > >    id: 10724103071123258924
> > > > >  state: UNAVAIL
> > > > > status: One or more devices contains corrupted data.
> > > > > action: The pool cannot be imported due to damaged devices or data.
> > > > >   see: http://zfsonlinux.org/msg/ZFS-8000-5E
> > > > > config:
> >
> > > > >        stuff       UNAVAIL  insufficient replicas
> > > > >          raidz1-0  UNAVAIL  insufficient replicas
> > > > >            sdh     UNAVAIL
> > > > >            sdc     UNAVAIL
> > > > >            sdd     UNAVAIL
> >
> > > > > How do I get rid of that ghost stuff pool?  I'm guessing I should
> > > > > reimport it with a different name as Fajar said, then zpool destroy
> > > > > stuff, but I worry about that whacking the 3 associated devices ...
> >
> > > > > Thanks for the quick help
> >
> > > > > D.
> >
> > > > > On May 7, 10:39 pm, Darik Horn <dajh... at vanadac.com> wrote:
> > > > > > On Mon, May 7, 2012 at 9:26 PM, DC <halfwal... at gmail.com> wrote:
> >
> > > > > > > 1) How can I delete/remove the false pool ?
> > > > > > > 2) Is it possible to have it use the /dev/disk/by-id names again ?
> >
> > > > > > Do this:
> >
> > > > > > 1. Export all pools.
> > > > > > 2. # rm /etc/zfs/zpool.cache
> > > > > > 3. # zpool import -d /dev/disk/by-id 14829601920544515899
> >
> > > > > > The `-d` switch is nearly mandatory for ZoL.  Always use the
> > > > > > `-d` switch.
> >
> > > > > > If bare `/dev/sd*` nodes are visible in the vdev list on a ZoL
> > > > > > system, then reimport the pool using the `-d` switch.
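A quick check for that condition might look like this (a sketch; the pattern simply matches bare kernel device names in the vdev list):

```shell
# Any output here means the pool was imported with bare /dev/sdX nodes
# and should be exported and re-imported with -d /dev/disk/by-id
zpool status | grep -E '^[[:space:]]+sd[a-z]+[[:space:]]'
```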
> >
> > > > > > --
> > > > > > Darik Horn <dajh... at vanadac.com>
>



-- 
igorhvr
cel: +55 19 8801 1458
home: +55 19 2121 8558
email: igorhvr at iasylum.net
skype: igor.hvr
gtalk: igor.ribeiro at gmail.com

