[zfs-discuss] Re: Pool got partially duplicated - how to remove dup ?

DC halfwalker at gmail.com
Wed May 9 17:32:05 EDT 2012


Yup, copying the data off the 250G is no problem - I'm doing that now
to another spare drive.  I just want to be sure that any zfs signature
can be removed.  When I installed it, I blew away existing partitions
(the drive had been used for other things since the zfs tests),
created a single 0x83 partition and built an ext4 filesystem on it.

Anyone know where that zfs signature/info is written, and how I can wipe
it ?
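
From what I've read, ZFS keeps four 256K vdev labels per device - two in
the first 512K and two in the last 512K - and "zpool import" scans those
labels on the disks themselves, not zpool.cache.  Since I repartitioned
the front of the drive, I'm guessing the stale pair at the end is what
survived.  Does something along these lines look right ?  (/dev/sdX is
just a stand-in for the 250G scratch drive, and I'd only run the
destructive bits after the copy to the spare finishes, since they flatten
the partition table as well.)

zdb -l /dev/sdX          # dump any surviving labels (read-only check)
zdb -l /dev/sdX1         # in case the old test pool lived on the partition

# if this build of zfsonlinux has it, labelclear is the tidy way
zpool labelclear -f /dev/sdX

# otherwise, zero the label areas by hand: first and last 512K of the device
SECTORS=$(blockdev --getsz /dev/sdX)      # size in 512-byte sectors
dd if=/dev/zero of=/dev/sdX bs=512 count=1024
dd if=/dev/zero of=/dev/sdX bs=512 count=1024 seek=$((SECTORS - 1024))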

On May 9, 4:53 pm, Christ Schlacta <aarc... at aarcane.org> wrote:
> Copy the data (maybe tar, but not dd) to stuff/backup_of_data, then shred
> the drive (or just do a regular wipe), and restore from backup.
>
> If you back up with dd, you'll back up the problem as well.
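>
> Roughly something like this (the paths are just placeholders for wherever
> the 250G is mounted and wherever the backup should land) - a file-level
> copy carries the files but not the stale on-disk label, which is the
> point of using tar instead of dd:
>
> mkdir -p /backup/scratch_copy
> tar -C /mnt/scratch -cf - . | tar -C /backup/scratch_copy -xpf -
> # ... wipe/repartition/mkfs the 250G, then restore:
> tar -C /backup/scratch_copy -cf - . | tar -C /mnt/scratch -xpf -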
> On May 9, 2012 1:39 PM, "DC" <halfwal... at gmail.com> wrote:
>
> > Hah - that's brilliant.  OK, found the problem but I'm not sure what
> > to do about it.
>
> > A while back I added an extra 250G drive just for scratch space.  It's
> > not part of any array, but the key factor is that IT USED TO BE, back
> > when I was testing zfs, both zfs-fuse and zfsonlinux.  And I used the
> > same "stuff" pool name ...  So clearly there is some information
> > written on the drive denoting what pool it belongs to etc.
>
> > I disconnected all drives except the boot drive.  No ghost pools.  I
> > connected the scratch 250G, and lo, we see the ghost pool, which thinks
> > it's part of "stuff" consisting of sdb, sdc, sdd.  Nothing I do will
> > get rid of it.  No amount of removing /etc/zfs/zpool.cache, rmmod/
> > modprobe etc. has any effect.  "zpool import stuff" claims it's
> > completely broken and should be destroyed and rebuilt from backup.
> > "zpool destroy stuff" complains there is no such pool.
>
> > I do have data on the 250G.  Is there a way to clear off any zfs
> > signature from it without wiping it ?  If I have to I can copy the
> > data off ...
>
> > D.
>
> > On May 9, 1:29 pm, Christ Schlacta <aarc... at aarcane.org> wrote:
> > > You can always physically disconnect the disks
> > > On May 9, 2012 10:10 AM, "DC" <halfwal... at gmail.com> wrote:
>
> > > > I don't know - somehow I don't think so.  How can I find out?
>
> > > > With zfs loaded and the pool imported, I see /etc/zfs/zpool.cache.
> > > > When I export the pool, /etc/zfs/zpool.cache is no longer there.
>
> > > > I've gone through a complete cleanout ...
>
> > > > zpool export stuff
> > > > rmmod zfs
> > > > rm -f /etc/zfs/zpool.cache
> > > > modprobe zfs
> > > > zpool import -f -d /dev/disk/by-id 14829601920544515899
>
> > > > but then zpool import still shows the ghost pool.
>
> > > >  pool: stuff
> > > >    id: 10724103071123258924
> > > >  state: UNAVAIL
> > > > status: One or more devices contains corrupted data.
> > > > action: The pool cannot be imported due to damaged devices or data.
> > > >   see: http://zfsonlinux.org/msg/ZFS-8000-5E
> > > > config:
> > > >        stuff       UNAVAIL  insufficient replicas
> > > >          raidz1-0  UNAVAIL  insufficient replicas
> > > >            sdh     UNAVAIL
> > > >            sdc     FAULTED  corrupted data
> > > >            sdd     FAULTED  corrupted data
>
> > > > Can anyone comment on the safety of the process I outlined above ?
>
> > > > # rename good pool
> > > > zpool export stuff
> > > > zpool import -f -d /dev/disk/by-id 14829601920544515899 newstuff
>
> > > > # destroy ghost pool hopefully without whacking good newstuff pool
> > > > zpool destroy stuff
>
> > > > # rename good pool back
> > > > zpool export newstuff
> > > > zpool import -f -d /dev/disk/by-id 14829601920544515899 stuff
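> >
> > > > As a sanity check in between (just my thinking - the ids are the ones
> > > > shown above), "zpool import -d /dev/disk/by-id" with no pool name only
> > > > scans and lists, so it should be safe to run after each step to
> > > > confirm which pool is which:
> >
> > > > zpool import -d /dev/disk/by-id
> > > >     # after the export: good pool (id 14829601920544515899) and the
> > > >     # ghost "stuff" (id 10724103071123258924) should both be listed
> > > >     # after the rename to newstuff: only the ghost "stuff" should be
> > > >     # left, so a destroy aimed at "stuff" can't touch the good pool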
>
> > > > On May 8, 8:52 pm, Christ Schlacta <aarc... at aarcane.org> wrote:
> > > > > Any chance it's using a zpool.cache from the initrd ?
>
> > > > > On 5/8/2012 17:11, Igor Hjelmstrom Vinhas Ribeiro wrote:
>
> > > > > > Based on this, I would try to export all zfs filesystems, remove
> > > > > > the zfs module, rm /etc/zfs/zpool.cache, reboot, and try to import
> > > > > > again.  zpool.cache AFAIK is the only place where any such
> > > > > > information is stored.
>
> > > > > > On Tue, May 8, 2012 at 4:16 PM, DC <halfwal... at gmail.com> wrote:
>
> > > > > >     Looks like it's only in /etc/zfs
>
> > > > > >     root at bondi:~# l /etc/zfs
> > > > > >     total 12
> > > > > >     drwxr-xr-x   2 root root 4096 2012-05-08 13:57 .
> > > > > >     drwxr-xr-x 103 root root 4096 2012-05-06 11:47 ..
> > > > > >     -rw-r--r--   1 root root  183 2011-10-11 17:45 zdev.conf
>
> > > > > >     zpool import -f -d /dev/disk/by-id 14829601920544515899
>
> > > > > >     root at bondi:~# l /etc/zfs
> > > > > >     total 16
> > > > > >     drwxr-xr-x   2 root root 4096 2012-05-08 15:14 .
> > > > > >     drwxr-xr-x 103 root root 4096 2012-05-06 11:47 ..
> > > > > >     -rw-r--r--   1 root root  183 2011-10-11 17:45 zdev.conf
> > > > > >     -rw-r--r--   1 root root 2772 2012-05-08 15:14 zpool.cache
>
> > > > > >     Can't find it anywhere else - nothing named *.cache that is.
>
> > > > > >     What happens when a pool of disks is introduced to a completely
> > > > > >     new, clean system ?  ZFS has to read the details from the drives
> > > > > >     themselves, right ?  Could there be corrupted data there - could
> > > > > >     they think they're part of two pools ?


