[zfs-discuss] omg - export/import failed, helpless

Durval Menezes durval.menezes at gmail.com
Fri Aug 9 06:11:05 EDT 2013


Hi Niels,

On Aug 9, 2013 6:09 AM, "Niels de Carpentier" <zfs at decarpentier.com> wrote:
> Have you tried importing with zpool import -X?
> This is an undocumented option that will try to roll back further than -F.

Nope, I hadn't heard of it until now. I'll try it as soon as I get to the
office and let you know.
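
For the record, here's roughly what I intend to run once I'm there (just a
sketch, using the pool name "pool01" from the zdb label below; -X is
undocumented, so its exact behavior may well differ between implementations):

    # dry run of the rewind first, to see if recovery even looks possible
    zpool import -f -F -n pool01
    # then the "extreme" rewind: -X lets -F discard more recent txgs
    zpool import -f -F -X pool01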

Do you recommend trying with another ZFS implementation first, or is ZoL
0.6.1 OK?

Cheers,
-- 
   Durval.

>
> Niels
>
> > Hello Folks,
> >
> > In brief, all my attempts to re-import the pool under every OS I've
> > tried (Sabayon 10, OpenIndiana, FreeBSD, SmartOS, and the original
> > Springdale EL6.4 with ZoL 0.6.1 where it was created) have failed.
> >
> > I've started playing with zdb, and "zdb -l /dev/sda1" shows what seems
> > to be very consistent data:
> >
> > --------------------------------------------
> > LABEL 0
> > --------------------------------------------
> >     version: 26
> >     name: 'pool01'
> >     state: 1
> >     txg: 13376
> >     pool_guid: 14948663032033287073
> >     hostid: 9291862
> >     hostname: 'openindiana'
> >     top_guid: 17443046819930691828
> >     guid: 17443046819930691828
> >     vdev_children: 1
> >     vdev_tree:
> >         type: 'disk'
> >         id: 0
> >         guid: 17443046819930691828
> >         path: '/dev/dsk/c5t1d0p0'
> >         devid: 'id1,sd@SATA_____VBOX_HARDDISK____VB264f2e74-79b531f5/q'
> >         phys_path: '/pci@0,0/pci8086,2829@1f,2/disk@1,0:q'
> >         whole_disk: 0
> >         metaslab_array: 31
> >         metaslab_shift: 32
> >         ashift: 12
> >         asize: 500102070272
> >         is_log: 0
> >         create_txg: 4
> >     features_for_read:
> >
> >
> > Indeed, the last time the pool was imported successfully I was running
> > the OpenIndiana LiveCD (which sets the hostname to "openindiana"); the
> > pool was indeed created with ashift=12 and version=26, its name
> > ("pool01") is correct too, the asize value makes sense for a 500GB
> > partition, whole_disk being 0 makes sense since it's a partition, and
> > the device path under OpenIndiana is also correct, etc. So, whatever
> > the problem is, it doesn't seem to be a corrupted vdev label.
> >
> > So, is there any way to find out what precisely is "corrupted"? For
> > example, by specifying some "-v"-like option to zpool import (I've
> > checked, and it does not accept that option, and the manpage doesn't
> > show anything similar).
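> >
> > (In case it suggests anything to anyone: here's the rough zdb
> > incantation I was planning to try next, just a sketch, and I'm not
> > sure how far zdb can get on a pool that refuses to import:
> >
> >     zdb -l /dev/sda1     # dump the vdev labels, as above
> >     zdb -e -C pool01     # read the pool config from the on-disk labels
> >     zdb -e -d pool01     # try to walk the datasets without importing
> >
> > where -e tells zdb to work from the on-disk labels rather than an
> > imported pool; a -p /dev option may be needed so it can find the
> > device.)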
> >
> > Cheers,
> > --
> >    Durval.
> >
> > On Thu, Aug 8, 2013 at 3:28 PM, Durval Menezes
> > <durval.menezes at gmail.com> wrote:
> >
> >>
> >>
> >> On Thu, Aug 8, 2013 at 3:01 PM, Durval Menezes
> >> <durval.menezes at gmail.com> wrote:
> >>
> >>> Hello Hajo,
> >>>
> >>> On Thu, Aug 8, 2013 at 2:53 PM, Hajo Möller <dasjoe at gmail.com> wrote:
> >>>
> >>>> On 08.08.2013 19:46, Durval Menezes wrote:
> >>>> > So, is my pool definitely lost? Should I just give up and recover
> >>>> from
> >>>> > backup?
> >>>>
> >>>> I resolved a similar situation by importing the pool in a recent
> >>>> SmartOS
> >>>> build and exporting it after a few minutes: http://smartos.org/
> >>>
> >>>
> >>> Thanks for the pointer. I'm downloading their latest CD image
> >>> (smartos-20130725T202435Z.iso) right now and will try it immediately.
> >>>
> >>
> >> No luck with SmartOS either: I booted their LiveCD but could not log
> >> in (the password in SINGLE_USER_ROOT_PASSWORD.txt is not accepted).
> >>
> >> Does anyone have any other ideas?
> >>
> >> Cheers,
> >> --
> >>    Durval.
> >>
> >>
> >>
> >>>
> >>> BTW, I just downloaded a FreeBSD 9.1 LiveCD called "zfsguru" and
> >>> tried importing there (same procedure: "zpool import" first, then
> >>> "zpool import -F", then "zpool import -F -f", then I created a dir
> >>> with a symlink to the device and tried "zpool import -d dir" and
> >>> finally "zpool import -F -d dir"), all to no avail...
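> >>>
> >>> (For completeness, roughly the sequence I ran there, typed from
> >>> memory; the FreeBSD device name below is only a placeholder:
> >>>
> >>>     zpool import
> >>>     zpool import -F pool01
> >>>     zpool import -F -f pool01
> >>>     mkdir /tmp/devdir && ln -s /dev/ada0p1 /tmp/devdir/
> >>>     zpool import -d /tmp/devdir pool01
> >>>     zpool import -F -d /tmp/devdir pool01
> >>>
> >>> None of them worked.)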
> >>>
> >>> Cheers,
> >>> --
> >>>    Durval.
> >>>
> >>>
> >>>
> >>>>
> >>>> --
> >>>> Cheers,
> >>>> Hajo Möller
> >>>>
> >>>
> >>>
> >>
> >
>
>