[zfs-discuss] omg - export/import failed, helpless
durval.menezes at gmail.com
Thu Aug 8 13:21:28 EDT 2013
On Thu, Aug 8, 2013 at 2:08 PM, Andrew Hamilton <ahamilto at tjhsst.edu> wrote:
> On 08/08/13 13:00, Durval Menezes wrote:
> Hello folks,
> During my testing here, I exported one of my pools on my (physical)
> machine, then attached it to a VM, booted the VM with Sabayon Spinbase 10
> Live DVD, then imported the pool with no issues. Then I exported the pool
> and powered off that VM, bringing it back on with OpenIndiana from the
> 151a7 Live CD; the first import worked OK, but I then powercycled the VM
> (*without* exporting the pool first) and rebooted OpenIndiana, and this
> time zpool import returned the following error:
> pool: pool01
> id: 14948663032033287073
> state: FAULTED
> status: The pool metadata is corrupted.
> action: The pool cannot be imported due to damaged devices or data.
> see: http://illumos.org/msg/ZFS-8000-72
> pool01 FAULTED corrupted data
> c5t1d0p0 ONLINE
> I saw this error last week; the problem turned out to be Linux
> multipathd was starting up on the server and creating device-mapper entries
> for the zpool drives which then marked the original drive as in-use and
> blocked ZFS.
> If you have multipath installed, try running multipath -f <wwid> for each
> of your ZFS drives; alternatively multipath -F will clear out all
> multipathed drives (if you don't care about them at all).
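For reference, the suggested per-drive flush could be scripted roughly like this (a sketch: the device names sdb/sdc are placeholders, and the scsi_id path varies by distribution):

```shell
# Hypothetical sketch: flush any device-mapper maps multipathd created over
# the pool's disks, so ZFS can get exclusive access to them again.
# sdb/sdc are placeholder device names -- substitute your pool's drives.
for dev in sdb sdc; do
    wwid=$(/lib/udev/scsi_id --whitelisted "/dev/$dev" 2>/dev/null)
    if [ -n "$wwid" ]; then
        multipath -f "$wwid"   # flush the multipath map for this one drive
    fi
done
# Or, if no multipath devices are wanted at all:
# multipath -F               # flush every unused multipath map
```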
Nope, no multipath installed here:
# whereis multipath
> Tried rebooting with Sabayon 10, running zpool import with the -F option,
> and assembling a directory of device links and specifying it with the -d
> option; then powered off the VM and re-tried everything with ZoL 0.6.1 on
> the host, but nothing has worked so far.
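For clarity, the attempts described above correspond roughly to these commands (a sketch: pool01 is the pool name from this thread, and /dev/disk/by-id is just one example of a device directory to pass to -d):

```shell
# Sketch of the recovery attempts described above (pool name from this
# thread; the by-id directory is an example device directory).
zpool import                             # scan and list importable pools
zpool import -d /dev/disk/by-id pool01   # look only in the given device dir
zpool import -Fn pool01                  # dry run: report whether a rewind
                                         # recovery (-F) would succeed
zpool import -F pool01                   # rewind to the last good txg,
                                         # discarding the newest transactions
```

The -n dry run is worth trying before -F, since -F actually discards the most recent transactions when it succeeds.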
> That the zpool imports on a different OS suggests that it is intact
zpool imported on two different OSes (Sabayon and OpenIndiana) at first,
but after my power-cycling the OpenIndiana VM without exporting the pool
first, it refuses to import *anywhere*.
> and that something is blocking ZFS getting exclusive access to the drives.
Humrmrmr... that other "something" would have to show up in lsof, right?
Nothing does right now:
# lsof -n /dev/sda1
So I don't think that's the case...
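One caveat with the lsof check: lsof only reports userspace opens, so a kernel-level holder such as a device-mapper map (the multipathd case described earlier in this thread) or md would not appear there. A sketch of checks that do cover those, with sda as a placeholder device:

```shell
# lsof misses kernel-level users of a disk; these checks cover them.
# "sda" is a placeholder -- substitute the pool's actual drive.
ls /sys/block/sda/holders/    # empty output => no dm/md layer holds the disk
fuser -v /dev/sda1            # userspace processes holding the partition open
dmsetup ls                    # list any device-mapper maps present at all
```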