[zfs-discuss] cannot import 'home': I/O error Destroy and re-create the pool from a backup source

Daniel Armengod daniel at armengod.cat
Sat Apr 21 15:23:52 EDT 2018


*Wait for someone more knowledgeable to provide advice*

I was in a similar situation just yesterday: a 4-drive RAIDZ1 with one
drive completely dead and a second one acting up.

What I did (again: wait for someone else to provide input on this) was to:

* Disable ZFS automatic import on boot (in my case, systemctl disable
zfs-import.target). In fact, boot the system with as few non-essential
processes and services running as you can (rough command sketches for
these steps follow the list).

* Check dmesg for error messages. Disks acting up will leave a very
recognizable pattern there.

* Make a full non-destructive, read-only badblocks pass on each device.
This will tell you whether they can withstand reads. If any disks are
not yet dead but dying, the stress will leave logs in the kernel ring
buffer, so check dmesg regularly. Identify how many flaky drives you
have, and pray you don't break the redundancy threshold.

* Check the ZFS data structures with zdb. With the pool unmounted, run
zdb -l /dev/<path_to_drive> for each member drive. Make sure to provide
the partition/slice number, even if you gave ZFS whole disks to build
the pool. Of particular note is the txg number: it should be the same
on all devices. I believe it needs to match on *at least* n-2 devices
for a raidz2.

* Go back and read the zdb manual; it's quite interesting.
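
For reference, here is a rough sketch of the quarantine-and-test steps
above. It assumes a systemd-based live environment and uses a
placeholder device path; adapt both to your setup:

    # keep the pool from being imported automatically at boot
    # (systemd example; adjust to your init system)
    systemctl disable zfs-import.target

    # follow the kernel ring buffer in one terminal while the tests run
    dmesg -w

    # in another terminal: full read-only badblocks pass per member disk.
    # Read-only is badblocks' default mode; -s shows progress, -v reports
    # errors as they are found. /dev/sdX is a placeholder.
    badblocks -sv /dev/sdX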
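
The zdb label check could look like this. The -part1 suffix is an
assumption based on how ZFS usually partitions whole disks; point it at
whatever partition actually holds the vdev data:

    # dump the ZFS labels on each member and compare their txg values;
    # healthy members of the same pool should agree
    for d in /dev/disk/by-id/wwn-*-part1; do
        echo "== $d =="
        zdb -l "$d" | grep -w txg
    done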

In my case, the two remaining healthy RAIDZ1 disks showed a higher txg
number than the faulty one.

zpool import -FX tank did the trick for me. I was lucky and recovered
all the data I thought I'd lost (if anything is gone, I've yet to
notice; the pool mostly contains anime :P).
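
If you try the same thing, it may be worth a dry run first. A minimal
sketch, assuming your pool name and the flags as documented in
zpool(8): -F rewinds the pool to an earlier txg, -X allows an extreme
rewind that can discard the most recent transactions, and -n only
reports whether recovery would succeed without performing it:

    # dry run: does a (possibly extreme) rewind look feasible?
    zpool import -FXn home

    # if so, attempt it for real; -N skips mounting the datasets
    zpool import -N -FX home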

After a successful recovery, re-import the pool with -o readonly=on and
zfs send the datasets you care about somewhere safe and reliable (a
sketch follows below). Then you can reshuffle disks until you trust
your pool again.
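
A sketch of that evacuation step, assuming the pool already has
snapshots (a pool imported readonly cannot take new ones) and a
hypothetical receiving host and pool:

    zpool export home
    zpool import -N -o readonly=on home

    # replicate an existing snapshot of each dataset you care about;
    # home/data@last, backuphost and backup/home are placeholders
    zfs send -R home/data@last | ssh root@backuphost zfs recv -d backup/home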

Best of luck,


On 2018-04-21 19:36, Anton Gubar'kov via zfs-discuss wrote:
> Hi, 
>
> My backup server recently froze, leaving me with a non-importable
> pool. Since it's a backup server, I have no further backups, so
> following the advice in the diagnostic message is not an option for
> me. I would like to recover this pool, as it contains some valuable
> data I can never reproduce (a video archive).
>
> So my status today is: 
>
> root@sysresccd /root % zpool import -N -f home
> cannot import 'home': I/O error
>         Destroy and re-create the pool from
>         a backup source.
>
> root@sysresccd /root % zpool import -N
>    pool: home
>      id: 4810743847386909334
>   state: ONLINE
>  status: One or more devices contains corrupted data.
>  action: The pool can be imported using its name or numeric identifier.
>    see: http://zfsonlinux.org/msg/ZFS-8000-4J
>  config:
>
>         home                            ONLINE
>           raidz2-0                      ONLINE
>             wwn-0x5000c500a41a0a00      ONLINE
>             wwn-0x5000c500a41ae340      ONLINE
>             wwn-0x5000c500a41b4c57      ONLINE
>             wwn-0x5000c500a41b7572      ONLINE
>             wwn-0x5000c500a41ba99c      ONLINE
>             wwn-0x5000c500a41babe8      ONLINE
>         cache
>           sdj3
>         logs
>           wwn-0x30000d1700d9d40f-part2  ONLINE
>
> I tried import -F and import -FX too - no luck :-[
> I have reviewed all the similar cases a Google search turned up. I'm
> really confused: I built the 6-drive raidz2 precisely for its
> resilience, and now I face availability issues.
>
> Can someone experienced provide advice on recovery?
>
> My ZFS version:
> v0.7.7-r0-gentoo
>
> This is also my root pool, so I can't boot my normal rig; I'm booting
> a recovery environment based on the systemrescuecd live system from
> https://wiki.gentoo.org/wiki/User:Fearedbliss.
>
>
> Thanks in advance.
> Anton.


