[zfs-discuss] Unavailable pool after updating from 0.7.3 to 0.7.4

Andrew Carlson naclosagc at gmail.com
Sat Dec 16 21:21:24 EST 2017


Make sure that /dev/disk/by-id is populated.  I have had upgrades that
changed the mappings in /dev/disk/by-id.  You can try:

zpool import -d /dev/disk/by-id zpool1

or use one of the other /dev/disk directories, such as /dev/disk/by-uuid.
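
A minimal sketch of that procedure (assuming a POSIX shell; which
/dev/disk/by-* directories actually exist depends on your distribution's
udev rules, and the pool name zpool1 is taken from your report below):

# Verify that udev created by-id symlinks for the member disks:
ls -l /dev/disk/by-id/

# Try each device directory in turn; zpool import exits non-zero on
# failure, so the loop stops at the first directory whose names
# resolve the vdev paths:
for d in /dev/disk/by-id /dev/disk/by-path /dev/disk/by-uuid /dev; do
    zpool import -d "$d" zpool1 && break
done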

On Sat, Dec 16, 2017 at 6:47 PM, Miguel Medalha via zfs-discuss <
zfs-discuss at list.zfsonlinux.org> wrote:

> I updated zfs from 0.7.3 to 0.7.4 on a CentOS 6.9 server. I just did a
> "yum update zfs" and rebooted. My ZFS pool didn't mount and is now
> unavailable. zpool reports:
> zpool reports:
>
> cannot import 'zpool1': no such pool available
>
> I tried to import it using the /etc/zfs/zpool.cache file. "zpool import
> -c /etc/zfs/zpool.cache -Fn" gives the following:
>
>    pool: zpool1
>      id: 556390836401730730
>   state: UNAVAIL
>  status: One or more devices contains corrupted data.
>  action: The pool cannot be imported due to damaged devices or data.
>    see: http://zfsonlinux.org/msg/ZFS-8000-5E
>  config:
>
>         zpool1                      UNAVAIL  insufficient replicas
>           mirror-0                  UNAVAIL  insufficient replicas
>             wwn-0x50014ee25a7c6a19  UNAVAIL
>             wwn-0x50014ee2aff517bc  UNAVAIL
>
> "zdb -U /etc/zfs/zpool.cache" outputs the following:
>
> zpool1:
>     version: 5000
>     name: 'zpool1'
>     state: 0
>     txg: 4
>     pool_guid: 556390836401730730
>     errata: 0
>     hostname: 'pombal'
>     com.delphix:has_per_vdev_zaps
>     vdev_children: 1
>     vdev_tree:
>         type: 'root'
>         id: 0
>         guid: 556390836401730730
>         create_txg: 4
>         children[0]:
>             type: 'mirror'
>             id: 0
>             guid: 4706384182298630342
>             metaslab_array: 256
>             metaslab_shift: 31
>             ashift: 12
>             asize: 250045005824
>             is_log: 0
>             create_txg: 4
>             com.delphix:vdev_zap_top: 129
>             children[0]:
>                 type: 'disk'
>                 id: 0
>                 guid: 16730838872300733673
>                 path: '/dev/disk/by-id/wwn-0x50014ee25a7c6a19-part1'
>                 devid: 'ata-WDC_WD2500BEKT-66F3T2_WD-WX81AA017812-part1'
>                 phys_path: 'pci-0000:00:1f.2-scsi-1:0:0:0'
>                 whole_disk: 1
>                 create_txg: 4
>                 com.delphix:vdev_zap_leaf: 130
>             children[1]:
>                 type: 'disk'
>                 id: 1
>                 guid: 5440135513295625149
>                 path: '/dev/disk/by-id/wwn-0x50014ee2aff517bc-part1'
>                 devid: 'ata-WDC_WD2500BEKT-00A25T0_WD-WXE1AB0H9466-part1'
>                 phys_path: 'pci-0000:00:1f.2-scsi-0:0:0:0'
>                 whole_disk: 1
>                 create_txg: 4
>                 com.delphix:vdev_zap_leaf: 131
>     features_for_read:
>         com.delphix:hole_birth
>         com.delphix:embedded_data
>
> I was using the data on the pool right up until the update, and zpool
> reported the pool as healthy at that time.
> I am stuck now. I have about 150 GB of data on that pool, and not all of
> it has a backup. Any ideas?
>
> Thank you!
>
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at list.zfsonlinux.org
> http://list.zfsonlinux.org/cgi-bin/mailman/listinfo/zfs-discuss
>



-- 
Andy Carlson
---------------------------------------------------------------------------
Gamecube:$150,PSO:$50,Broadband Adapter: $35, Hunters License: $8.95/month,
The feeling of seeing the red box with the item you want in it:Priceless.

