Size concerns on drive replacement

briaeros007 briaeros007 at gmail.com
Sun Jun 5 04:14:03 EDT 2011


2011/6/4 Ron Knapp <ron.siesta at gmail.com>:
> root at sandbox:~# zpool status
>  pool: crypt
>  state: UNAVAIL
> status: One or more devices could not be used because the label is missing
>        or invalid.  There are insufficient replicas for the pool to continue
>        functioning.
> action: Destroy and re-create the pool from
>        a backup source.
>   see: http://www.sun.com/msg/ZFS-8000-5E
>  scan: none requested
> config:
>
>        NAME        STATE     READ WRITE CKSUM
>        crypt       UNAVAIL      0     0     0  insufficient replicas
>          mirror-0  UNAVAIL      0     0     0  insufficient replicas
>            sda     UNAVAIL      0     0     0  corrupted data
>            sdg     UNAVAIL      0     0     0
>          mirror-1  ONLINE       0     0     0
>            sdc     ONLINE       0     0     0
>            sdd     ONLINE       0     0     0
>
> Well, my change did not go well at all. The exchange went fine; I had
> the new drive on an eSATA port at position /dev/sdg. When I pulled the
> bad drive out and installed the new drive, I somehow got the cables
> switched, and when I booted up I had a corrupt pool with 2 drives
> offline, telling me I needed to destroy it and rebuild from backup. I
> figured out that I had 2 cables switched, so I got that straight and
> the pool came back and appears to be ok. So how do I avoid this? When
> I originally created the pool I used the by-id/ reference, but after
> the first reboot they changed in the pool status to /dev/sdx. Is there
> any sure way to build it so cable position is not an issue?
>
> OK, on to what I have now.
>
>
>
> root at sandbox:~# zpool status
>  pool: crypt
>  state: DEGRADED
> status: One or more devices could not be used because the label is missing or
>        invalid.  Sufficient replicas exist for the pool to continue
>        functioning in a degraded state.
> action: Replace the device using 'zpool replace'.
>   see: http://www.sun.com/msg/ZFS-8000-4J
>  scan: resilvered 348G in 1h26m with 0 errors on Sat Jun  4 12:56:46 2011
> config:
>
>        NAME        STATE     READ WRITE CKSUM
>        crypt       DEGRADED     0     0     0
>          mirror-0  DEGRADED     0     0     0
>            sda     ONLINE       0     0     0
>            sdg     UNAVAIL      0     0     0
>          mirror-1  ONLINE       0     0     0
>            sdc     ONLINE       0     0     0
>            sdd     ONLINE       0     0     0
>
>
>
> Now I have the old drive g in the b slot. How do I remedy this
> situation and get the g removed and the b put in? It looks like I
> need to destroy the data on b (the old g) and then do the replace
> and let it rebuild again.
>
> zpool replace crypt /dev/sdg /dev/sdb
> invalid vdev specification
> use '-f' to override the following errors:
> /dev/sdb1 is part of active pool 'crypt'
> root at sandbox:~# zpool replace -f crypt /dev/sdg /dev/sdb
> invalid vdev specification
> the following errors must be manually repaired:
> /dev/sdb1 is part of active pool 'crypt'
>

Hello,

Can you try swapping the order of the devices in your replace command?
zpool replace -f crypt /dev/sdb /dev/sdg
- but why does zpool talk about /dev/sdb1 and not sdb? -
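If swapping the arguments still fails with the same error, the "is part
of active pool" message usually means the replacement drive still carries
a stale ZFS label from its earlier life in the pool. A possible sequence
to clear it (assuming /dev/sdb really is the drive you intend to
overwrite - double-check first, since labelclear is destructive):

```shell
# Confirm which physical drive /dev/sdb is before destroying anything
ls -l /dev/disk/by-id/ | grep sdb

# Clear the stale ZFS label on the replacement drive
# (-f forces the clear even though the label claims pool membership)
zpool labelclear -f /dev/sdb

# Then retry the replace: old device first, new device second
zpool replace crypt /dev/sdg /dev/sdb
```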

If that doesn't work, could you try leaving the "bad disk" in place and
adding sdg in a new slot (not the same one as your old disk)?
Then add sdg to your pool as a spare:
zpool add crypt spare /dev/sdg
Once that's done, try:
zpool online crypt /dev/sdg
zpool detach crypt /dev/sdb1
?
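On the original question about cable positions: you can switch an
existing pool over to persistent device names by exporting it and
re-importing with the by-id directory, so the pool records stable IDs
instead of sdX names. A sketch (pool name taken from your output; the
actual by-id paths will differ on your system):

```shell
# Export the pool first (filesystems must be unmounted / not in use)
zpool export crypt

# Re-import, telling zpool to resolve devices by their persistent IDs
zpool import -d /dev/disk/by-id crypt

# The status output should now show by-id names,
# which survive cable and controller-port swaps
zpool status crypt
```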

Cordially



More information about the zfs-discuss mailing list