[zfs-discuss] dysmetric vdev

Cyril Plisko cyril.plisko at infinidat.com
Thu Oct 3 00:32:47 EDT 2013


Cédric,

Can it be that, in the past, the first vdev (slot0..slot6) experienced an offline
or otherwise unavailable disk?
If so: ZFS avoids distributing writes to a vdev in the DEGRADED state. If such a
state persists for a prolonged period of time, the other vdev can
accumulate visibly more data.
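
If that is the case, traces of it should still be visible. A rough way to check
(stock zpool commands; the grep patterns below are only a coarse filter, and
zpool events may be empty after a reboot):

# any device currently degraded/faulted
zpool status -v ztank

# the internal pool history usually records past vdev state changes
zpool history -il ztank | grep -iE 'degrad|fault|offline'

# recent kernel-side pool events, if still retained
zpool events -v ztank | grep -iE 'statechange|vdev'

# watch the per-vdev write distribution; new allocations are biased toward
# the vdev with more free space, so the gap only closes as data is rewritten
zpool iostat -v ztank 5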


On Thu, Oct 3, 2013 at 1:20 AM, Cédric Lemarchand <
cedric.lemarchand at ixblue.com> wrote:

> Hello list,
>
> I don't understand why ZFS didn't spread the data equally across the vdevs of
> the same pool on this box:
>
> root@renoir-ztank1:~# zpool list -v
> NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
> ztank    38T  22.1T  15.9T    58%  1.00x  ONLINE  -
>   raidz2    19T  9.45T  9.55T         -
>     slot0      -      -      -         -
>     slot1      -      -      -         -
>     slot2      -      -      -         -
>     slot3      -      -      -         -
>     slot4      -      -      -         -
>     slot5      -      -      -         -
>     slot6      -      -      -         -
>   raidz2    19T  12.7T  6.31T         -
>     slot7      -      -      -         -
>     slot8      -      -      -         -
>     slot9      -      -      -         -
>     slot10      -      -      -         -
>     slot11      -      -      -         -
>     slot12      -      -      -         -
>     slot13      -      -      -         -
> cache      -      -      -      -      -      -
>   scsi-SATA_Samsung_SSD_840S1ATNSAD604969F-part3   220G  1.08G   219G      -
>   scsi-SATA_Samsung_SSD_840S1ATNSAD604976V-part3   220G  1.07G   219G      -
>
> But on the other one it does:
>
> root@europe-ztank1:/dev/disk/by-id# zpool list -v
> NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
> ztank    38T  28.1T  9.90T    73%  1.00x  ONLINE  -
>   raidz2    19T  14.0T  4.95T         -
>     slot0      -      -      -         -
>     slot1      -      -      -         -
>     slot2      -      -      -         -
>     slot3      -      -      -         -
>     slot4      -      -      -         -
>     slot5      -      -      -         -
>     slot6      -      -      -         -
>   raidz2    19T  14.0T  4.95T         -
>     slot7      -      -      -         -
>     slot8      -      -      -         -
>     slot9      -      -      -         -
>     slot10      -      -      -         -
>     slot11      -      -      -         -
>     slot12      -      -      -         -
>     slot13      -      -      -         -
> cache      -      -      -      -      -      -
>   scsi-SATA_Samsung_SSD_840S1ATNEAD518245D-part3   220G  10.6G   209G      -
>   scsi-SATA_Samsung_SSD_840S1ATNEAD504915J-part3   220G  10.8G   209G      -
>
> Of course, both pools were created with their two vdevs from the very beginning.
> Any idea how this can happen?
>


