[zfs-discuss] ZFS on RAID-5 array

Gregor Kopka (@zfs-discuss) zfs-discuss at kopka.net
Sun Apr 15 10:32:44 EDT 2018



On 15.04.2018 at 15:17, Gordan Bobic via zfs-discuss wrote:
> On Sun, Apr 15, 2018 at 2:01 PM, Edward Ned Harvey (zfsonlinux) via
> zfs-discuss <zfs-discuss at list.zfsonlinux.org> wrote:
>
>     3x raidz1: read IOPS of 3 disks, write IOPS of 3 disks, read
>     throughput of 12 disks, write throughput of 3 disks, capacity of
>     12 disks, guaranteed survival of at least 1 failure, probable
>     survival of 2 concurrent failures, max survival 3 concurrent failures
>
>
> Read IOPS would be 3 x 5/4 = 15/4 = 3.75 disks.
>
>     1x raidz2: read IOPS of 1 disk, write IOPS of 1 disk, read
>     throughput of 13 disks, write throughput of 13 disks, capacity of
>     13 disks, guaranteed survival of 2 failures, max survival 2
>     concurrent failures
>
>
> Read IOPS would be 15/13 disks.
> The reason is that reads don't touch the parity disks unless recovery
> due to a checksum failure is needed. Since the parity is rotated, the
> disks you didn't request a read from have those IOPS available to
> serve other requests.
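
To make the arithmetic above concrete, here is a minimal sketch of that
model (plain Python; the helper name is hypothetical, not from any ZFS
tool), assuming 15 drives total:

    # Model: a random read touches every data disk in a vdev but skips
    # the rotated parity disks, so the parity disks' IOPS stay free to
    # serve other requests.
    def effective_read_iops(vdevs, disks_per_vdev, parity):
        data_disks = disks_per_vdev - parity
        # Each vdev delivers disks_per_vdev / data_disks disk-equivalents.
        return vdevs * disks_per_vdev / data_disks

    print(effective_read_iops(3, 5, 1))   # 3 x 5/4 =  3.75 disks
    print(effective_read_iops(1, 15, 2))  # 15/13  ~= 1.15 disks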
One should keep in mind that the read IOPS figure only holds with
multiple clients. The best case a single-threaded random reader (i.e.
one that requests the next random read only after the last one
completed) can experience is the performance of one single (idle)
drive, regardless of how many top-level vdevs you have, unless caching
comes into play.
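
The single-thread case can be illustrated the same way (the latency
number is assumed, purely for illustration):

    # With queue depth 1 the next read is only issued after the last
    # one completed, so throughput is bounded by one drive's latency
    # no matter how many top-level vdevs exist.
    latency_ms = 8.0                         # assumed avg. random-read latency
    single_thread_iops = 1000.0 / latency_ms
    print(single_thread_iops)                # 125 IOPS, same as one idle drive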

Gregor