[zfs-discuss] recommendations for a 50 TB zpool?

Richard Elling richard.elling at richardelling.com
Sun Nov 4 11:48:39 EST 2018

> On Nov 4, 2018, at 12:13 AM, Gordan Bobic via zfs-discuss <zfs-discuss at list.zfsonlinux.org> wrote:
> IMO, disks much over 4TB are bad news. I suggest you stick with 4TB disks. Avoid shingled or helium-filled disks, or those under 7200 rpm. Consult the last few years' worth of Backblaze statistics when choosing the disk brand and model.
> I concur that RAIDZ2 is a reasonable choice for reliability, but bear in mind that while it performs reasonably for sequential workloads, performance will be very poor for random read workloads (with 7200 rpm disks, you will get about 120 IOPS per vdev).
> 8-12 disks (or 16-24 with 4TB disks) are IMO too many for a single vdev. It will hurt both performance and reliability. 4x(4+2) would probably be better if you can afford it. If not, I guess a 2x(8+2) configuration wouldn't be too bad with 4TB drives.

We routinely make pools with 28 to 52 devices per pool. When you start thinking about pools with more than 100 devices, you should consider more pools. The “disks per vdev” rules-of-thumb for raidz are not likely to apply to draid, but in general more than 20 devices in a raidz vdev should be avoided.
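As a minimal sketch of the 4x(4+2) layout suggested above (device names here are hypothetical placeholders; in practice you would use stable /dev/disk/by-id paths, and the pool name "tank" is just an example):

```shell
# Hypothetical sketch: four 6-disk RAIDZ2 vdevs (4 data + 2 parity each),
# 24 disks total. sdb..sdy are placeholder device names.
zpool create tank \
  raidz2 sdb sdc sdd sde sdf sdg \
  raidz2 sdh sdi sdj sdk sdl sdm \
  raidz2 sdn sdo sdp sdq sdr sds \
  raidz2 sdt sdu sdv sdw sdx sdy
```

Each raidz2 vdev survives two disk failures independently, and random I/O throughput scales with the number of vdevs rather than the number of disks.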

> Note that random write I/O performance will be 120 IOPS per vdev (480 in first case, 240 in the other). It will be better for reads at approximately 720 and 300 respectively. Do some testing and maths to see whether that will suffice for your workload.

You’ve got this backwards: raidz does well on random writes. It is the small, random read performance that suffers. The penalty is smaller with a small volblocksize/recordsize relative to the 4k physical block size, but at the cost of space efficiency.
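Per the correction above, the back-of-envelope numbers apply to small random reads: a raidz vdev delivers roughly one spindle's worth of them, so read IOPS scale with vdev count. A quick sanity check with shell arithmetic (the ~120 IOPS per 7200 rpm disk figure is the thread's assumption):

```shell
# Back-of-envelope: small random read IOPS on raidz ~= one spindle per vdev.
# ~120 IOPS per 7200 rpm disk is the assumption used in this thread.
PER_VDEV=120
echo "4x(4+2): $((4 * PER_VDEV)) small-random-read IOPS"
echo "2x(8+2): $((2 * PER_VDEV)) small-random-read IOPS"
```

This prints 480 and 240 respectively, which is why more, narrower vdevs are preferred for random-read-heavy workloads.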

  -- richard

>> On Sat, 3 Nov 2018, 23:34 Ulli Horlacher via zfs-discuss <zfs-discuss at list.zfsonlinux.org> wrote:
>> I have to set up a new F*EX (*) server with about 50 TB.
>> I will get eight 8-12 TB SAS or SATA disks.
>> The throughput will be about 1 TB/d (accumulated read+write).
>> Reliability is more important than performance.
>> Rebuilding a failed and replaced 12 TB disk will take many hours, and
>> during that time a second disk failure would be catastrophic.
>> Therefore I thought of using RAIDZ2.
>> So far I have used ZFS only with 2-disk-RAID1.
>> Any recommendations for my use case?
>> The OS will be CentOS (decision set by management, no discussion possible).
>> (*) https://fex.rus.uni-stuttgart.de/
>>     https://fex.rus.uni-stuttgart.de/features.html
>> -- 
>> Ullrich Horlacher              Server und Virtualisierung
>> Rechenzentrum TIK         
>> Universitaet Stuttgart         E-Mail: horlacher at tik.uni-stuttgart.de
>> Allmandring 30a                Tel:    ++49-711-68565868
>> 70569 Stuttgart (Germany)      WWW:    http://www.tik.uni-stuttgart.de/
>> REF:<20181103233358.GA15759 at rus.uni-stuttgart.de>
>> _______________________________________________
>> zfs-discuss mailing list
>> zfs-discuss at list.zfsonlinux.org
>> http://list.zfsonlinux.org/cgi-bin/mailman/listinfo/zfs-discuss
