[zfs-discuss] Slow read performance
alex.vodeyko at gmail.com
Mon Apr 2 06:05:34 EDT 2018
Yes, I'm now running a zpool with two 12+3 raidz3 VDEVs. Read
performance is definitely better than the zpool with three 8+2 raidz2
VDEVs (and the major read performance difference still comes from ashift):
- "dd" ashift = 9: write = 2.3 GB/s, read = 1.8 GB/s
- "dd" ashift = 12: write = 2.8 GB/s, read = 1.2 GB/s
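For reference, a sequential "dd" benchmark of this kind usually looks roughly like the following; this is a hedged sketch, not the exact commands used above, and the pool name, file size, and block size are assumptions:

```shell
# Assumed pool name; adjust to the actual pool.
POOL=tank

# Sequential write: stream zeros with a large block size and force the
# data to stable storage at the end so the cache does not inflate the number.
dd if=/dev/zero of=/$POOL/ddtest bs=1M count=65536 conv=fdatasync

# Drop the page cache (Linux, needs root) so the read comes from disk,
# then do the sequential read back.
echo 3 > /proc/sys/vm/drop_caches
dd if=/$POOL/ddtest of=/dev/null bs=1M
```

dd reports throughput at the end of each run; the GB/s figures quoted above would come from those summary lines.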
I decided to go with 2x 30-drive zpools (because there seems to be
little performance difference between 30 and 60 drives in all the zpool
configs I tested). I will use the Lustre filesystem, so I decided to have
2x Lustre OSS servers with 30 drives each.
I'm still choosing the right zpool layout - from the benchmarks it seems
raidz3 is only about 100 MB/s slower.
But it would be great to get advice on the best zpool layout
(raidz2 vs raidz3; 12+2, 13+2, 12+3) for 30 drives.
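One way to compare the candidate layouts is raw parity overhead, i.e. the usable fraction data/(data+parity) per vdev; a quick check (layout names as in the list above):

```shell
# Usable-capacity fraction for each candidate 30-drive layout
# (two vdevs per pool in each case).
awk 'BEGIN {
  printf "12+2 raidz2 (15x2=30): %.1f%% usable\n", 100*12/14
  printf "13+2 raidz2 (15x2=30): %.1f%% usable\n", 100*13/15
  printf "12+3 raidz3 (15x2=30): %.1f%% usable\n", 100*12/15
}'
```

So 12+3 costs roughly 6-7% of capacity versus the raidz2 options, in exchange for surviving a third drive failure per vdev.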
I'm currently leaning towards two 12+3 raidz3 VDEVs with ashift=9.
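Creating that layout would look roughly like this; a sketch only, with an assumed pool name and placeholder device names (in practice /dev/disk/by-id paths are preferable to sdX names):

```shell
# One pool of two raidz3 vdevs, 15 drives each (12 data + 3 parity),
# with the sector-size property fixed at ashift=9 (512-byte sectors).
zpool create -o ashift=9 ostpool \
  raidz3 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn sdo \
  raidz3 sdp sdq sdr sds sdt sdu sdv sdw sdx sdy sdz sdaa sdab sdac sdad
```

Note that ashift=9 only makes sense on drives that really have 512-byte physical sectors; on 4K-sector drives it forces read-modify-write cycles.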
2018-04-01 23:20 GMT+03:00 Andreas Dilger <adilger at dilger.ca>:
> On Mar 31, 2018, at 11:23 PM, Alex Vodeyko <alex.vodeyko at gmail.com> wrote:
>> To remind - all of the above came from zpool of six 8+2 raidz2 (60 drives total)
>> For comparison I've created one zpool with single 8+2 raidz2 (10
>> drives) and rerun tests on it, so:
> Have you tried a different geometry, like 5x 10+2 RAID-Z2 VDEVs? At one time there was a bug in the block allocation code that made 8+2 not work as well as 9+2 or 7+2. That _should_ have been fixed in the 0.7.x version you are running, but there might still be some problems.
> Cheers, Andreas