[zfs-discuss] Slow read performance

Gordan Bobic gordan.bobic at gmail.com
Mon Apr 2 08:37:31 EDT 2018


You do realize that measuring sequential I/O performance with dd will
produce numbers that bear no meaningful relation to any multi-user
workload you are likely to throw at it, especially something as
metadata-intensive (small random I/O) as Lustre, right?
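
If you want numbers that mean something for that kind of workload,
run many concurrent small random jobs with something like fio. A
rough sketch only - every path and parameter below is illustrative,
not a tuned recommendation:

    # 16 concurrent jobs doing 4k random reads against the pool --
    # much closer to Lustre-style traffic than a single dd stream.
    # /tank/fiotest and the sizes/depths are placeholders.
    fio --name=randread --directory=/tank/fiotest --ioengine=libaio \
        --rw=randread --bs=4k --numjobs=16 --iodepth=32 \
        --size=4G --runtime=60 --time_based --group_reporting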

On Mon, 2 Apr 2018, 12:05 Alex Vodeyko via zfs-discuss
<zfs-discuss at list.zfsonlinux.org> wrote:

> Yes, I'm now running a zpool with two 12+3 raidz3 VDEVs. Read
> performance is definitely better than with the zpool of three 8+2
> raidz2 VDEVs (though the biggest read improvement still comes from
> ashift=9):
> - "dd" ashift=9:  write = 2.3 GB/s, read = 1.8 GB/s
> - "dd" ashift=12: write = 2.8 GB/s, read = 1.2 GB/s
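>
> For reference these were plain single-stream sequential dd runs,
> along these lines (exact sizes and flags here are illustrative, with
> the test file kept well beyond ARC size):
>
>   # illustrative invocations, not the verbatim runs
>   dd if=/dev/zero of=/tank/testfile bs=1M count=102400   # write
>   dd if=/tank/testfile of=/dev/null bs=1M                # read
>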
> I've decided on 2x 30-drive zpools, since there was not much
> performance difference between 30 and 60 drives in any of the zpool
> configs I tested. I will be using Lustre, so the plan is 2x Lustre
> OSS servers with 30 drives each.
> I'm still choosing the zpool layout - from the benchmarks, raidz3 is
> only about 100 MB/s slower - but it would be great to get advice on
> the best layout for 30 drives (raidz2 vs raidz3: 12+2, 13+2, 12+3).
> For now I'm leaning towards two 12+3 raidz3 VDEVs with ashift=9.
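>
> In case it helps, the two-VDEV layout under test is created roughly
> like this (a sketch - the sdX names are placeholders for the real
> by-id paths):
>
>   # two 15-disk raidz3 vdevs (12 data + 3 parity each), ashift=9
>   zpool create -o ashift=9 tank \
>     raidz3 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn sdo \
>     raidz3 sdp sdq sdr sds sdt sdu sdv sdw sdx sdy sdz sdaa sdab sdac sdad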
>
> Thanks,
> Alex
>
> 2018-04-01 23:20 GMT+03:00 Andreas Dilger <adilger at dilger.ca>:
> > On Mar 31, 2018, at 11:23 PM, Alex Vodeyko <alex.vodeyko at gmail.com> wrote:
> >> As a reminder, all of the above came from a zpool of six 8+2 raidz2
> >> VDEVs (60 drives total).
> >>
> >> For comparison I created a zpool with a single 8+2 raidz2 VDEV (10
> >> drives) and reran the tests on it, so:
> >
> > Have you tried a different geometry, like 5x 10+2 RAID-Z2 VDEVs?  At
> > one time there was a bug in the block allocation code that made 8+2
> > not work as well as 9+2 or 7+2.  That _should_ have been fixed in the
> > 0.7.x version you are running, but there might still be some problems.
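> >
> > For the 60-drive chassis that geometry would be created along these
> > lines (a sketch only; the dNN names are placeholders for real
> > devices):
> >
> >   # five 12-disk raidz2 vdevs (10 data + 2 parity each)
> >   zpool create tank \
> >     raidz2 d00 d01 d02 d03 d04 d05 d06 d07 d08 d09 d10 d11 \
> >     raidz2 d12 d13 d14 d15 d16 d17 d18 d19 d20 d21 d22 d23 \
> >     raidz2 d24 d25 d26 d27 d28 d29 d30 d31 d32 d33 d34 d35 \
> >     raidz2 d36 d37 d38 d39 d40 d41 d42 d43 d44 d45 d46 d47 \
> >     raidz2 d48 d49 d50 d51 d52 d53 d54 d55 d56 d57 d58 d59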
> >
> > Cheers, Andreas

