[zfs-discuss] looking through recent benchmarks online, and wtf?

Gordan Bobic gordan.bobic at gmail.com
Fri Apr 22 04:41:13 EDT 2016


FWIW, I found in my AWS tests that, in an optimal state of tune, ZFS with
LZ4 beat MD RAID + ext4 in a like-for-like configuration by a wide margin
in MySQL/InnoDB tests I carried out for a client last year. I honestly
didn't expect that result, but the numbers were quite conclusive.
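
For the curious, "state of tune" here means the usual ZFS-for-InnoDB
dataset properties. A minimal sketch (pool and dataset names are made up,
and this is not the exact property set from those tests):

    # Match recordsize to InnoDB's 16K page size for the data files
    zfs create -o recordsize=16k -o compression=lz4 tank/mysql/data
    # Redo logs are written sequentially; bias the ZIL toward throughput
    zfs create -o compression=lz4 -o logbias=throughput tank/mysql/log
    # Optionally avoid double caching, since the InnoDB buffer pool
    # already holds the hot data pages
    zfs set primarycache=metadata tank/mysql/data

Since ZFS records are written atomically, some people also disable the
InnoDB doublewrite buffer on top of this, but that is a separate decision.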

On Fri, Apr 22, 2016 at 8:55 AM, Schlacta, Christ via zfs-discuss <
zfs-discuss at list.zfsonlinux.org> wrote:

> I was looking up some ZFS benchmarks online, hoping to justify using
> ZFS for an upcoming project, and I stumbled across this gem of a
> benchmark.  The methodology seems very solid, and everything seems to
> be in order...
>
> https://www.diva-portal.org/smash/get/diva2:822493/FULLTEXT01.pdf
>
> except that I've noticed one major discrepancy.  While ZFS absolutely
> *TROUNCES* everything else in the OLTP filebench model, which is a
> bunch of small reads and writes, it FAILS miserably later on in the
> similar small random read and write benchmarks in iozone.
>
> Can anybody identify why that might be?  Is the benchmark outdated?
> Might recent bug fixes have resolved the issues?  What's different
> between the two benchmarks to justify that discrepancy?
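>
> For concreteness, the small random read/write side of a comparison
> like that is typically driven with something along these lines (these
> are standard iozone flags, but not necessarily the paper's exact
> invocation, and the file path is made up):
>
>     # -i 0 creates the file, -i 2 runs the random read/write phase
>     iozone -i 0 -i 2 -r 4k -s 4g -I -f /tank/testfile
>
> where -r sets the record size and -I requests O_DIRECT so the page
> cache stays out of the picture.  Whether a run uses -I, and whether
> the filesystem under test even honours it, could plausibly be the
> difference between "trounces" and "fails miserably".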
>
> My intended use case is lots of random IO, a fairly even mix of reads
> and writes, so I want to be sure to choose the correct backend
> (between ZFS and btrfs, single-disk operation only, depending on a
> higher layer for redundancy and repair in case of failure or fault)
> for the project: whichever backend provides the higher IOPS for the
> workload.
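>
> To pin the workload down, a fio job along these lines is roughly what
> I have in mind (the path and the 70/30 read/write mix are guesses on
> my part), run once on a ZFS dataset and once on a btrfs filesystem on
> the same disk:
>
>     fio --name=mixed --filename=/tank/fiotest --size=8g \
>         --rw=randrw --rwmixread=70 --bs=16k --iodepth=32 \
>         --ioengine=libaio --runtime=300 --time_based \
>         --group_reporting
>
> (No --direct=1 here: as far as I know, current ZFS on Linux builds do
> not support O_DIRECT, so buffered IO keeps the comparison fair.)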
>
> If it matters to anybody, the workload details are: hosting a bunch of
> OSDs for Ceph on openSUSE, then exporting those through LIO to either
> the vhost target or the fibre channel target to enable VM mobility and
> fibre channel multipath.  If you've got the disks lying around and
> want to benchmark the two for me, I wouldn't complain, but I'd much
> rather get concrete answers to the questions above so I can understand
> what's going on here.
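>
> (For anyone unfamiliar with that stack, the LIO side amounts to
> targetcli configuration roughly like the sketch below, exporting a
> mapped Ceph RBD image over a qla2xxx fibre channel port; the device
> path and the WWN are placeholders:
>
>     # Wrap the mapped RBD image in a block backstore
>     targetcli /backstores/block create name=vmstore dev=/dev/rbd0
>     # Create the FC target on the HBA port and export the LUN
>     targetcli /qla2xxx create 21:00:00:24:ff:00:00:01
>     targetcli /qla2xxx/21:00:00:24:ff:00:00:01/luns create \
>         /backstores/block/vmstore
>
> but the question above is really about the filesystem underneath the
> OSDs.)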