[zfs-discuss] ZFSOnLinux I/O Benchmarks

aurfalien aurfalien at gmail.com
Mon Oct 21 15:48:15 EDT 2013


On Oct 20, 2013, at 3:51 PM, Achim Gottinger wrote:

> On 21.10.2013 00:24, Achim Gottinger wrote:
>> On 20.10.2013 19:41, Micky Martin wrote:
>>> When setting up a test bed for ZFS, I was looking for benchmarks on what to expect in hypervisor environments, but I didn't find any here on the mailing list. So I am sharing mine. Hopefully it will serve as a starting point for someone else.
>>> 
>>> Testing was done on two nodes with the latest ZFS and the Xen hypervisor, on both Windows and Linux DomUs, both inside the DomUs and outside on Dom0. Hardware and setup details are included as well. This is without any fancy ZIL or L2ARC devices.
>>> https://gist.github.com/anonymous/7072470
>>> 
>>> While stress testing with prolonged iozone runs on the DomUs, I noticed spl_kmem_cache usage spiking, but I didn't get any soft lockups and the load always came back down.
>>> 
>>> If you have any thoughts or want to share your benchmarks and/or tips/tricks, please feel free to! Thanks.
>>> 
>> Not an expert here, but you may want to take a look at the fio benchmark, which is derived from iometer. It runs under Windows and Linux and can give you more real-world results.
>> Personally I use the iometer-file-access-server configuration from the examples section, with ioengine=posixaio and direct=0 on ZFS. To use a less random distribution for the test I also set random_distribution=zipf:1.2, which is a feature of fio 2.1.
>> Make sure your test file size exceeds your ARC size. Before each run I also drop the OS's caches with "echo 3 > /proc/sys/vm/drop_caches".
> Another side note: ZFS also needs fallocate=none in the fio config.

Ok cool, got it.

So basically your file is like this:

[global]
description=Emulation of Intel IOmeter File Server Access Pattern
 
[iometer]
bssplit=8k/100
rw=randrw
rwmixread=50
direct=0
size=44g 
ioengine=posixaio
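# ZFS has no fallocate support, hence fallocate=none (per Achim's note above)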
fallocate=none
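# zipf distribution requires fio 2.1 or newer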
random_distribution=zipf:1.2
# IOMeter defines the server loads as the following:
# iodepth=1     Linear
# iodepth=4     Very Light
# iodepth=8     Light
# iodepth=64    Moderate
# iodepth=256   Heavy
iodepth=32  
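
I'd then kick that off with something like this (assuming the job file is saved as zfs-server.fio; the name is just my guess):

fio zfs-server.fio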

Where one would change the size to be greater than the ARC, and the iodepth to 64 or more, in reality?

If, say, I've got a 128GB ARC and set arc_max and arc_min to 2G, then size could be 4g to really blow past it?  I don't feel like pulling RAM or running with size=140g, for example.
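
In other words, something like this before each run (rough sketch; I'm assuming the stock ZoL module parameters under /sys/module/zfs/parameters, with 2147483648 being 2G in bytes; if they aren't writable at runtime the values can go in /etc/modprobe.d/zfs.conf instead):

# cap the ARC at 2G so even a modest test file blows past it
echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max
echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_min
# and drop the page cache before the run, per Achim's tip
echo 3 > /proc/sys/vm/drop_caches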

What do you think?

- aurf

To unsubscribe from this group and stop receiving emails from it, send an email to zfs-discuss+unsubscribe at zfsonlinux.org.


