[zfs-discuss] ZFSOnLinux I/O Benchmarks

Achim Gottinger achim at ag-web.biz
Sun Oct 20 18:24:43 EDT 2013


Am 20.10.2013 19:41, schrieb Micky Martin:
> When setting up a test bed for ZFS, I was looking for benchmarks on 
> what to expect in hypervisor environments, but I didn't find any here 
> at the mailing list. So I am sharing mine. Hopefully it will help as a 
> testing ground for someone else.
>
> Testing was done on two nodes with latest ZFS and Xen hypervisor on 
> both Windows and Linux DomUs, both inside and outside on Dom0. 
> Hardware and setup details are mentioned as well. This is without any 
> fancy ZIL or L2ARCs.
> https://gist.github.com/anonymous/7072470
>
> While stress testing with prolonged iozone on DomUs, I noticed 
> spl_kmem_cache spiking up, but didn't get softlocks and the load 
> always came down.
>
> If you have any thoughts or want to share your benchmarks and/or 
> tips/tricks, please feel free to! Thanks.
>
> To unsubscribe from this group and stop receiving emails from it, send 
> an email to zfs-discuss+unsubscribe at zfsonlinux.org.
Not an expert here, but you may want to take a look at the fio 
benchmark, which is derived from iometer. It runs under Windows and 
Linux and can give you more real-world results.
Personally I use the iometer-file-access-server configuration from the 
examples section with ioengine=posixaio and direct=0 on ZFS. To get a 
less random access distribution for the test I also use 
random_distribution=zipf:1.2, which is a feature of fio 2.1.
Make sure your test file size exceeds your ARC size. Before each run I 
also drop the OS's caches with "echo 3 > /proc/sys/vm/drop_caches".
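For reference, a fio job file combining those settings might look roughly 
like the sketch below. It is modeled loosely on fio's stock 
iometer-file-access-server example with the tweaks mentioned above; the 
block-size split, the size=16g value, and the /tank/fio directory are 
assumptions you should adjust (in particular, make size larger than your 
ARC).

```ini
; Sketch only -- based on fio's iometer-file-access-server example,
; adapted with the settings discussed above. Not the verbatim stock file.
[global]
description=IOMeter-style file server access pattern on ZFS
ioengine=posixaio
direct=0                        ; buffered I/O, as used on zfs above
rw=randrw
rwmixread=80
random_distribution=zipf:1.2    ; requires fio >= 2.1
bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/9:32k/4:64k/5

[fileserver]
size=16g                        ; assumption: pick something larger than your ARC
directory=/tank/fio             ; assumption: mountpoint of the pool under test
```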

achim~



