[zfs-discuss] About zvol performance drop

Daobang Wang wangdbang at 163.com
Wed May 11 21:19:14 EDT 2016


    Sorry, I missed some information: I used a SAS2008 HBA, not a RAID controller, and the pool was created with ashift=9.

I will change the configuration from raidz to mirror and test it again, thank you very much.

Best Regards,
Daobang Wang.

At 2016-05-11 14:47:41, "Sebastian Oswald" <s.oswald at autogassner.de> wrote:
>Just to make sure: the disks are directly attached to the system via
>a "dumb" HBA, not via a RAID controller as single-disk RAID0 or any
>other funky configuration? RAID controllers can't cope with how ZFS
>handles the disks and may/will "do funny things" (though not funny for
>performance or your data...).
>Also, was the pool created with 4k blocksize (ashift=12)?
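One way to verify which sector size a pool is actually using is to inspect its cached configuration (a sketch; the pool name "tank" is an assumption):

```shell
# Show the ashift recorded for each vdev; "tank" is a placeholder pool name.
zdb -C tank | grep ashift
# ashift: 9 means 512-byte sectors; ashift: 12 means 4k sectors.
```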
>ZFS gets its speed from spreading I/O over all available VDEVs. Major
>rule of thumb: the more VDEVs, the more performance (at reasonable
>numbers of VDEVs...).
>RAIDZs are a tradeoff between usable space, redundancy and speed -
>with priorities descending in this order.
>For high random-I/O applications like VM storage you should definitely
>consider another disk layout. Maximum performance would be a 4 x 2
>mirror layout (four two-way mirrors), which also gives the best
>flexibility (you can grow the pool two disks at a time), but only 50%
>usable space.
>A good tradeoff with usable space is 2 x RAIDZ1 with 4 disks each. You
>should benchmark both layouts and decide based on your requirements.
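The two layouts above might be created as follows (a sketch only; the pool name "tank" and the device names are placeholders, and ashift=12 is assumed for 4k disks):

```shell
# Layout A: four 2-way mirrors -- best random I/O, 50% usable space.
zpool create -o ashift=12 tank \
    mirror sda sdb mirror sdc sdd mirror sde sdf mirror sdg sdh

# Layout B: two 4-disk RAIDZ1 vdevs -- more usable space (~75%), fewer IOPS.
zpool create -o ashift=12 tank \
    raidz1 sda sdb sdc sdd raidz1 sde sdf sdg sdh
```

Either layout can then be benchmarked with the same fio workload to compare.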
>Another important point is the L2ARC and SLOG. For high IOPS add an SSD
>backed L2 cache and SLOG. This gives by far the biggest performance
>boost for ZFS, especially when using it as a storage provider for
>multiple systems with relatively low memory on the storage system. 32GB
>is relatively low in ZFS-terms - always try to throw as much RAM at ZFS
>as technically and financially possible. 
>For SSD cache/SLOG, use only proper server-grade SSDs, as consumer
>SSDs will be hammered to death within a few months (I killed 2
>cheap 60GB SSDs in a test system within 3 months...). SATA/SAS SSDs
>are fine; NVMe or PCIe are much better.
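Adding the SSD devices described above might look like this (a sketch; pool and device names are assumptions, and note that a SLOG only benefits synchronous writes):

```shell
# Attach a mirrored SLOG (safer: losing an unmirrored SLOG can lose
# in-flight sync writes) and a single L2ARC device to pool "tank".
zpool add tank log mirror nvme0n1 nvme1n1
zpool add tank cache nvme2n1
```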
>The backing spinning-disk layout should be improved anyway, because ZFS
>throttles writes if dirty data piles up too big too fast for the
>VDEVs to keep up.
>This behaviour is tunable, but in almost every case the defaults are
>perfectly fine and shouldn't be touched! Making things worse by tuning
>these parameters is far easier and more common than actually gaining
>any performance.
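For reference, the write-throttle defaults on ZFS on Linux can be inspected (without changing them) through module parameters; the paths below assume a ZoL 0.6.x-era release:

```shell
# Read-only inspection of the dirty-data limits behind the write throttle.
cat /sys/module/zfs/parameters/zfs_dirty_data_max
cat /sys/module/zfs/parameters/zfs_delay_min_dirty_percent
```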
>In short:
>1. change your disk layout
>2. add SSD-backed L2ARC and SLOG
>3. ....
>4. profit
>Sebastian Oswald
>> Hi All,
>>     I set up a system (32GB DDR3) with 8 SAS disks (15000 RPM) in a
>> raidz1 (sync disabled), created one 500GB zvol (sync disabled), and
>> exported the zvol via a QLE2562 FC HBA with SCST 3.1.0 (write back,
>> fileio). The client ran CentOS 6.5 x86_64, and the test command was
>> "fio -filename=/dev/sdb -direct=1 -iodepth=32 -thread -numjobs=1
>> -ioengine=psync -rw=randwrite -bs=8k -size=64G -group_reporting
>> -name=fio_8k_64GB_RW", run continually. At the start, the speed was
>> about 260MB/s (iostat -xm 1), but after several runs, performance
>> dropped to 3MB/s.
>>     Would anybody give me a clue? Where is the root cause? How to
>> improve it?
>> Thank you very much.
>> Best Regards,
>> Daobang Wang.
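To narrow down where the slowdown occurs, it can help to watch the pool while the fio test runs (a sketch; the pool name "tank" is an assumption):

```shell
# Per-vdev bandwidth and IOPS, refreshed every second:
zpool iostat -v tank 1

# ARC size and hit rate (shipped as arcstat.py on older ZFS on Linux):
arcstat.py 1
```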
