[zfs-discuss] ZFS on Large SSD Arrays

aurfalien aurfalien at gmail.com
Tue Oct 29 17:13:01 EDT 2013


On Oct 29, 2013, at 2:03 PM, Doug Dumitru wrote:

> 
> On Tuesday, October 29, 2013 1:44:08 PM UTC-7, Niels de Carpentier wrote:
> > I have been doing some testing on large SSD arrays.  The configuration is: 
> 
> [ ... snipped ... ]
> > My bigger concern is what is happening to the drives underneath.  During 
> > this test above, I watched the underlying devices with 'iostat' and they 
> > were doing 365.27MB/sec of "actual writes" at the drive level.  This is a 
> > "wear amplification" of more than 5:1.  For SSDs wear amplification is 
> > important because it directly leads to flash wear out. 
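> 
> > (For reference, the drive-level numbers came from something along these lines, 
> > with the device names obviously depending on the setup: 
> > 
> >   iostat -xm 5 sdb sdc sdd ...    # MB/s written at the drive level, 5 second intervals 
> > 
> > and "wear amplification" here is simply the MB/s written at the drives divided 
> > by the MB/s the benchmark itself was writing.) 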
> 
> > Likely the ashift was automatically set to 13 (8K), which causes each 4K 
> > block to be written as 8K. Make sure to specify ashift=12 when creating 
> > the pool. Also, sync writes are by default first written to the ZIL (which 
> > lives on the main pool if no separate log device is specified), so that 
> > doubles the writes as well. Set the zvol logbias property to throughput to 
> > disable this. 
> 
> ashift is at 12 (4K).  I re-ran the test with logbias set to throughput for the zvol and it had no impact.
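> 
> For the record, roughly what I checked/set (the pool and zvol names here are 
> just placeholders from my test box): 
> 
>   zdb -C tank | grep ashift            # shows ashift: 12 on each vdev 
>   zfs set logbias=throughput tank/vol0 
>   zfs get logbias tank/vol0 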
>  
> > 
> > Just for fun, I re-ran the above tests with the zpool configured as 
> > raidz3.  With triple-parity raid, the wear amplification jumped to 23:1. 
> 
> > Yes, with 4K blocks this is essentially a 4-way mirror, and so will write 
> > 4 times the amount of data. You can use striped mirrors instead for 
> > redundancy and better performance. 
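> 
> > (For example, something like this, with placeholder device names; each mirror 
> > vdev can also be made 3-wide if you want more redundancy: 
> > 
> >   zpool create -o ashift=12 tank mirror sdb sdc mirror sdd sde mirror sdf sdg 
> > 
> > Writes then stripe across the mirror pairs instead of paying raidz parity on 
> > every block.) 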
> 
> I expect 4x redundancy, but not 4x space usage.  This is supposed to be "parity", so it should be data + 3 parity, not 4 full copies.  
> 
> > 
> [ ... snipped ... ]
> 
> > Comments on tuning for pure SSD arrays would be very welcome.  I have seen 
> > some posts that imply the same issues have been seen elsewhere.  For 
> > example, someone tried a pair of Fusion-IO cards and was stuck at 400MB/sec 
> > on writes.  This setup is larger/faster than a pair of Fusion cards. 
> > Again, any help would be appreciated. 
> 
> > Increasing zfs_vdev_max_pending fixed the issue for the Fusion-IO cards, 
> > but I'm not sure if that can be set for zvols. 
> 
> I set this to 64 (from 10) in /sys/module/zfs/parameters and it "seemed" to accept the new value.  No change in write performance or in the underlying writes.  Oh well.
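> 
> For reference, what I did was essentially: 
> 
>   echo 64 > /sys/module/zfs/parameters/zfs_vdev_max_pending 
>   cat /sys/module/zfs/parameters/zfs_vdev_max_pending     # reads back 64 
> 
> (Adding "options zfs zfs_vdev_max_pending=64" to /etc/modprobe.d/zfs.conf would 
> presumably make it stick across module reloads, but for this test I only set it 
> at runtime.) 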
> 
> 
> Niels 
> 
> Again, thank you for the reply.  I am really trying to understand the issues here.

Curious, and not to dilute the thread, but could it be a limit of whatever interface you are using for the SSDs?

I've also noticed a 30% difference in my tests (fio, dd, iozone) simply by manipulating power settings in the BIOS and in the OS (whatever distro you are using).
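
For example, on the OS side I typically just pin the CPU governor to performance 
before benchmarking (tool and paths vary by distro): 

  cpupower frequency-set -g performance 

or, without cpupower: 

  for g in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do 
      echo performance > "$g" 
  done 

The BIOS side (C-states, speedstep/turbo, power profiles) has to be done by hand.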

- aurf
