[zfs-discuss] Many times more write IOPS after upgrade from 0.6.4.2 to 0.6.5.6

Phil Harman phil.harman at gmail.com
Wed May 11 06:35:21 EDT 2016


In my experience, metaslab fragmentation slows writes and increases reads (i.e. back in the olden days of space map thrashing, we'd do a lot of extra reads looking for a metaslab with sufficient space for the next write).
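If you want to check whether fragmentation is really the culprit, the pool-wide FRAG column hides a lot; zdb can show the per-metaslab picture. A rough sketch, assuming your pool is named userdata as in your output below (zdb is read-only, but -mm can take a while on a large pool):

    # Pool-wide fragmentation at a glance
    zpool list -o name,capacity,fragmentation userdata

    # Per-metaslab space map statistics
    zdb -mm userdata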

If reads are better, that might indicate that more of your ARC is being used as a read cache, and less for asynchronous writes.
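That's easy to sanity-check from the ARC kstats. Something like this, assuming a ZoL install with the kstats under /proc/spl (the third column is the counter value):

    awk '$1 ~ /^(size|hits|misses|mru_size|mfu_size)$/ { print $1, $3 }' \
        /proc/spl/kstat/zfs/arcstats

Comparing hits/misses on the old and new versions under the same workload would tell you how much of the read improvement is coming from the ARC.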

I haven't tracked the changes in the versions you are using, but you might like to play with some of the write caching and throttling parameters.
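On ZFS on Linux those are module parameters under /sys/module/zfs/parameters. A minimal sketch of the knobs I'd look at first; the 4 GiB figure is purely an illustration, not a recommendation for your pool:

    # Current values of the dirty data / write throttle knobs
    grep . /sys/module/zfs/parameters/zfs_dirty_data_max \
           /sys/module/zfs/parameters/zfs_delay_min_dirty_percent \
           /sys/module/zfs/parameters/zfs_vdev_async_write_max_active \
           /sys/module/zfs/parameters/zfs_txg_timeout

    # Example only: raise the dirty data ceiling to 4 GiB at runtime
    echo 4294967296 > /sys/module/zfs/parameters/zfs_dirty_data_max

    # To persist across reboots, e.g. in /etc/modprobe.d/zfs.conf:
    #   options zfs zfs_dirty_data_max=4294967296 zfs_txg_timeout=10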

> On 11 May 2016, at 10:42, Erik Jan Hofstede via zfs-discuss <zfs-discuss at list.zfsonlinux.org> wrote:
> 
> Anybody got any insights on this? Disks are way more utilized than they were on 0.6.4.2. It looks like heavy metaslab fragmentation, but it only occurs after updating to 0.6.5.6. 
> 
> To give you a better view on this behaviour: http://i.imgur.com/OrfsOBg.png
> 
> On the 1st of May I changed zfs_txg_timeout from 5 to 10, which seems to provide some relief.
> 
> Any help is appreciated!
> 
> Kind regards,
> 
> Erik Jan
> 
> 2016-05-01 19:36 GMT+02:00 Erik Jan Hofstede <ejhofstede at antagonist.nl>:
>> Hi,
>> 
>> I'm experiencing performance issues after upgrading from 0.6.4.2 to 0.6.5.6. Reads are actually a lot faster: there's much more cached data in ARC, and the realignment of task priorities helps too. Writes, however, are terrible. Before the upgrade disk utilization was around 5%; after the upgrade it is 40% on the same workload. What surprises me is that ZFS is issuing a lot more write IOPS than before. Can anybody help me with this?
>> 
>> zpool list:
>> NAME       SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
>> userdata  2.72T  2.03T   704G         -    60%    74%  1.00x  ONLINE  - 
>> 
>> zpool status:
>>   pool: userdata
>>  state: ONLINE
>> status: Some supported features are not enabled on the pool. The pool can
>> 	still be used, but some features are unavailable.
>> action: Enable all features using 'zpool upgrade'. Once this is done,
>> 	the pool may no longer be accessible by software that does not support
>> 	the features. See zpool-features(5) for details.
>>   scan: scrub repaired 0 in 25h35m with 0 errors on Tue Apr 12 18:40:36 2016
>> config:
>> 
>> 	NAME                                             STATE     READ WRITE CKSUM
>> 	userdata                                         ONLINE       0     0     0
>> 	  mirror-0                                       ONLINE       0     0     0
>> 	    scsi-35000c50059117303                       ONLINE       0     0     0
>> 	    scsi-35000c500626018f7                       ONLINE       0     0     0
>> 	  mirror-1                                       ONLINE       0     0     0
>> 	    scsi-35000c50062602c3b                       ONLINE       0     0     0
>> 	    scsi-35000c500625fb53f                       ONLINE       0     0     0
>> 	  mirror-2                                       ONLINE       0     0     0
>> 	    scsi-35000c500625ff1e3                       ONLINE       0     0     0
>> 	    scsi-35000c500591ae2a7                       ONLINE       0     0     0
>> 	logs
>> 	  mirror-3                                       ONLINE       0     0     0
>> 	    scsi-SATA_INTEL_SSDSC2BA1BTTV335106X9100FGN  ONLINE       0     0     0
>> 	    scsi-SATA_INTEL_SSDSC2BA1BTTV335403VE100FGN  ONLINE       0     0     0
>> 	cache
>> 	  scsi-35e83a9710005e4fb-part1                   ONLINE       0     0     0
>> 	spares
>> 	  scsi-35000c500591a75bf                         AVAIL
>> 
>> zfs get all userdata:
>> NAME      PROPERTY              VALUE                  SOURCE
>> userdata  type                  filesystem             -
>> userdata  creation              Tue Nov  4  9:19 2014  -
>> userdata  used                  2.03T                  -
>> userdata  available             153G                   -
>> userdata  referenced            30K                    -
>> userdata  compressratio         1.10x                  -
>> userdata  mounted               no                     -
>> userdata  quota                 2.18T                  local
>> userdata  reservation           none                   default
>> userdata  recordsize            128K                   default
>> userdata  mountpoint            /userdata              default
>> userdata  sharenfs              off                    default
>> userdata  checksum              on                     default
>> userdata  compression           off                    default
>> userdata  atime                 on                     default
>> userdata  devices               on                     default
>> userdata  exec                  on                     default
>> userdata  setuid                on                     default
>> userdata  readonly              off                    default
>> userdata  zoned                 off                    default
>> userdata  snapdir               hidden                 default
>> userdata  aclinherit            restricted             default
>> userdata  canmount              off                    local
>> userdata  xattr                 on                     default
>> userdata  copies                1                      default
>> userdata  version               5                      -
>> userdata  utf8only              off                    -
>> userdata  normalization         none                   -
>> userdata  casesensitivity       sensitive              -
>> userdata  vscan                 off                    default
>> userdata  nbmand                off                    default
>> userdata  sharesmb              off                    default
>> userdata  refquota              none                   default
>> userdata  refreservation        none                   default
>> userdata  primarycache          all                    default
>> userdata  secondarycache        all                    default
>> userdata  usedbysnapshots       0                      -
>> userdata  usedbydataset         30K                    -
>> userdata  usedbychildren        2.03T                  -
>> userdata  usedbyrefreservation  0                      -
>> userdata  logbias               latency                default
>> userdata  dedup                 off                    default
>> userdata  mlslabel              none                   default
>> userdata  sync                  standard               default
>> userdata  refcompressratio      1.00x                  -
>> userdata  written               0                      -
>> userdata  logicalused           2.23T                  -
>> userdata  logicalreferenced     15K                    -
>> userdata  filesystem_limit      none                   default
>> userdata  snapshot_limit        none                   default
>> userdata  filesystem_count      none                   default
>> userdata  snapshot_count        none                   default
>> userdata  snapdev               hidden                 default
>> userdata  acltype               off                    default
>> userdata  context               none                   default
>> userdata  fscontext             none                   default
>> userdata  defcontext            none                   default
>> userdata  rootcontext           none                   default
>> userdata  relatime              on                     local
>> userdata  redundant_metadata    all                    default
>> userdata  overlay               off                    default
>> 
>> 
>> -- 
>> Kind regards,
>> 
>> Erik Jan Hofstede
>> Antagonist B.V.

