[zfs-discuss] Very High IO with some loads compared to other file systems

David Douard david.douard at gmail.com
Thu May 2 03:23:59 EDT 2013


It might be related to this:
https://github.com/zfsonlinux/zfs/issues/1420#issuecomment-16831663

If your ARC is sized too small for the amount of L2ARC you have attached, the
L2ARC headers (which are kept in the ARC) put the ARC under memory pressure
once it fills up. It seems this is quite a common mistake that definitely
should be documented in bold/red somewhere in the FAQ.
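
A quick way to check whether you are in that situation (a rough sketch; the
parameter and kstat names below assume the zfsonlinux /sys and /proc
interfaces) is to compare the ARC ceiling with how much of it the L2ARC
headers are consuming:

# cat /sys/module/zfs/parameters/zfs_arc_max
# grep -E '^(size|c_max|l2_hdr_size|l2_size) ' /proc/spl/kstat/zfs/arcstats

zfs_arc_max reads 0 when the built-in default is in effect. If l2_hdr_size is
a large fraction of c_max, the L2ARC bookkeeping alone is squeezing cached
data out of the ARC.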

David




On Thu, May 2, 2013 at 9:10 AM, Vladimir <vladimir.elisseev at gmail.com> wrote:

> My system has 16 GB of RAM, but I limited the ARC to 2 GB. In my case I can
> observe this strange behaviour only when the ARC is "full". On your system the
> ARC is using only 3.1 GB out of 8 GB. I think this makes the difference!
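>
> For reference, a cap like that is usually set with a line like the one below
> in /etc/modprobe.d/zfs.conf (2147483648 bytes = 2 GiB) plus a reboot or a
> reload of the zfs module; the file name is just the conventional one:
>
> options zfs zfs_arc_max=2147483648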
>
> Regards,
> Vlad.
>
>
> On Thursday, May 2, 2013 7:54:48 AM UTC+2, Jim Raney wrote:
>>
>>
>> For reference, here's the arcstat.pl output on my lab system during a 3.7.10
>> kernel compile:
>>
>> read  hits  miss  hit%  l2read  l2hits  l2miss  l2hit%  arcsz  l2size
>>   41    41     0   100       0       0       0       0   3.1G    7.0G
>>   93    92     1    98       1       1       0     100   3.1G    7.0G
>>  116   115     1    99       1       0       1       0   3.1G    7.0G
>>  127   127     0   100       0       0       0       0   3.1G    7.0G
>>  122   122     0   100       0       0       0       0   3.1G    7.0G
>>   82    82     0   100       0       0       0       0   3.1G    7.0G
>>   73    73     0   100       0       0       0       0   3.1G    7.0G
>>   43    43     0   100       0       0       0       0   3.1G    7.0G
>>  172   172     0   100       0       0       0       0   3.1G    7.0G
>>  216   213     3    98       3       3       0     100   3.1G    7.0G
>>   76    76     0   100       0       0       0       0   3.1G    7.0G
>>   75    75     0   100       0       0       0       0   3.1G    7.0G
>>   64    64     0   100       0       0       0       0   3.1G    7.0G
>>  160   160     0   100       0       0       0       0   3.1G    7.0G
>>  172   172     0   100       0       0       0       0   3.1G    7.0G
>>  129   129     0   100       0       0       0       0   3.1G    7.0G
>>   38    38     0   100       0       0       0       0   3.1G    7.0G
>>   41    41     0   100       0       0       0       0   3.1G    7.0G
>>  127   127     0   100       0       0       0       0   3.1G    7.0G
>>   54    54     0   100       0       0       0       0   3.1G    7.0G
>>
>> iostat on the l2arc devices:
>>
>> Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
>> sda               0.00     0.00    0.00    1.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
>> sdb               0.00     0.00    0.00    1.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
>>
>> zpool iostat:
>>
>> # zpool iostat pool0 1
>>                capacity     operations    bandwidth
>> pool        alloc   free   read  write   read  write
>> ----------  -----  -----  -----  -----  -----  -----
>> pool0       24.0G   384G     29     67   305K   446K
>> pool0       24.0G   384G      5    188  22.5K  1.54M
>> pool0       24.0G   384G      6      1  91.9K  20.0K
>> pool0       24.0G   384G      5      1  48.0K  36.0K
>> pool0       24.0G   384G      3      1  58.0K  48.0K
>> pool0       24.0G   384G      4    577  45.0K  3.08M
>> pool0       24.0G   384G      9      1  48.5K  40.0K
>> pool0       24.0G   384G      6      0  31.0K  7.99K
>> pool0       24.0G   384G      4      0  30.5K  7.99K
>> pool0       24.0G   384G     11      4  35.5K   136K
>> pool0       24.0G   384G      5    477  9.99K  2.31M
>> pool0       24.0G   384G      4      2  25.5K  58.0K
>>
>> As you can see, the L2ARC is barely being touched.  With 2 GB of ARC I would
>> expect the same thing to happen on your system if no other I/O was going on
>> during a compile, especially if you had compiled more than once back to
>> back.  Is the 2 GB you gave to the ARC all of the RAM in the system, or do
>> you have more?
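>>
>> If you want to compare the two directly, one way (assuming the usual Linux
>> interfaces) is to put the total memory next to the configured ARC ceiling:
>>
>> # free -m
>> # grep c_max /proc/spl/kstat/zfs/arcstats
>>
>> c_max there is the ARC size cap in bytes.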
>>
>> --
>> Jim Raney
>>
>