[zfs-discuss] Very High IO with some loads compared to other file systems

Vladimir vladimir.elisseev at gmail.com
Thu May 2 04:42:07 EDT 2013


David,

Thanks for the tip! Unfortunately I didn't check l2_hdr_size when the ARC was 
"full", but now, with the ARC at 1.8G out of 2G, l2_hdr_size is 160MB. My setup 
has a 2G ARC and a 13G L2ARC, so I doubt the ARC size is the limiting factor 
for my L2ARC. In my case the problem appears when the ARC reaches ~90% of its 
cap; if it stays below that, everything runs as expected. Below are some stats 
taken during a kernel compilation (a sketch for watching these counters 
follows the tables):
ARC < 90%
read  hits  miss  hit%  l2read  l2hits  l2miss  l2hit%  arcsz  l2size  
 159   157     2    98       2       2       0     100   1.4G    5.7G  
 432   411    21    95      21      18       3      85   1.4G    5.7G  
 358   354     4    98       4       4       0     100   1.4G    5.7G  
 312   306     6    98       6       6       0     100   1.4G    5.7G  

ARC >90%
read  hits  miss  hit%  l2read  l2hits  l2miss  l2hit%  arcsz  l2size  
9.1K  5.6K  3.5K    61    3.5K    3.4K      21      99   1.8G    5.8G  
 13K  9.5K  4.1K    70    4.1K    4.0K      54      98   1.8G    5.8G  
9.2K  5.6K  3.5K    61    3.5K    3.3K     230      93   1.8G    5.8G  
8.9K  5.5K  3.4K    61    3.4K    3.4K      18      99   1.8G    5.8G  
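
For reference, these counters come straight from /proc/spl/kstat/zfs/arcstats 
on ZoL; a rough sketch of a loop to watch them (field names may differ 
between versions):

    # print ARC size, ARC cap, L2ARC header overhead and L2ARC size each second
    while sleep 1; do
        awk '/^(size|c_max|l2_hdr_size|l2_size) / { printf "%s=%.0fMB ", $1, $3 / 1048576 }
             END { print "" }' /proc/spl/kstat/zfs/arcstats
    done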
 
I'm really interested in this case.

Regards,
Vlad.   


On Thursday, May 2, 2013 9:23:59 AM UTC+2, David Douard wrote:
>
> It might be related to this:
> https://github.com/zfsonlinux/zfs/issues/1420#issuecomment-16831663
>
> If your ARC is too small for your available L2ARC size, you put the ARC 
> under memory pressure when it fills up, because every buffer cached in the 
> L2ARC keeps a header resident in the ARC. It seems this is a quite common 
> mistake that definitely should be documented in bold/red somewhere in the 
> FAQ.
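>
> A back-of-the-envelope sketch of that header overhead (the ~180 bytes per 
> buffer is an assumption; the exact header size varies between ZoL versions):
>
>     # a 13G L2ARC filled with 8K buffers, each pinning a ~180-byte
>     # header in the ARC:
>     echo $(( 13 * 1024 * 1024 * 1024 / 8192 * 180 / 1024 / 1024 ))   # ~292
>     # i.e. roughly 292MB of a 2GB ARC gone to L2ARC headers alone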
>
> David
>
> On Thu, May 2, 2013 at 9:10 AM, Vladimir <vladimir... at gmail.com> wrote:
>
>> My system has 16G of RAM, but I limited the ARC to 2GB. In my case I can 
>> observe this strange behaviour only when the ARC is "full". On your system 
>> the ARC is using only 3.1G out of 8G; I think that makes the difference!
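>>
>> For reference, the 2GB cap is the zfs_arc_max module parameter; a minimal 
>> sketch of setting it on ZoL (assuming the usual modprobe.d layout; the 
>> value is in bytes):
>>
>>     # /etc/modprobe.d/zfs.conf -- applied when the zfs module loads
>>     options zfs zfs_arc_max=2147483648
>>
>> or at runtime (takes effect immediately, not persistent across reboots):
>>
>>     echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max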
>>
>> Regards,
>> Vlad.
>>
>>
>> On Thursday, May 2, 2013 7:54:48 AM UTC+2, Jim Raney wrote:
>>>
>>> For reference, here's the arcstat.pl output on my lab system during a 
>>> 3.7.10 kernel compile:
>>>
>>> read  hits  miss  hit%  l2read  l2hits  l2miss  l2hit%  arcsz  l2size  
>>>   41    41     0   100       0       0       0       0   3.1G    7.0G  
>>>   93    92     1    98       1       1       0     100   3.1G    7.0G  
>>>  116   115     1    99       1       0       1       0   3.1G    7.0G  
>>>  127   127     0   100       0       0       0       0   3.1G    7.0G  
>>>  122   122     0   100       0       0       0       0   3.1G    7.0G  
>>>   82    82     0   100       0       0       0       0   3.1G    7.0G  
>>>   73    73     0   100       0       0       0       0   3.1G    7.0G  
>>>   43    43     0   100       0       0       0       0   3.1G    7.0G  
>>>  172   172     0   100       0       0       0       0   3.1G    7.0G  
>>>  216   213     3    98       3       3       0     100   3.1G    7.0G  
>>>   76    76     0   100       0       0       0       0   3.1G    7.0G  
>>>   75    75     0   100       0       0       0       0   3.1G    7.0G  
>>>   64    64     0   100       0       0       0       0   3.1G    7.0G  
>>>  160   160     0   100       0       0       0       0   3.1G    7.0G  
>>>  172   172     0   100       0       0       0       0   3.1G    7.0G  
>>>  129   129     0   100       0       0       0       0   3.1G    7.0G  
>>>   38    38     0   100       0       0       0       0   3.1G    7.0G  
>>>   41    41     0   100       0       0       0       0   3.1G    7.0G  
>>>  127   127     0   100       0       0       0       0   3.1G    7.0G  
>>>   54    54     0   100       0       0       0       0   3.1G    7.0G  
>>>
>>> iostat on the l2arc devices:
>>>
>>> Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
>>> sda               0.00     0.00    0.00    1.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
>>> sdb               0.00     0.00    0.00    1.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
>>>
>>> zpool iostat:
>>>
>>> # zpool iostat pool0 1
>>>                capacity     operations    bandwidth
>>> pool        alloc   free   read  write   read  write
>>> ----------  -----  -----  -----  -----  -----  -----
>>> pool0       24.0G   384G     29     67   305K   446K
>>> pool0       24.0G   384G      5    188  22.5K  1.54M
>>> pool0       24.0G   384G      6      1  91.9K  20.0K
>>> pool0       24.0G   384G      5      1  48.0K  36.0K
>>> pool0       24.0G   384G      3      1  58.0K  48.0K
>>> pool0       24.0G   384G      4    577  45.0K  3.08M
>>> pool0       24.0G   384G      9      1  48.5K  40.0K
>>> pool0       24.0G   384G      6      0  31.0K  7.99K
>>> pool0       24.0G   384G      4      0  30.5K  7.99K
>>> pool0       24.0G   384G     11      4  35.5K   136K
>>> pool0       24.0G   384G      5    477  9.99K  2.31M
>>> pool0       24.0G   384G      4      2  25.5K  58.0K
>>>
>>> As you can see, the L2ARC is barely being touched. With a 2GB ARC I would 
>>> expect the same thing to happen on your system if no other I/O was going 
>>> on during a compile, especially if you had compiled more than once back 
>>> to back. Is your 2GB of ARC all of the RAM in the system, or do you have 
>>> more?
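>>>
>>> If you want to see the cache devices broken out inside zpool itself, the 
>>> -v flag does that:
>>>
>>>     # per-vdev stats, with a separate "cache" section for L2ARC devices
>>>     zpool iostat -v pool0 1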
>>>
>>> --
>>> Jim Raney