[zfs-discuss] ARC_RECLAIM 99% CPU UTILIZATION

Fajar A. Nugraha list at fajar.net
Fri Jan 8 23:51:37 EST 2016


If it's easily repeatable, you might want to try building from git (the
0.6.5 release branch), in case it's a known issue that has already been
fixed there:

https://github.com/zfsonlinux/zfs/commits/zfs-0.6.5-release
https://github.com/zfsonlinux/zfs/archive/zfs-0.6.5-release.zip
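
If you end up going that route, it would be roughly something like the
following (just a sketch, not tested on CentOS 6.7 -- you'll need
kernel-devel for your running kernel plus the usual autotools, and spl has
to be built and installed first; I'm assuming its branch is named
spl-0.6.5-release to match the zfs one, and you may need to point zfs's
./configure at the spl source tree with --with-spl):

    git clone -b spl-0.6.5-release https://github.com/zfsonlinux/spl.git
    git clone -b zfs-0.6.5-release https://github.com/zfsonlinux/zfs.git
    (cd spl && ./autogen.sh && ./configure && make && sudo make install)
    (cd zfs && ./autogen.sh && ./configure && make && sudo make install)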

-- 
Fajar

On Sat, Jan 9, 2016 at 9:04 AM, Frank Xenal via zfs-discuss
<zfs-discuss at list.zfsonlinux.org> wrote:
> Hello,
>
> Has anyone else seen this problem before?
>
> Any suggestion or help would be appreciated.
>
> Thanks
>
> On Fri, Jan 8, 2016 at 12:35 AM, Frank Xenal <xn9733 at gmail.com> wrote:
>>
>> Hello,
>>
>> I recently upgraded from 0.6.3.1 to zfs-0.6.5.3-1.el6.x86_64.
>>
>> Note: The reason for upgrading was that I was getting very high CPU usage
>> from "arc_adapt", and the NFS mounts would become unresponsive, leaving me
>> no choice but to reboot the system.
>>
>> Since the upgrade, I'm seeing "arc_reclaim" at a constant 99% CPU
>> utilization after periods of very high NFS traffic -- mainly during nightly
>> backups. This time, however, the NFS mounts are still responsive. I also
>> see lots of "arc_prune" processes running.
>>
>> System information:
>>
>> OS: CentOS 6.7
>>     Linux 2.6.32-573.12.1.el6.x86_64 #1 SMP Tue Dec 15 21:19:08 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
>> ZFS pools: 4 very active pools, each with 10 disks in a raidz2 configuration
>> RAM: 256G
>> Note: this server is strictly an NFS server
>>
>>
>> I'm looking for some help with determining what is causing this. I'm
>> concerned that the NFS mounts will eventually become unresponsive and I
>> will have to reboot.
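
To narrow down where that CPU time is going, something like this might help
while the backups are running (assuming perf is available on that box;
arc_reclaim is a kernel thread, so you should see the kernel symbols it is
spinning in):

    perf top -p $(pgrep -x arc_reclaim)

Watching /proc/spl/kstat/zfs/arcstats at the same time -- in particular
arc_meta_used vs arc_meta_limit, and whether evict_skip keeps climbing --
should show whether the ARC is stuck trying to evict metadata it can't free.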
>>
>> Thanks!
>>
>>
>> Following are some zfs config and debug information:
>>
>> SLAB information:
>> ==============
>> --------------------- cache --------------------------------------------  ----- slab ------  ---- object -----  --- emergency ---
>> name                                flags      size     alloc  slabsize  objsize  total alloc   max  total alloc   max  dlock alloc   max
>> spl_vn_cache                      0x00020     98304     48048      8192      104     12    11    11    504   462   462      0     0     0
>> spl_vn_file_cache                 0x00020     65536     32928      8192      112      8     7     7    336   294   294      0     0     0
>> spl_zlib_workspace_cache          0x00240         0         0   2144960   268072      0     0     0      0     0     0      0     0     0
>> ddt_cache                         0x00040   2390784   2187328    199232    24856     12    11    11     96    88    88      0     0     0
>> zio_buf_20480                     0x00042  27095040  19660800    200704    20480    135   135   159   1080   960  1272      0     0     0
>> zio_data_buf_20480                0x00042  45760512  25231360    200704    20480    228   228   366   1824  1232  2928      0     0     0
>> zio_buf_24576                     0x00042  26849280  21037056    233472    24576    115   115   142    920   856  1136      0     0     0
>> zio_data_buf_24576                0x00042  47861760  29097984    233472    24576    205   205   296   1640  1184  2368      0     0     0
>> zio_buf_28672                     0x00042  35676160  28901376    266240    28672    134   134   134   1072  1008  1072      0     0     0
>> zio_data_buf_28672                0x00042  43663360  31653888    266240    28672    164   164   166   1312  1104  1328      0     0     0
>>
>>
>>
>>
>>
>> zfs.conf
>> =========
>> options zfs zfs_arc_min=10737418240
>> options zfs zfs_arc_max=68719476736
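
For reference, those two values work out to a 10 GiB minimum and a 64 GiB
maximum ARC (68719476736 / 1024^3 = 64). If you want to experiment with a
smaller ARC without unloading the module, zfs_arc_max should also be
writable at runtime, e.g.:

    echo 34359738368 > /sys/module/zfs/parameters/zfs_arc_max   # 32 GiB

(check the "c_max" line in /proc/spl/kstat/zfs/arcstats afterwards to
confirm it took effect; the zfs.conf options above remain the persistent
setting).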
>>
>>
>> arcstat:
>> =========
>> 6 1 0x01 91 4368 12850802983 183440709950683
>> name                            type data
>> hits                            4    137887521
>> misses                          4    61047442
>> demand_data_hits                4    4931870
>> demand_data_misses              4    11851169
>> demand_metadata_hits            4    117889449
>> demand_metadata_misses          4    39705706
>> prefetch_data_hits              4    1086655
>> prefetch_data_misses            4    1184105
>> prefetch_metadata_hits          4    13979547
>> prefetch_metadata_misses        4    8306462
>> mru_hits                        4    71392731
>> mru_ghost_hits                  4    6220387
>> mfu_hits                        4    51428588
>> mfu_ghost_hits                  4    2599822
>> deleted                         4    57251112
>> mutex_miss                      4    276401
>> evict_skip                      4    11019879240
>> evict_not_enough                4    320113180
>> evict_l2_cached                 4    0
>> evict_l2_eligible               4    444038314496
>> evict_l2_ineligible             4    121226022912
>> evict_l2_skip                   4    0
>> hash_elements                   4    1319696
>> hash_elements_max               4    17601910
>> hash_collisions                 4    15187078
>> hash_chains                     4    25670
>> hash_chain_max                  4    7
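
If I'm reading those counters right, the overall hit rate is about
137887521 / (137887521 + 61047442) = 69%, which isn't terrible -- but
evict_skip (11019879240) is more than fifty times the total number of ARC
accesses, so the eviction code appears to be spending most of its time
scanning buffers it can't actually free, which would fit arc_reclaim
sitting at 99% CPU.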
>>
>>
>
>
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at list.zfsonlinux.org
> http://list.zfsonlinux.org/cgi-bin/mailman/listinfo/zfs-discuss
>

