[zfs-discuss] ZFS consuming twice ARC's size and crashing system

Sander Smeenk ssmeenk at freshdot.net
Thu Aug 15 03:41:28 EDT 2013


Quoting Trey Dockendorf (treydock at gmail.com):

> The metadata system has hung (root's ext4 reporting
> hung_task_timeout_secs errors) twice already in the past 3 days.
> While transferring ~58TB I began to notice the 2 storage servers and
> single metadata server dropping below 5% free memory.

I'm seeing the exact same behaviour on our ZoL storage server.
This server has 192GB(!) of memory and a mere 26T pool (currently a
single mirror vdev across two iSCSI LUNs).

We run ~25 concurrent rsyncs and mostly write data to the pool. From
time to time, memory consumption spikes way past any limits we've set
and the server grinds to a halt. It's hard to put a finger on what
actually triggers this.

We've tried lowering the ARC size cap (zfs_arc_max), but that seemed
fruitless.
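For the record, this is how we capped it. You can set it at runtime via
the module parameter, or persistently in modprobe.d (the 8G value here
is just an example, not a recommendation):

  # echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max

  # cat /etc/modprobe.d/zfs.conf
  options zfs zfs_arc_max=8589934592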

What does *seem* to help, though it may drastically impact performance
if you read a lot from your pool, is asking Linux to drop its caches:

  # sync
  # echo 3 > /proc/sys/vm/drop_caches

When I did the above on the server while it was getting memory-bound, I
got about 180GB of memory back.
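If it keeps recurring, something like this crude watchdog could paper
over the problem until the real bug is fixed. Untested sketch; the
interval and the 5% threshold are arbitrary, tune to taste:

  #!/bin/sh
  # Drop caches whenever free memory dips below 5% of total.
  while sleep 60; do
      free_kb=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
      total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
      if [ $((free_kb * 100 / total_kb)) -lt 5 ]; then
          sync
          echo 3 > /proc/sys/vm/drop_caches
      fi
  done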


> The systems doing storage (and shown in output below) have 64GB RAM, no
> dedup, no compression.  Zpool configuration [3] is two 10-disk RAIDZ2's
> with a mirrored ZIL and striped cache.  Both ZIL and cache are SSD
> (though they share the same SSDs, just partitioned separately).

Our pool does compression (lz4) and has no ZIL/cache attached (yet).
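When we do get around to adding them, I'd expect it to look roughly
like this ('tank' and the device paths are placeholders, not our actual
setup):

  # zpool add tank log mirror /dev/disk/by-id/ssd0-part1 /dev/disk/by-id/ssd1-part1
  # zpool add tank cache /dev/disk/by-id/ssd0-part2 /dev/disk/by-id/ssd1-part2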


Apparently the above is a known 'bug'.  People on this list have stated
that 'ARC [memory] fragmentation can cause the ARC to grow to 2-3x the
allowed size'.
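You can watch how far the ARC actually overshoots its cap by comparing
'size' against 'c_max' in the kstats:

  # awk '/^size|^c_max/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats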


-Sndr.
-- 
| Two blondes walk into a building.
| You'd think at least one of them would have seen it.
| 4096R/20CC6CD2 - 6D40 1A20 B9AA 87D4 84C7  FBD6 F3A9 9442 20CC 6CD2


