[zfs-discuss] ZFS consuming twice ARC's size and crashing system

Niels de Carpentier zfs at decarpentier.com
Mon Aug 19 15:38:36 EDT 2013

> Would it be feasible to have some sort of 'preallocation' zfs module
> parameter, so that the effects of fragmentation are limited? I'm
> thinking something along the lines of it doing a huge malloc up-front,
> maybe some large percentage of arc_max, and never letting go of that.
> It may be wasteful, but it seems like most people running zfs are
> already assuming it will use that memory and building their machines
> as such. Perhaps it would be easier to just fix the problem outright,
> but I'm wondering if that might be a valid band-aid.

No, unfortunately that won't help. The problem is that you need to store
objects of different sizes and different lifetimes, and are not able to
move them once allocated. Over time this will always lead to
fragmentation. (Say you fill the cache with 512B blocks, randomly free
half of them, and then want to fill the freed space with 128kB blocks.)
There is also significant overhead, since alignment requirements waste a
lot of space on top of that.

One thing that might work is to count the actually allocated memory
toward arc_size instead of the size of the cached objects. This won't
fix the core issue, but it would prevent the ARC from using more memory
than arc_max. I suspect it's not easy to do without major code changes,
though.

