[zfs-discuss] ZFS Does Not Free Up Space
kandrey89 at gmail.com
Sat May 28 03:18:49 EDT 2016
Thank you Hajo,
Indeed, I missed this fact when I upgraded from 0.6.5.2 to 0.6.5.6.
I didn't know how to change module parameters, but this gave me a clue:
modprobe zfs spa_slop_shift=6
Since the module was already loaded, I instead wrote "6" into
/sys/module/zfs/parameters/spa_slop_shift.
Now I have 13GB of free space. :D
In the process of expanding storage though.
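For anyone else hitting this: a write to sysfs only lasts until the module is
reloaded. A sketch of both the runtime change and a persistent one via a
modprobe.d options file (the file name /etc/modprobe.d/zfs.conf is my choice,
any .conf name there works; run as root):

```shell
# Runtime change, takes effect immediately (module already loaded):
echo 6 > /sys/module/zfs/parameters/spa_slop_shift

# Verify the new value:
cat /sys/module/zfs/parameters/spa_slop_shift

# Persistent across reboots/module reloads (file name is arbitrary):
cat > /etc/modprobe.d/zfs.conf <<'EOF'
options zfs spa_slop_shift=6
EOF
```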
When pools are dozens of TB large, can someone explain why you'd want
1%, 2%, or 3% of the pool to be reserved? Why not reserve a fixed amount
like 20GB? It's not as if you're going to be copying-on-write more than
20GB at once... or maybe I'm wrong; what sets the limit? Would
copying-on-write a very large file, say 100GB, require at least 100GB of
reserved space? Does it copy the whole file first, or does it work in
blocks?
On Fri, May 27, 2016 at 11:53 PM, Hajo Möller <dasjoe at gmail.com> wrote:
> ZFS module parameter spa_slop_shift's default value changed from 6 to 5 a
> few releases ago, change it back to 6 to return to the previous behavior.
> See man zfs-module-parameters:
> *spa_slop_shift* (int)
> Normally, we don't allow the last 3.2% (1/(2^spa_slop_shift)) of space in
> the pool to be consumed. This ensures that we don't run the pool completely
> out of space, due to unaccounted changes (e.g. to the MOS). It also limits
> the worst-case time to allocate space. If we have less than this amount of
> free space, most ZPL operations (e.g. write, create) will return ENOSPC.
> Default value: 5
> Hajo Möller