[zfs-discuss] "du -ksh ." taking hours to complete

Gregor Kopka (@zfs-discuss) zfs-discuss at kopka.net
Wed Nov 14 18:41:19 EST 2018


Hi Durval,

On 14.11.2018 at 15:46, Durval Menezes wrote:
> On Wed, Nov 14, 2018 at 11:34 AM Gregor Kopka (@zfs-discuss) via
> zfs-discuss <zfs-discuss at list.zfsonlinux.org
> <mailto:zfs-discuss at list.zfsonlinux.org>> wrote:
>
>     On 14.11.2018 at 14:30, Edward Ned Harvey (zfsonlinux) via
>     zfs-discuss wrote:
>     > Who knows, in your personal situation where you have a particular
>     > usage pattern and never reboot, having a SSD cache and "secondary
>     > cache=metadata" might be just as good for you personally as the
>     > special vdev would. *shrugs*. YMMV.
>     IIRC: L2ARC is a round-robin cache, as soon as enough data has been
>     written for it to wrap around it'll start discarding the entries that
>     had been written in the last pass.
>
>
> I'm pretty sure L2ARC is *not* round-robin but rather, well, ARC (just
> like it says in the name), so it would keep the (meta)data most
> recently/frequently used on the cache according to the ARC algorithm, no?
I read the code differently:
https://github.com/zfsonlinux/zfs/blob/ae3d8491427343904c66d69ba94731553727eb26/module/zfs/arc.c#L7933
speaks of a /write hand/ that progresses through the device, and the
implementation of l2arc_evict
(https://github.com/zfsonlinux/zfs/blob/ae3d8491427343904c66d69ba94731553727eb26/module/zfs/arc.c#L8542,
called from l2arc_feed_thread:
https://github.com/zfsonlinux/zfs/blob/ae3d8491427343904c66d69ba94731553727eb26/module/zfs/arc.c#L9078)
just does a simple sweep through the buffers and unconditionally
destroys (or invalidates, in case an L1 buffer currently exists) all L2
headers that point into the region being evicted. New data is then
written into the evicted space and the write hand is advanced
accordingly (in
https://github.com/zfsonlinux/zfs/blob/ae3d8491427343904c66d69ba94731553727eb26/module/zfs/arc.c#L8959,
with
https://github.com/zfsonlinux/zfs/blob/ae3d8491427343904c66d69ba94731553727eb26/module/zfs/arc.c#L8991
taking care of the wrap-around).
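
To make that concrete, here is a heavily simplified sketch of the
feed/evict cycle as I read it - this is *not* the actual arc.c code,
just my reading of its control flow. The l2ad_* field names are meant
to mirror the l2arc_dev_t members; evict(), write_buffers(),
feed_cycle() and the numbers in main() are made up for illustration:

#include <stdint.h>
#include <stdio.h>

typedef struct l2dev {
	uint64_t l2ad_start;	/* first usable byte on the device */
	uint64_t l2ad_end;	/* last usable byte on the device */
	uint64_t l2ad_hand;	/* current write position */
} l2dev_t;

/*
 * Drop every L2 header pointing into [l2ad_hand, l2ad_hand + distance),
 * unconditionally - no matter how often those buffers were read.
 */
static void
evict(l2dev_t *dev, uint64_t distance)
{
	(void) dev; (void) distance;	/* stub: walks the buffer list, frees headers */
}

/* Copy the next batch of eligible ARC buffers to the device at the hand. */
static uint64_t
write_buffers(l2dev_t *dev, uint64_t max)
{
	(void) dev;
	return (max);	/* stub: pretend we always fill the whole region */
}

static void
feed_cycle(l2dev_t *dev, uint64_t target)
{
	if (dev->l2ad_hand + target > dev->l2ad_end)
		dev->l2ad_hand = dev->l2ad_start;	/* wrap around */
	evict(dev, target);	/* clear the region ahead of the hand */
	dev->l2ad_hand += write_buffers(dev, target);	/* fill it, advance */
}

int
main(void)
{
	l2dev_t dev = { .l2ad_start = 0, .l2ad_end = 100, .l2ad_hand = 0 };

	for (int i = 0; i < 5; i++) {
		feed_cycle(&dev, 40);
		printf("write hand now at %llu\n",
		    (unsigned long long)dev.l2ad_hand);
	}
	return (0);
}

Run it and the hand just cycles 40, 80, 40, 80, ... - everything in
its path gets evicted on each pass.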

Looks like perfect round-robin to me: when evicting, it doesn't seem to
care at all /if or how often/ that data had been read back into ARC
since it was written - it just gets rid of it, no questions asked - and
should the data not be in ARC the moment the write hand comes by, it's
no longer cached /at all/.

Thinking about this for a moment makes me suspect that spending RAM on
real free space management for L2 (so frequently requested content could
be retained even when it isn't in active use the moment evict comes by,
instead of being dropped unconditionally as is currently the case) might
be a good idea. On the other hand, the vdev special allocation classes
might bring enough of a speedup that L2 becomes somewhat irrelevant
anyway.
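
Just to make the idea concrete, the kind of policy I'm thinking of
would look roughly like the following - purely hypothetical, nothing
like it exists in arc.c today; the per-header hit counter and the
retention check are invented for this sketch, and they are exactly
where the extra RAM would go:

#include <stdint.h>
#include <stddef.h>

/* Hypothetical per-L2-header state - the RAM cost mentioned above. */
typedef struct l2hdr {
	uint64_t	hits_since_write;	/* reads served from L2 since last pass */
	struct l2hdr	*next;
} l2hdr_t;

/*
 * Instead of dropping everything ahead of the write hand, keep the
 * buffers that earned enough hits since the last pass; only the cold
 * ones get dropped. The retained extents would then have to be
 * tracked in a free-space map so the writer can skip over them.
 */
static void
evict_with_retention(l2hdr_t *region, uint64_t threshold)
{
	for (l2hdr_t *hdr = region; hdr != NULL; hdr = hdr->next) {
		if (hdr->hits_since_write >= threshold) {
			hdr->hits_since_write = 0;	/* retained; count afresh */
			continue;
		}
		/* cold: drop the header, as l2arc_evict does today */
	}
}

Whether that extra bookkeeping would pay off against simply adding
special vdevs is exactly the open question above.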

Gregor
