l2arc - why not attempt to make it persistent?

Brian Behlendorf behlendorf1 at llnl.gov
Mon May 23 15:51:28 EDT 2011


This is a good idea which has come up a few times.  Sun/Oracle has
mentioned previously that it's a feature they would like to see added.
Unfortunately, it is a feature which would primarily benefit desktops and
laptops which reboot often.  It's less of an issue for servers which
reboot rarely and should usually have a hot cache.  I think it's pretty
clear that features which benefit servers have historically gotten the
most development effort.

However, it remains a good idea!  We can certainly add it to our feature
wish list, and perhaps someone will decide to work on it.
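To make the idea concrete for anyone tempted to pick it up, here is a
rough, purely illustrative sketch (not actual ZFS code; every structure
and field name below is made up) of the import-time check such a feature
would need: reuse a persisted L2ARC entry only if everything about it
still matches the pool, and drop it on any mismatch so the on-disk data
always wins, exactly as described in the points quoted below.

    /*
     * Illustrative sketch only -- not actual ZFS code.  Models the rule
     * discussed in this thread: a persisted L2ARC entry is reused only if
     * everything about it still matches the pool; on any mismatch the
     * entry is dropped and the cache warms up from scratch.
     */
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical record written to the cache device before shutdown. */
    typedef struct persisted_l2_entry {
        uint64_t pool_guid;    /* pool the entry belongs to          */
        uint64_t dev_guid;     /* cache device it was written to     */
        uint64_t blk_birth;    /* txg the cached block was born in   */
        uint64_t cksum;        /* checksum of the cached payload     */
    } persisted_l2_entry_t;

    /* Hypothetical view of the pool at import time. */
    typedef struct pool_state {
        uint64_t pool_guid;
        uint64_t dev_guid;
        uint64_t cur_blk_birth; /* birth txg of the block on disk now */
        uint64_t cur_cksum;     /* checksum recomputed from the SSD   */
    } pool_state_t;

    /* Return 1 if the persisted entry may be reused, 0 if it must be dropped. */
    static int
    l2_entry_still_valid(const persisted_l2_entry_t *e, const pool_state_t *ps)
    {
        if (e->pool_guid != ps->pool_guid)      /* wrong pool            */
            return 0;
        if (e->dev_guid != ps->dev_guid)        /* cache device replaced */
            return 0;
        if (e->blk_birth != ps->cur_blk_birth)  /* block rewritten since */
            return 0;
        if (e->cksum != ps->cur_cksum)          /* payload corrupted     */
            return 0;
        return 1;
    }

    int
    main(void)
    {
        persisted_l2_entry_t e = { 0xabc, 0x123, 42, 0xdeadbeef };
        pool_state_t ok    = { 0xabc, 0x123, 42, 0xdeadbeef };
        pool_state_t stale = { 0xabc, 0x123, 57, 0xfeedface }; /* rewritten */

        printf("unchanged block: %s\n",
            l2_entry_still_valid(&e, &ok) ? "reuse cached copy" : "drop");
        printf("rewritten block: %s\n",
            l2_entry_still_valid(&e, &stale) ? "reuse cached copy" : "drop");
        return 0;
    }

The real work, of course, is in writing those records out efficiently
without burning through the SSD's write cycles, which is exactly the
trade-off Alan raises below.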

-- 
Thanks,
Brian 

On Sun, 2011-05-22 at 08:05 -0700, Uncle Stoatwarbler wrote:
> Given that:
> 
> 1: If there is a cache device problem, the cache device gets dumped.
> 2: If a cached object is inconsistent with what's on disk, then the 
> cached objects get dumped.
> 3: If there are any other kinds of inconsistencies, then the disk wins.
> 
> Why not attempt to preserve the l2arc cache across reboots/mounts and, 
> in doing so, (hopefully) speed up access from the outset rather than 
> having to wait for the cache to build up again? (If there are any 
> problems, the cache should be dumped, but if it's intact, why dump it?)
> 
> I've just been benchmarking my (rather slow, cheap) SSDs against disks, 
> and even in the worst-case scenarios they are 5-10 times the speed of 
> the spinning media for everything except sequential writes (where they 
> are only twice as fast).
> 
> My concern stems partly from the limited write cycles of SSDs 
> (especially cheap ones), having just seen 40GB of cache (on a 64GB 
> cache device) being dumped, but also from the realization that my 
> system is significantly slower after a reboot, until the cache is 
> repopulated (ZFS on root).
> 
> Alan
> 


