l2arc - why not attempt to make it persistent?

Uncle Stoatwarbler stoatwblr at gmail.com
Sun May 22 11:05:51 EDT 2011

Given that:

1: If there is a cache device problem, the cache device gets dumped.
2: If a cached object is inconsistent with what's on disk, the cached 
object gets dumped.
3: If there is any other kind of inconsistency, the disk wins.

Why not attempt to preserve the L2ARC cache across reboots/mounts and, 
as a result, (hopefully) speed up access from the outset rather than 
having to wait for the cache to build up again? (If there are any 
problems the cache should be dumped, but if it's intact, why dump it?)

I've just been benchmarking my (rather slow, cheap) SSDs against disks, 
and even in the worst-case scenarios they are 5-10 times the speed of 
the spinning media for everything except sequential writes (where they 
are only twice as fast).

My concern stems partly from the limited write cycles of SSDs 
(especially cheap ones) and from having just watched 40GB of cache (on 
a 64GB cache device) being dumped, but also from the realization that 
my system is significantly slower after a reboot, until the cache is 
repopulated (ZFS on root).


More information about the zfs-discuss mailing list