[zfs-discuss] Help with estimated memory need

swami at petaramesh.org
Sun Mar 11 12:40:14 EDT 2012


Hi Niels,

On Sun, 11 Mar 2012 14:17:49 +0100, Niels de Carpentier wrote:
> If you don't have enough memory for your dedup tables, ANY write to the
> zvol (even if just atime) is going to be incredibly slow.

That's exactly my feeling, and I'm trying to figure out how much RAM I 
would need to avoid this...

> Solaris requires 2.6GB of memory per TB of data in the best case (only
> 128KB blocks). You probably want to double that because of
> inefficiencies in the ZFS on Linux implementation.

So this might be the answer I was looking for?
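
Just so I'm sure I understand where that 2.6GB-per-TB figure comes from,
here is my back-of-the-envelope sketch (assuming roughly 320 bytes of RAM
per DDT entry and one entry per unique block - those numbers are my
assumption, please correct me if they're off):

    # rough DDT core-size estimate - my assumptions, not authoritative
    data = 1024**4               # 1 TiB of unique data
    blocksize = 128 * 1024       # best case: only 128 KiB blocks
    entry_bytes = 320            # assumed RAM cost per dedup table entry
    entries = data // blocksize  # one DDT entry per unique block
    print(entries * entry_bytes / 1024**3)  # ~2.5 GiB, close to 2.6GB/TB

Smaller average block sizes would obviously make this much worse.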

> Normally metadata (this includes the dedup tables) is limited to 25% of
> the ARC! So you'll need a lot of memory (or a fast SSD), and you'll need
> to tune the ARC so most of it can be used for metadata.

Should I understand that I would need 2.6 GB * 2 (for the ZFS on Linux
overhead) * 4 (since metadata is limited to 25% of the ARC) = 20.8 GB of
RAM just for the ZFS ARC to properly handle a 1 TB zpool with dedup?
That would be a HUGE RAM requirement!
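
If I understand the tuning part right, on ZFS on Linux that would mean
raising the ARC size and its metadata limit via module parameters,
something like the lines below in /etc/modprobe.d/zfs.conf (values purely
illustrative, and I'm not certain these are the right knobs on current
code):

    # illustrative only - my guess at the relevant ZoL module parameters
    options zfs zfs_arc_max=21474836480         # ~20 GB ARC
    options zfs zfs_arc_meta_limit=16106127360  # let ~15 GB hold metadata

Does that look sane, or is there a better way to push the 25% limit up?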

> dedup under Linux is very much untested.

I've actually been using it daily for more than 6 months. Dreadfully 
slow, and getting slower every single day, but it works and has proven 
perfectly stable...
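
(Incidentally, rather than relying on the per-TB rule of thumb, I suppose
I could measure how big my dedup table actually is. If I read the man
page correctly, something like

    zdb -D mypool

should report the number of DDT entries and their size on disk and in
core - assuming zdb behaves well enough on the Linux port - and
multiplying the entry count by the per-entry RAM cost would give the real
figure for my data.)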

Without deduplication, what would your RAM advice be for correct 
performance? (Though without dedup I would need to at least double my 
online storage...)

TIA, kind regards.


