[zfs-discuss] Dedup tables size in RAM ?

swami at petaramesh.org
Mon Apr 30 09:23:20 EDT 2012


Thank you very much Aneurin,

With this I now know it's worth purchasing another 4 GB of RAM in my 
case :-)

Now I'd love to know how I can tell whether or not I have an L2ARC, how 
big it is, and how best to fine-tune things.

I'm not sure whether there is any relevant documentation for native ZFS 
on Linux covering this. Where should I set the parameters, and on what 
basis?
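
What I've pieced together so far (corrections welcome): something along 
these lines should show whether a cache device is attached and where the 
ZFS-on-Linux tunables live; "mypool" is just my pool's name.

   # Is there a "cache" device attached to the pool, and how big is it?
   zpool status mypool
   zpool iostat -v mypool

   # ARC / L2ARC statistics:
   cat /proc/spl/kstat/zfs/arcstats

   # Module parameters (e.g. an ARC size cap) go in /etc/modprobe.d/zfs.conf,
   # for instance:
   #   options zfs zfs_arc_max=4294967296
   # and current values can be read back from /sys/module/zfs/parameters/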

Kind regards.


On Mon, 30 Apr 2012 11:31:35 +0100, Aneurin Price wrote:
> On 29 April 2012 10:04, Swâmi Petaramesh <swami at petaramesh.org> 
> wrote:
>> Hi guys,
>>
>> I'm back trying to determine how much RAM I would need for keeping 
>> my
>> dedup tables in RAM.
>>
>> "zdb mypool" gives the following output, but I don't know how to
>> interpret this in simple terms of "HOw much RAM do I need and which
>> parameter should I tweak ?"
>>
>
>> DDT-sha256-zap-duplicate: 2095107 entries, size 324 on disk, 176 in 
>> core
>> DDT-sha256-zap-unique: 3344179 entries, size 305 on disk, 167 in 
>> core
>
> If I understand correctly:
>
> 2095107*176 + 3344179*167 ~= 884MB
> 2095107*324 + 3344179*305 ~= 1620MB
>
> So if you have an L2ARC (highly recommended for dedup) then you need
> about 1.6GB to store the DDT, plus nearly 900MB of RAM dedicated to
> storing the references to DDT entries. Of course, this is likely to
> grow, and it only covers deduplication - you want more for caching
> other data and metadata.
>
> If you don't have an L2ARC, I'm not sure if you need 884+1620=2504MB
> or just the 1620MB in RAM - probably the higher number.
>
> Basically, I'd look at planning for at least 2GB of memory usage over
> what you would be using on this pool without dedup.
>
> Note also that dedup can still be very slow even if you have enough
> RAM for the dedup table. I imagine it's probably fine if you have a
> decent desktop or server CPU, but on my Atom it's pretty 
> excruciating.
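
For reference, Aneurin's arithmetic can be reproduced straight from the 
zdb output; a rough sketch, assuming the DDT summary lines keep the 
format shown above:

   zdb mypool | awk '/^DDT.*entries,/ {
       ram  += $2 * $8;   # entries * "in core" bytes
       disk += $2 * $5;   # entries * "on disk" bytes
   } END { printf "DDT: %.0f MB in core, %.0f MB on disk\n", ram/2^20, disk/2^20 }'

With the figures above this prints roughly 884 MB in core and 1620 MB on 
disk, matching the estimate.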


