[zfs-discuss] Relation between L2ARC and RAM size
richard.elling at richardelling.com
Tue Feb 6 20:41:01 EST 2018
> On Feb 6, 2018, at 5:12 AM, Amir Christopher Najmi via zfs-discuss <zfs-discuss at list.zfsonlinux.org> wrote:
> Every record in the l2arc consumes 70 Bytes of space in the ARC.
> Generally, the l2arc attempts to cache only random IO, so as a first cut on size you might try assuming all of the records are 4K.
This is not quite true: a block carries no information about its relationship to any other block,
nor is there any size check for blocks destined for the L2ARC. But this is OK; by the time a block
is old enough to make it to the L2ARC, it probably needs caching.
> 1 TB drive = 10^12 Bytes (decimal power!)
> (10^12 Bytes / 4096 Bytes per record) * (70 Bytes per record) * (1 kB / 1024 Bytes) * (1 MB / 1024 kB) = approx. 16,300 MB (about 16 GB) of memory
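A quick sketch of that back-of-envelope arithmetic, under the same assumptions stated above (a 1 TB decimal device, all records 4 KiB, 70 bytes of ARC header per L2ARC record):

```python
# Assumptions from the thread: 70-byte header per L2ARC record,
# all-random 4K records, 1 TB (decimal) L2ARC device.
L2ARC_BYTES = 10**12   # 1 TB SSD, decimal power
RECORD_SIZE = 4096     # assumed average record size in bytes
HEADER_SIZE = 70       # bytes of ARC memory consumed per L2ARC record

records = L2ARC_BYTES // RECORD_SIZE
overhead_bytes = records * HEADER_SIZE
print(f"{records:,} records -> {overhead_bytes / 2**30:.1f} GiB of ARC headers")
# -> 244,140,625 records -> 15.9 GiB of ARC headers
```

With a larger recordsize (say 128K sequential data) the record count, and hence the header overhead, drops by a factor of 32, which is why the worst-case 4K assumption is a useful first cut.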
Also, the size of the L2ARC header is stored in the "l2_hdr_size" entry of /proc/spl/kstat/zfs/arcstats,
so you can observe it on a live system. Each L2ARC header is also attached to an ARC buffer header,
but that size is not separated out from the L1ARC header in the kstat "hdr_size". Regardless, both can
be observed and tracked over time.
Finally, with compressed ARC, the data in L2ARC can be compressed. This makes predictions more
difficult, but you can rely on the measurements.
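For example, you can pull those two kstats out of arcstats with awk. The values below are a hypothetical sample written to a temp file so the snippet is self-contained; on a live system you would point awk at /proc/spl/kstat/zfs/arcstats directly:

```shell
# Hypothetical arcstats excerpt (field 3 is the value in bytes).
# On a real system: awk '...' /proc/spl/kstat/zfs/arcstats
cat > /tmp/arcstats.sample <<'EOF'
hdr_size                        4    203456512
l2_hdr_size                     4    1709834240
EOF

# Report the memory held by L2ARC headers, in MiB.
awk '$1 == "l2_hdr_size" { printf "L2ARC headers: %.1f MiB\n", $3 / 1048576 }' /tmp/arcstats.sample
# -> L2ARC headers: 1630.6 MiB
```

Sampling l2_hdr_size periodically as the L2ARC fills gives you the real per-system overhead, compression and actual record sizes included, rather than a worst-case estimate.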
> Without accounting for any of the details of things like kernel page sizing etc.
>> On Feb 6, 2018, at 6:52 AM, Ali Hamid via zfs-discuss <zfs-discuss at list.zfsonlinux.org> wrote:
>> Hi friends,
>> does anyone know how much RAM is required for an L2ARC (1 TB SSD)?
>> is there a formula to calculate it?
>> zfs-discuss mailing list
>> zfs-discuss at list.zfsonlinux.org