[zfs-discuss] Re: Not reading from L2ARC.

devsk devsku at gmail.com
Sun Jul 3 13:06:26 EDT 2011


What's the recommendation for recordsize? I know the record size is
variable depending on file size. But doesn't a larger recordsize
potentially waste up to 64K of space at the end of the file?

What would be the disadvantages of limiting recordsize to a smaller
value like 8K?
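As an aside (not part of the original exchange), the tail-waste question can be sketched numerically. This is a minimal sketch, assuming the last record of a large file is allocated at the full recordsize, and ignoring compression and the single-record case for files smaller than one record:

```python
def tail_waste(file_size, recordsize):
    """Worst-case bytes lost in the final record of a large file,
    assuming the tail block is allocated at the full recordsize
    (ignores compression and files smaller than one record)."""
    remainder = file_size % recordsize
    return 0 if remainder == 0 else recordsize - remainder

# A file one byte past a 16GB boundary is the worst case:
size = 16 * 2**30 + 1
for rs in (8 * 2**10, 128 * 2**10):
    print(f"recordsize={rs // 2**10}K: up to {tail_waste(size, rs)} bytes wasted")
```

Under these assumptions the average loss is about half the recordsize per file, which is where the ~64K figure for 128K records comes from; with 8K records the worst case drops to under 8K.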

-devsk


On Jul 3, 12:09 am, "Fajar A. Nugraha" <l... at fajar.net> wrote:
> On Sun, Jul 3, 2011 at 1:06 PM, slowrobot <slowro... at gmail.com> wrote:
> > I've done some more reading and learned that sequential reads won't
> > hit L2ARC, so that explains why the cat/dd/cp tests gave the results I
> > saw.  I'm still not sure why iozone's random read test wasn't hitting
> > the cache, but that might have more to do with iozone than with ZFS or
> > zfsonlinux.  I wrote a simple test program that loops and reads 4K
> > blocks from random offsets within my 16GB test file.  This test is
> > definitely hitting the cache, but turned up another mystery.  My test
> > program gets somewhere around 4MB/s reads, and zpool iostat shows the
> > physical disks mostly idle, with the l2arc doing 1000 IOPS of reads,
> > which matches the 4MB/s I get from the test program (since I'm reading
> > 4K at a time).  The weird thing is that, for throughput, zpool iostat
> > shows ~130MB/s reads on the cache device, which doesn't match up with
> > anything.
>
> Do you use the default recordsize of 128K? If so, 128K * 1000/s = 128MBps
>
> --
> Fajar
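Fajar's arithmetic can be checked directly. With the default 128K recordsize, every cached 4K random read pulls a whole 128K record off the L2ARC device, so device-level throughput is recordsize times IOPS while the application only sees 4K times IOPS. A quick sketch using the figures reported in the thread:

```python
iops = 1000                # L2ARC reads/s reported by zpool iostat
read_size = 4 * 2**10      # bytes the test program requests per read
recordsize = 128 * 2**10   # default ZFS recordsize: one full record per read

app_bw = iops * read_size   # what the test program measures
dev_bw = iops * recordsize  # what zpool iostat shows for the cache device

print(f"application: {app_bw / 1e6:.0f} MB/s")   # -> 4 MB/s
print(f"cache device: {dev_bw / 1e6:.0f} MB/s")  # -> 131 MB/s
```

131MB/s is within rounding of the ~130MB/s observed on the cache device, so the mystery throughput is just full-record reads amplifying the 4K requests.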


