[zfs-discuss] ZFS zvols for KVM virtual machines storage

George Melikov mail at gmelikov.ru
Thu Dec 28 04:28:18 EST 2017

There will always be a balancing act: a small recordsize means more metadata and a worse compression ratio, but reads and writes match the block size exactly; a larger recordsize means the reverse.

In my humble opinion it's better to use a larger recordsize/volblocksize (8-16k and larger) and accept an occasional small read/modify/write penalty, because more often than not you will read more than a single 4k block at a time.

Compression also softens that penalty in this case, and the compression ratio will be better.

But the real workload will always have the final word.
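One way to let the workload speak is to compare settings side by side. As a rough sketch (the pool name `tank`, zvol names, and sizes below are placeholders, not from this thread), you could create two sparse zvols that differ only in volblocksize, fill both with the same guest data, and compare the resulting space usage and compression ratio:

```shell
# Two sparse test zvols, identical except for volblocksize
# (names and sizes are examples only).
zfs create -s -b 4K  -V 10G tank/test-vb4k
zfs create -s -b 16K -V 10G tank/test-vb16k

# After writing the same data to both (e.g. dd the same VM image
# onto each zvol), compare metadata and compression behaviour:
zfs get -o name,property,value \
    volblocksize,compressratio,logicalused,used \
    tank/test-vb4k tank/test-vb16k
```

With lz4, the larger volblocksize typically shows a better compressratio, since compression works across a wider span of data per block.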
George Melikov,
Tel. 7-915-278-39-36
Skype: georgemelikov


28.12.2017, 11:56, "Gena Makhomed via zfs-discuss" <zfs-discuss at list.zfsonlinux.org>:
> On 28.12.2017 8:51, Richard Elling via zfs-discuss wrote:
>>  Finally, in general, it is a bad idea to use recordsize or volblocksize = 4k, even if the physical_block_size = 4k.
>>  The overhead can be worse than the read/modify/write penalty of a larger recordsize, and NVMe devices are rarely
>>  bandwidth limited. YMMV, so test if you can.
> Why? Which overhead are you talking about?
> I use ZFS zvols for KVM storage on a server;
> the virtual machines use a 4k block size to read/write data.
> So I create all zvols for virtual machines with volblocksize = 4k:
> # zfs create -s -b 4K -V 1024G tank/vm-example-com
> With volblocksize = 128k there would be huge write amplification,
> slowing down the whole server.
> ZFS pool is mirror of two 4TB HDD:
> # zpool create -o ashift=12 -O compression=lz4 -O atime=off -O xattr=sa
> -O acltype=posixacl tank mirror
> ata-HGST_HUS724040ALA640_PN2331PBG32EMT-part4
> ata-HGST_HUS724040ALA640_PN2334PBH1TR5R-part4
> --
> Best regards,
>   Gena
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at list.zfsonlinux.org
> http://list.zfsonlinux.org/cgi-bin/mailman/listinfo/zfs-discuss
