[zfs-discuss] ...couple of issues/misunderstandings, hoping for clarification...

Christ Schlacta aarcane at aarcane.org
Tue Apr 10 15:34:04 EDT 2012

As your other issues have been addressed, I'll take a moment to address 
the one that wasn't below.
On 4/10/2012 05:51, UbuntuNewbie wrote:
> Hi list,
>   while expanding my use of the lovely ZFS filesystem, a few questions/
> observations arose:
> - What does the snapshot property "compression" mean? Isn't this really
> a filesystem property? What is it for?
> - When I moved a volume (zfs create -V ...) around by means of the
> rename sub-command, I found that neither the link at /dev/zvol nor the
> one at /dev/<poolname>/<volpath> got updated. I was able to work around
> this by exporting and reimporting the pool.
> - After taking a snapshot of a volume, the used size for the volume
> immediately doubles (1x volume + 1x per snapshot). This does not take
> into account how many changes - if any - have been made to the volume.
> Is this just a miscalculation, or doesn't ZFS use COW at the block
> level (as a dedup algorithm would)? That was confusing and unexpected!
> - After creating a pool with /dev/disk/by-id names, exporting it, and
> reimporting it, the names were replaced by names like sdb1. So what is
> the benefit of finding the by-id names in the first place? I was
> expecting those names to persist, to avoid issues caused by the
> renaming of devices depending on the connection sequence, or by
> connecting different external drives to the same port.
When you create or import a pool using /dev/disk/by-id names, those 
names are used across system reboots, etc.  If, however, you export the 
pool, those names are not used again on import; the disks are found 
wherever the import scan happens to locate them, usually as 
/dev/[hs]dXN device nodes.  If you want to keep using the 
/dev/disk/by-id names, you have to pass -d /dev/disk/by-id on import.  
They will then again persist across reboots, shutdowns, and even disk 
reorganizations/controller swaps.  If you only use /dev/sdXN names or 
import without specifying -d, then some controllers, drivers, or 
combinations thereof will reorder disks on boot, resulting in 
unpredictable device node <-> disk correlations, which causes issues 
with ZFS importing pools at boot time, and is bad, mmkay?
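Concretely, the workflow described above looks something like this (the
pool name "tank" here is just a placeholder for your own pool):

```shell
# Export the pool; device paths are re-resolved on the next import.
zpool export tank

# Re-import, telling ZFS to search /dev/disk/by-id so the stable
# by-id names get recorded in the pool configuration.
zpool import -d /dev/disk/by-id tank

# Verify which device names the pool is now using.
zpool status tank
```

These commands need root privileges and an actual ZFS pool, so treat
them as a sketch of the sequence rather than something to paste blindly.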
> Anyone?
> cheers, U.N.

More information about the zfs-discuss mailing list