[zfs-discuss] Re: Question around expected performance

Christ Schlacta aarcane at aarcane.org
Wed Jul 20 20:52:27 EDT 2011


I ran a similar experiment using dedup and compression.  If you have 
dedup enabled, this result is to be expected.
Otherwise, I ran into bus limitations on a similar setup: the machine 
could not saturate 6 SATA disks on the PCI bus, so if you have any 
possible bottlenecks there, that is worth investigating.  (I have since 
upgraded that system and can get 300 MB/s now, i.e. 50 MB/s * 6 disks.)  
Your numbers and reported speeds look very similar to what I saw with 
dedup enabled while writing from /dev/zero.
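If you want to rule that out on your side, something along these lines 
should do it (the dataset name below is just a guess based on your 
path, adjust as needed):

# check whether dedup or compression is enabled on the dataset
zfs get dedup,compression raid/data

# rerun the test with data that neither dedups nor compresses; note
# that /dev/urandom itself can become the bottleneck (it is CPU bound),
# so pre-generating the file elsewhere and copying it in gives a more
# honest number
dd if=/dev/urandom of=/raid/data/Downloads/random16g.bin bs=1M count=16000
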
On 7/20/2011 17:01, Selim wrote:
> Is it possible to change the ashift value of an existing pool?  I
> created my pool using the defaults, i.e., no explicit ashift value.
>
> My current write speed is between 50 and 80MB/s going over a 7 disk
> array with 4 disks on one controller and 3 disks on another.
>
> This is measured while doing the following:
> dd if=/dev/zero of=/raid/data/Downloads/16gig.bin bs=1M count=16000
> zpool iostat -v 2
>
> Sample output:
>                                       capacity     operations    bandwidth
> pool                              alloc   free   read  write   read  write
> --------------------------------  -----  -----  -----  -----  -----  -----
> raid                               303G  6.07T     16  1.02K  16.5K  63.5M
>   raidz1                           303G  6.07T     16  1.02K  16.5K  63.5M
>     wwn-0x50014ee257acfd73-part2      -      -      0    497  32.0K  10.7M
>     wwn-0x50014ee257ad0fc1-part2      -      -      1    505  95.9K  10.7M
>     wwn-0x50014ee257ad0cd5-part2      -      -      0    501  64.0K  10.7M
>     wwn-0x50014ee20201fa4d-part2      -      -      0    499  64.0K  10.7M
>     wwn-0x50014ee202579b4f-part2      -      -      0    529  32.0K  10.6M
>     wwn-0x50014ee202577a99-part2      -      -      0    521  32.0K  10.6M
>     wwn-0x50014ee257accab4-part2      -      -      0    518  32.0K  10.6M
> --------------------------------  -----  -----  -----  -----  -----  -----
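
Re the ashift question above: as far as I know, ashift is fixed per 
vdev at the time the vdev is created and cannot be changed on an 
existing pool; the usual route is to recreate the pool (or add new 
vdevs) with it set explicitly.  Roughly like this (disk names are 
placeholders):

# show the ashift currently recorded for the pool's vdevs
# (ashift=9 means 512-byte sectors, ashift=12 means 4K sectors)
zdb | grep ashift

# when recreating the pool, force 4K alignment explicitly
zpool create -o ashift=12 raid raidz1 \
    disk1 disk2 disk3 disk4 disk5 disk6 disk7
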
>
> On Jul 20, 3:59 pm, "Manuel Amador (Rudd-O)"<rud... at rudd-o.com>
> wrote:
>> First megabyte is a sane default choice for both SSDs and HDDs.
>>
>> In my case, this is my partition layout in my SSD:
>>
>> --------------------------------------------
>>
>> ~... at karen.dragonfear α:
>> fdisk /dev/sda
>>
>> Command (m for help): p
>>
>> Disk /dev/sda: 128.0 GB, 128035676160 bytes
>> 255 heads, 63 sectors/track, 15566 cylinders, total 250069680 sectors
>> Units = sectors of 1 * 512 = 512 bytes
>> Sector size (logical/physical): 512 bytes / 512 bytes
>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>> Disk identifier: 0x066527da
>>
>>     Device Boot      Start         End      Blocks   Id  System
>> /dev/sda1            2048     4196351     2097152   83  Linux
>> /dev/sda2         4196352    12585021     4194335   82  Linux swap / Solaris
>> /dev/sda3        12585022   250069679   118742329   bf  Solaris
>>
>> --------------------------------------------
>>
>> And this is my partition layout in one of the legs of my NAS with HDDs
>> (the WDEARS disks):
>>
>> --------------------------------------------
>>
>> [root at paola rudd-o]# fdisk /dev/sda
>>
>> WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.
>>
>> Command (m for help): p
>>
>> Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
>> 255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
>> Units = sectors of 1 * 512 = 512 bytes
>> Sector size (logical/physical): 512 bytes / 512 bytes
>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>> Disk identifier: 0x49d5b605
>>
>>     Device Boot      Start         End      Blocks   Id  System
>> /dev/sda1            2048     2099199     1048576   83  Linux
>> /dev/sda2         2099200     8390655     3145728   82  Linux swap / Solaris
>> /dev/sda3         8390656  3907029167  1949319256   bf  Solaris
>>
>> --------------------------------------
>>
>> Both systems run with ashift=12.  Both systems push in excess of 230
>> MB/s reads and ~80 MB/s writes.
>>
>> IT'S A BULLET TRAIN.  And finally a sane choice for a root file system
>> too!
>>
>> On Tue, 2011-07-19 at 17:01 -0700, Brian Behlendorf wrote:
>>> Very nice, I like seeing zfs push each of those disks to over 100MB/s!
>>> What was the correct alignment you eventually settled on?  By default
>>> when given a full disk zfsonlinux aligns the start of the pool to the
>>> first megabyte boundary.  If there's a better default choice for
>>> rotational media it would be nice to know.
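
For anyone checking their own layout against the 1 MiB default Brian 
mentions: with 512-byte sectors a partition start is 1 MiB aligned when 
its start sector is a multiple of 2048 (2048 * 512 B = 1 MiB), e.g. the 
start sector 2048 used for sda1 above.  Recent GNU parted can also do 
the check directly (device and partition number are placeholders):

# report whether partition 1 on /dev/sda meets the drive's optimal alignment
parted /dev/sda align-check optimal 1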
