[zfs-discuss] Re: 6x drive raidz1 Ubuntu 12.04 performance problems

Gordan Bobic gordan.bobic at gmail.com
Sun Jun 3 04:53:21 EDT 2012


On 06/02/2012 07:21 PM, DC wrote:
> Interesting additional note.  None of the drives have Multi-sector
> transfers turned on ...
>
> $ sudo hdparm -i /dev/sda
>
> /dev/sda:
>
>   Model=WDC WD20EARS-00J2GB0, FwRev=80.00A80, SerialNo=WD-WCAYY0289805
>   Config={ HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs
> FmtGapReq }
>   RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=50
>   BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=off
>   CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=3907029168
>   IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
>   PIO modes:  pio0 pio3 pio4
>   DMA modes:  mdma0 mdma1 mdma2
>   UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
>   AdvancedPM=no WriteCache=enabled
>   Drive conforms to: Unspecified:  ATA/ATAPI-1,2,3,4,5,6,7
>
>   * signifies the current active mode
>
> Yet the boot drive, an old crappy 100G 5400rpm drive, does have it
> turned on.  So do a pair of old 250G WD2500SD drives.  Why not the
> WD20EARS or Hitachi drives though ?
>
> I remember setting this up aeons ago for older systems, turning on
> dma, setting multsect to 16 and so on.  hdparm now wants a scary
> parameter to turn it on
>
> $ sudo hdparm -m16 /dev/sdb
> /dev/sdb:
>   setting multcount to 16
> Use of -m is VERY DANGEROUS.
> Only the old IDE drivers work correctly with -m with kernels up to at
> least 2.6.29.
> libata drives may fail and get hung if you set this flag.
> Please supply the --yes-i-know-what-i-am-doing flag if you really want
> this.
> Program aborted.
>
> I suppose setting up /etc/hdparm.conf to specifically turn it on for
> all the 2TB drives is the way to go.

That warning has been there for quite a long time. The funny thing is, 
I have _NEVER_ seen any corruption arise from it. I suspect it was an 
issue affecting some kernel versions at some point between 2.6.18 and 
2.6.32, in combination with particular disks or controllers. Since I 
use RHEL, I went straight from 2.6.18 (RHEL5) to 2.6.32 (RHEL6) and 
never noticed the problem.

But if there is a problem on your specific setup, at least with ZFS 
you'll notice any corruption that does take place, since every block 
is checksummed on read.
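For example, a scrub forces a read and checksum verification of every 
block (assuming your pool is called "tank" -- substitute your own pool 
name):

    $ sudo zpool scrub tank
    $ sudo zpool status -v tank

Any checksum errors the multcount change caused would show up in the 
CKSUM column of the status output.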

Gordan
