[zfs-discuss] Re: 6x drive raidz1 Ubuntu 12.04 performance problems
gordan.bobic at gmail.com
Wed Jun 6 18:48:26 EDT 2012
On 06/06/2012 22:47, Christ Schlacta wrote:
> On 6/6/2012 13:03, Gordan Bobic wrote:
>> I would have thought that you have noticed by now that the general
>> consensus is that there are no reliable disks any more. The only
>> reason I use Seagate and not WD and Samsung is because Seagates:
>> 1) Support Write-Read-Verify feature
>> 2) Don't lie to you in SMART
>> Hitachis seem to be quite reliable - at least compared to the
>> competition. No WRV feature, though. I cannot confirm whether they lie
>> about their reallocated sectors right now - the only two I have handy
>> are reading 0 reallocations at 12K and 21K power-on hours, 1TB models.
>> Whether that is because they lie or because they really are that good
>> - I don't know.
> I have 6 Hitachis in an array. Two report 1 reallocated sector. I keep
> meaning to RMA them, but I'm too lazy and planning an upgrade soon anyway.
LOL! 1 reallocated sector isn't worth RMA-ing, and it's not a "failure",
so the chances are they'd send you back the disk and charge you for the
shipping. It's only a failure if it runs out of reallocatable sectors
(usually several hundred), or something else goes critically bad in
SMART (or it just bricks).
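The "runs out of reallocatable sectors" check above can be scripted. Here is a
minimal sketch that pulls the raw Reallocated_Sector_Ct out of `smartctl -A`
output and applies a worry threshold - the threshold value is a made-up
illustration, not a vendor figure, and the parser assumes the classic ATA
attribute table layout:

```python
#!/usr/bin/env python3
"""Sketch: decide whether a disk's reallocated-sector count is worth worrying
about. Assumes the usual `smartctl -A` ATA attribute table layout."""

# Hypothetical threshold; "usually several hundred" spare sectors exist,
# so a handful of reallocations is normal wear, not a failure.
WORRY_THRESHOLD = 100

def reallocated_sectors(smartctl_output: str) -> int:
    """Return the RAW_VALUE of Reallocated_Sector_Ct from `smartctl -A` text."""
    for line in smartctl_output.splitlines():
        fields = line.split()
        # Attribute rows look like:
        # ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
        if len(fields) >= 10 and fields[1] == "Reallocated_Sector_Ct":
            return int(fields[9])
    raise ValueError("Reallocated_Sector_Ct not found in smartctl output")

def worth_rma(smartctl_output: str) -> bool:
    """True only if reallocations exceed the (illustrative) worry threshold."""
    return reallocated_sectors(smartctl_output) >= WORRY_THRESHOLD
```

You would feed it the output of `smartctl -A /dev/sdX`; a disk reporting 1
reallocated sector, as above, comes back as not worth RMA-ing.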
>> If there are good, non-bottlenecking, fully supported (drivers in
>> mainline Linux) SATA controllers, I never had one, apart from the ones
>> built into the MoBo south-bridge (I have had a very trouble-free
>> experience with both Intel ICH9 and AMD SB600 and have never managed
>> to saturate either with mechanical disks - both easily manage over
>> [...]).
>> Silicon Image (SIL 3124/3132) are OK insofar as they work and they
>> are very well supported and have NCQ, but neither can handle anywhere
>> near the throughput of the disks attached to them. 3132 (2-SATA ports)
>> is PCIe and tops out at about 170MB/s. 3124 is PCI and that
>> bottlenecks it even lower (and it has 4 SATA ports).
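A rough way to confirm that kind of controller bottleneck yourself is to
compare one disk's sequential rate against all disks on the card read in
parallel (a sketch only - device names are placeholders and the figures in
the comments are just the numbers quoted above):

```
# Per-disk baseline, one at a time:
hdparm -t /dev/sdb

# Aggregate: stream from every disk on the card at once.
for d in /dev/sdb /dev/sdc /dev/sdd; do
    dd if=$d of=/dev/null bs=1M count=1024 &
done
wait

# If each disk alone manages well over 100MB/s but the parallel run
# still sums to ~170MB/s, the card (e.g. a SIL 3132) is the limit,
# not the disks.
```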
>> I have a Marvell based 4-port SATA PCIe card in one of my machines but
>> for the life of me I cannot recall the model number. It manages about
>> 300MB/s IIRC, so still easy to saturate on linear transfers with a
>> full complement of mechanical disks. Not really had any problems with
>> it and I only run 3 disks off it, so in my specific setup it isn't a
>> bottleneck.
>> The LSI 8-port SAS card I have is generally OK, but disks start
>> falling off the bus (and thus falling out of the zpool) within seconds
>> of running SMART self-tests on them. For this reason alone I wouldn't
>> buy another one.
> I'd be interested to know whether this is specific to the card model
> you have, or if it affects older and newer cards as well. I have an LSI
> 1068E based controller, and I never run smart tests, but I check the
> smart status religiously.
Yup, LSI 1068E is the one I've got problems with.
Fire off "smartctl --test=long" on a disk hanging off that controller
and run iozone on that disk at the same time. You should see it drop
off the bus within seconds.
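Something along these lines should reproduce it (the device name, pool
name and iozone flags are placeholders for one plausible invocation):

```
# Kick off a long SMART self-test on the suspect disk...
smartctl --test=long /dev/sdb

# ...and immediately hammer the same disk with I/O:
iozone -a -I -f /tank/iozone.tmp

# Then watch for the disk dropping off the bus and out of the pool:
dmesg | tail -n 20
zpool status -v
```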