[zfs-discuss] ZFS NAS build: hardware recommendations.
gordan.bobic at gmail.com
Wed May 29 03:28:31 EDT 2013
On 05/29/2013 06:13 AM, Schlacta, Christ wrote:
> My advice is quit worrying so damn much about SATA3 and pick an AMD
> board. Nearly every AMD motherboard and processor supports ECC RAM.
> (Glad to see ECC RAM on your list of requirements!)
I have also vowed never again to get a machine without ECC RAM if at all
possible. Troubleshooting marginal RAM is a terrible way to waste weeks
of your life.
> As for sata3, the only benefit is for ssds which can saturate the bus.
> Spinning rust cannot saturate the bus. Furthermore, you'll likely end up
> upgrading to an LSI sas controller at some point, which means you *only*
> use the onboard chip for booting.
That's a bit of a stretch. I use LSI cards myself, but only because the
alternatives are several times more expensive. LSI cards are cheap in
comparison - but you get what you pay for.
> What capacity do you need? I'd say look for WD black or red drives. I've
> had good luck with the blacks so far and have heard nothing bad about
> the reds. Failing that, quality is a bit of a crapshoot from model to
> model and run to run.
I find Seagate's warranty support to be the most hassle free. For most
other manufacturers even finding the warranty information on their
website is a challenge. Disks fail, often within warranty - you don't
want the hassle of spending hours getting an RMA arranged.
Seagate, Toshiba and Hitachi disks are also, in my experience, the least
untrustworthy when it comes to their on-board SMART data. WD and Samsung
are the worst.
Sadly, since Seagate bought Samsung and WD bought Hitachi, all bets are
off.
> Let the warranty guide you, and burn in your
> drives with badblocks before you go live same as you use memtest86+ to
> burn in your ram.
Badblocks only does sequential testing, not much seeking. It doesn't put
much stress on the drive. You might as well run the on-board SMART
self-tests. Something that will test the drive for a few days under a
random read-write load would be better. You might have to write a tool
that does this, though - benchmarking tools are good for load
generation, but AFAIK they don't actually do data error checking in
read-write mode. Make sure each of your test batches is bigger than the
disk's on-board cache.
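The loop described above can be sketched roughly as follows. This is an
illustrative sketch only: TARGET, SIZE_MB, BLOCK_MB and PASSES are made-up
defaults aimed at a scratch file, not real settings. For an actual burn-in
you would point TARGET at a whole spare disk (destroying its contents), add
oflag=direct/iflag=direct so reads hit the platters rather than the page
cache, and let it run for days.

```shell
#!/bin/sh
# Sketch of a random read-write verifier: write a pseudo-random batch at a
# random block-aligned offset, read it back, compare checksums.
# All defaults below are illustrative assumptions for a scratch file.
TARGET=${TARGET:-/tmp/fake-disk}
SIZE_MB=32     # size of the test area; the whole disk in a real run
BLOCK_MB=4     # per-batch size; keep each batch bigger than the disk cache
PASSES=4       # a real burn-in would loop for days, not 4 passes

# create the scratch target if it does not already exist
[ -e "$TARGET" ] || dd if=/dev/zero of="$TARGET" bs=1M count="$SIZE_MB" 2>/dev/null

pass=0
while [ "$pass" -lt "$PASSES" ]; do
    # pick a random block-aligned offset inside the test area
    off=$(( $(od -An -N2 -tu2 /dev/urandom) % (SIZE_MB / BLOCK_MB) ))
    # generate a random batch and remember its checksum
    dd if=/dev/urandom of=/tmp/batch.$$ bs=1M count="$BLOCK_MB" 2>/dev/null
    want=$(md5sum /tmp/batch.$$ | awk '{print $1}')
    # write it out, read it back, verify
    # (on a real disk add oflag=direct / iflag=direct to bypass the page cache)
    dd if=/tmp/batch.$$ of="$TARGET" bs=1M seek=$((off * BLOCK_MB)) conv=notrunc 2>/dev/null
    got=$(dd if="$TARGET" bs=1M skip=$((off * BLOCK_MB)) count="$BLOCK_MB" 2>/dev/null | md5sum | awk '{print $1}')
    if [ "$got" != "$want" ]; then
        echo "data error at block offset $off" >&2
        exit 1
    fi
    pass=$((pass + 1))
done
rm -f /tmp/batch.$$
echo "all passes verified"
```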
memtest86+ is worse than useless. It will only find RAM that is
physically, permanently damaged. It will not find issues with
marginal/unstable RAM. If you can stomach putting Windows on the machine
temporarily, OCCT is very good at finding stability issues (large data
sets for RAM testing). Failing that, mount a large tmpfs (most of the
RAM), dd some large files from /dev/urandom, and run as many loops as
you have logical cores comparing checksums for 24 hours or so. MD5 for
RAM testing (MCH heavy), SHA512 for CPU testing (CPU heavy).
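The tmpfs approach above can be sketched like this. TESTDIR, SIZE_MB and
DURATION are illustrative placeholders: a real run would use a tmpfs mount
sized to most of your RAM and a DURATION of around 86400 (24 hours).

```shell
#!/bin/sh
# Sketch of the tmpfs checksum loop: fill a file from /dev/urandom, then
# re-checksum it from one loop per logical core and compare against the
# reference. Defaults here are toy values for demonstration.
TESTDIR=${TESTDIR:-/tmp/ramtest}   # mount -t tmpfs here for a real run
SIZE_MB=64                         # most of RAM in a real test
DURATION=5                         # seconds; ~86400 for a 24h burn-in

mkdir -p "$TESTDIR"
dd if=/dev/urandom of="$TESTDIR/blob" bs=1M count="$SIZE_MB" 2>/dev/null
ref=$(md5sum "$TESTDIR/blob" | awk '{print $1}')   # md5 stresses the MCH;
                                                   # swap in sha512sum for CPU

worker() {
    end=$(( $(date +%s) + DURATION ))
    while [ "$(date +%s)" -lt "$end" ]; do
        now=$(md5sum "$TESTDIR/blob" | awk '{print $1}')
        if [ "$now" != "$ref" ]; then
            echo "CHECKSUM MISMATCH in worker $1" >&2
            exit 1
        fi
    done
}

# one checksum loop per logical core
pids=""
i=0
ncpu=$(nproc)
while [ "$i" -lt "$ncpu" ]; do
    worker "$i" &
    pids="$pids $!"
    i=$((i + 1))
done

fail=0
for pid in $pids; do
    wait "$pid" || fail=1
done
[ "$fail" -eq 0 ] && echo "no mismatches detected"
```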
> One last thing! If you can, find an Intel nic either on board or pci.
> They're worth it!