[zfs-discuss] Expanding pool
roedie at roedie.nl
Thu Oct 31 15:12:25 EDT 2013
On 31.10.2013 19:04, Schlacta, Christ wrote:
> On Oct 31, 2013 9:45 AM, "Gordan Bobic" <gordan.bobic at gmail.com> wrote:
> > On Thu, Oct 31, 2013 at 4:11 PM, Schlacta, Christ
> <aarcane at aarcane.org> wrote:
> >> If you can insert the new drives without pulling the old ones, you can
> replace them one at a time while only needing to visit the machine 10/n
> times, where n is the number of spare drive bays you have available.
> >> 4TB drives are indeed scary. Don't run at this capacity/level of
> redundancy any longer than strictly required.
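The "visit 10/n times" arithmetic above (10 drives to swap, n free bays per trip) rounds up, since a partial batch still costs a full visit. A minimal sketch of that back-of-envelope, with the function name being my own:

```python
import math

def service_visits(drives_to_replace: int, spare_bays: int) -> int:
    """Trips to the machine when each visit can stage up to
    `spare_bays` new drives alongside the old ones."""
    return math.ceil(drives_to_replace / spare_bays)

# 10 drives with 2 free bays takes 5 trips; with 3 free bays, 4 trips.
print(service_visits(10, 2))  # → 5
print(service_visits(10, 3))  # → 4
```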
> > Not to mention that as they start to get full, resilvering time
> even with large files is going to be over a week. With small files it
> may be nearer a month. 4TB drives are a silly idea for most use-cases.
> If you need that much data density, switch to 2.5" disks and use a 4:1
> or 6:1 5.25" tray. That's what I am now doing as and when my 3.5" 1TB
> disks fail.
> 2.5" drives are still insanely expensive compared to their 3.5"
> counterparts. Furthermore, those 4-in-1 and 6-in-1 adapters are a
> nightmare and lead to cabling woes. There isn't even a viable solution
> that uses a single SFF-8087 connector. The only such product ever
> in production got no good reviews.
> Adding insult to injury, the trays cost as much as the drives do, and
> most 2.5" adapters I've used don't even have activity or status LEDs.
> I like the solution in theory, as 2.5" drives use less power, have lower
> seek times, and are generally more reliable. We're just not there yet.
Are there big 19" chassis where you can cram a lot of those 5.25" bays in?
With dual-pathing and such?