[zfs-discuss] Re: xbmc using zfs on linux - help needed

Corin corin.thorpe at gmail.com
Sun Jun 17 04:08:02 EDT 2012


That's precisely my use case: a domestic media server plus backups of
documents and pictures. It can easily provide 70-80MB/sec over an SMB
network share with no tweaking at all. For my needs that's more than
sufficient. I suppose your best route forward is to proceed with 4x2TB and
do some performance testing. If it's no good, it's time to consider
rebuilding with a 5th drive and tuning ZFS further.
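For a rough first pass (a sketch, assuming the pool ends up named "tank"
and mounted at /tank; adjust names and sizes to suit), something like:

  dd if=/dev/zero of=/tank/testfile bs=1M count=8192 conv=fdatasync
  zpool iostat -v tank 5

gives a crude local sequential-write figure (zeros are fine since
compression is off by default), and the iostat in a second terminal shows
how the load spreads across the drives. Copying a large file over the SMB
share is the number that actually matters, though.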
On Jun 17, 2012 8:48 AM, "James Bagshawe" <jamesbagshawe at gmail.com> wrote:

> How much performance loss would I suffer if I stuck with the 4x2TB
> setup I was planning? That has the advantage of maximum chassis
> compatibility and stability, but I don't want to cripple my system's
> effectiveness if the performance drop-off from a 4-drive raidz pool
> would be significant.
>
> I could go to five if I bought another drive and ran the OS from USB.
> Anything beyond that gets complicated quickly. The downsides would be
> possible stability issues (I plan to leave the server on 24/7) and the
> cost of another 2TB drive.
>
> The other option would be to slim down to a 3-drive raid pool, but I
> quite fancy the extra space. The usage would be 90% media server data and
> 10% backup storage that isn't updated often.
>
> On Jun 17, 1:10 am, Christ Schlacta <aarc... at aarcane.org> wrote:
> > You can use an SSD for your root fs if you want, but it won't matter
> > much.  You'll notice slightly faster boot and initial load times, but
> > for the most part the system will run from memory once it's up.
> >
> > On 6/16/2012 15:34, Richard Laager wrote:
> > > If you're going to separate the OS and the data, you have basically
> > > two options:
> >
> > > A) Use two pools. Set up the OS, ignoring the 4x2TB drives completely.
> > > Once the system is fully installed and booted normally, just do a
> > > `zpool create ... tank raidz /dev/disk/by-id/...` for your data pool.
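> > > (A concrete sketch with made-up device IDs; list your real ones with
> > > `ls -l /dev/disk/by-id/` and substitute them:
> > >
> > >   zpool create tank raidz \
> > >     /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
> > >     /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4
> > >
> > > and add `-o ashift=12` after `create` if the drives have 4K sectors.)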
> >
> > > B) Use something other than ZFS (e.g. ext4 on MD software RAID-1) for
> > > the root filesystem. This means you lose the features of ZFS, but it
> > > also makes it easier to recover if your ZFS installation (i.e. the
> > > kernel modules) has problems.
> >
> > ZFS root is still a bit problematic, but it's easy enough to tar up /
> > into a file on ZFS for backup purposes.
> >
> > Unless you need uptime and can't take a day to rebuild, using raid for
> > the root volume of a media system seems quite wasteful.  Keep a good
> > backup, and you can redeploy in about 2 hours (15 minutes building ZFS,
> > 1:30 untarring your backup, 15 more minutes mucking around with grub,
> > and a few extra minutes partitioning the new drive).
> >
> > > Also, if possible (with your budget and chassis), 5 drives would be
> > > better for a raidz1, as that leaves you 4 data drives. This is a best
> > > practice. Your 4x2TB scenario will still work, but it's not optimal.
> >
> > I second this.  Try to stick to a power of 2 plus N drives for raidzN
> > systems (e.g. 4+1 for raidz1).  ZFS is much happier that way, since all
> > the data blocks et al. use powers of 2.
> >
> > > Finally, keep in mind that RAIDZ has the IOPS of a single drive, so if
> > > performance is critical, use mirroring instead.
>
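For reference, the mirrored layout Richard mentions at the end would be
built roughly like this (device IDs are placeholders again): two two-way
mirrors striped together, trading capacity (~4TB usable from 4x2TB versus
~6TB with raidz1) for much better random IO:

  zpool create tank \
    mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4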