[zfs-discuss] One RAID0 disks for ZFS
ed at ewwhite.net
Wed Apr 18 16:34:03 EDT 2018
Seriously, for old HP equipment (pre-Gen8), I think of ZFS as a volume manager instead of a complete RAID solution.
I don’t go down the path of separate single-disk RAID0 volumes because of the metadata and drive-replacement issues noted in my ServerFault posts.
Hardware RAID is fine for that era of equipment, and I simply use the HP RAID utilities to carve out the LUNs/LogicalDrives needed for my use.
The resulting block devices are presented to ZFS as single vdevs.
logicaldrive 1 (72.0 GB, RAID 1+0, OK)
logicaldrive 2 (1.5 TB, RAID 1+0, OK)
physicaldrive 1I:2:1 (port 1I:box 2:bay 1, SAS, 900.1 GB, OK)
physicaldrive 1I:2:2 (port 1I:box 2:bay 2, SAS, 900.1 GB, OK)
physicaldrive 1I:2:3 (port 1I:box 2:bay 3, SAS, 900.1 GB, OK)
physicaldrive 1I:2:4 (port 1I:box 2:bay 4, SAS, 900.1 GB, OK)
physicaldrive 2I:2:5 (port 2I:box 2:bay 5, SAS, 900.1 GB, OK)
physicaldrive 2I:2:6 (port 2I:box 2:bay 6, SAS, 900.1 GB, OK)
This represents six physical drives sliced into logicaldrive 1, which holds the operating system, and logicaldrive 2, a block device presented to ZFS:
# zpool status -v
  pool: vol1
 state: ONLINE
  scan: scrub repaired 0B in 0h39m with 0 errors on Thu Mar 22 04:48:56 2018
config:

	NAME                                      STATE     READ WRITE CKSUM
	vol1                                      ONLINE       0     0     0
	  wwn-0x600508b1001ca94fce283bde2337d226  ONLINE       0     0     0

errors: No known data errors

  pool: vol2
 state: ONLINE
  scan: scrub repaired 0B in 0h18m with 0 errors on Thu Mar 22 04:27:03 2018
config:

	NAME         STATE     READ WRITE CKSUM
	vol2         ONLINE       0     0     0
	  mirror-0   ONLINE       0     0     0
	    nvme0n1  ONLINE       0     0     0
	    nvme2n1  ONLINE       0     0     0
	  mirror-1   ONLINE       0     0     0
	    nvme1n1  ONLINE       0     0     0
	    nvme3n1  ONLINE       0     0     0

errors: No known data errors
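For reference, a pool like vol1 is created on the hardware LUN in the ordinary way; a sketch, using the WWN path from the output above (pool name and ashift choice are illustrative, not prescribed):

```shell
# Create a single-vdev pool on the hardware RAID logical drive,
# referencing it by its persistent WWN path rather than /dev/sdX,
# since sd* letters can change across reboots.
# Find the path with: ls -l /dev/disk/by-id
zpool create -o ashift=12 vol1 \
    /dev/disk/by-id/wwn-0x600508b1001ca94fce283bde2337d226
```

The controller handles redundancy here, so ZFS sees one healthy device; scrubs still detect (but cannot self-heal) corruption on a single vdev.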
ed at ewwhite.net
From: zfs-discuss <zfs-discuss-bounces at list.zfsonlinux.org> on behalf of Bond Masuda via zfs-discuss <zfs-discuss at list.zfsonlinux.org>
Reply-To: "zfs-discuss at list.zfsonlinux.org" <zfs-discuss at list.zfsonlinux.org>, Bond Masuda <bond.masuda at jlbond.com>
Date: Wednesday, April 18, 2018 at 2:52 PM
To: "zfs-discuss at list.zfsonlinux.org" <zfs-discuss at list.zfsonlinux.org>
Subject: Re: [zfs-discuss] One RAID0 disks for ZFS
On 04/18/2018 12:01 AM, Makrand via zfs-discuss wrote:
I am setting up ZFS on CentOS on a pretty old HP ProLiant DL580 server. This server has two P400 RAID controllers, each with 8 SAS disks behind them.
These RAID controllers can't be set to passthrough mode from the BIOS, so I can't present the SAS disks to ZFS as-is (as JBOD). I can, however, create a RAID0 logical drive with one disk each, and ZFS can then detect each disk.
Will it work if I create 8 RAID0 drives and then feed them to ZFS for mirror creation, etc.? I am wondering whether that will affect ZFS's ability to detect faulty drives down the line. Are there any other cons/negatives to this method?
I know you've probably done your own online searches on this topic, and the general consensus is to avoid single-disk RAID0 and just replace the RAID controller with an HBA. It's true that it adds a bit of complexity. That said, I did this a long time ago, when I was first messing around with ZoL and had limited hardware options: an old Dell PE2900 with a PERC H700 card (LSI SAS 2108-based RAID controller), 8x single-disk RAID0, and ZFS raidz2 on top of those eight devices.

It actually worked for many years like that. I had to replace hard drives twice, both times without much issue, although I did have to use the MegaCli command to re-create the RAID0 on the replacement disk. smartctl also worked, but required additional command options to get the SMART data through the RAID controller.

One other thing I noticed is that performance was pretty bad for some reason, but it was not a requirement, so I just left it alone. Also (this might be particular to the hardware I was using: backplane, H700, cables, etc.), the drive ordering 0,1,2,...,7 did not follow what was printed on the outside of the server by the drive slots; in fact, when I mapped it out, I think it was reversed, something like 3,2,1,0,7,6,5,4. That caused some confusion when identifying a failed drive. I resorted to the old trusted method: take the disk out of the vdev, run dd to a file on the pool, and watch for the HDD activity LED that isn't blinking.
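For anyone in the same situation, the commands involved look roughly like this. A sketch only: the enclosure/slot IDs, device names, and pool name are example values, not the ones from my setup.

```shell
# Re-create a single-disk RAID0 on a replacement drive.
# [32:4] is an example enclosure:slot pair; list yours with:
#   MegaCli -PDList -aALL
MegaCli -CfgLdAdd -r0 [32:4] -a0

# Read SMART data through the RAID controller; the megaraid
# device ID comes from: smartctl --scan
smartctl -a -d megaraid,4 /dev/sda

# Identify a physical drive the low-tech way: offline it from
# the vdev, write to the pool, and look for the activity LED
# that stays dark.
zpool offline tank sdb
dd if=/dev/zero of=/tank/blinktest bs=1M count=1000
rm /tank/blinktest
zpool online tank sdb
```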
Short answer: if your options are limited, it is a workable solution, but if you can find and afford a regular HBA that works in your machine, that is preferred.
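If you do go the single-disk RAID0 route, building the mirrors afterward is the ordinary ZFS step; a sketch, assuming the eight logical drives appear with WWN-style names under /dev/disk/by-id (the names below are placeholders to fill in):

```shell
# Four mirrored pairs from eight single-disk RAID0 logical drives.
# Use the persistent /dev/disk/by-id names, not /dev/sd* letters,
# since sd* ordering can change across reboots (or be reversed,
# as noted above).
zpool create tank \
    mirror wwn-0x...a wwn-0x...b \
    mirror wwn-0x...c wwn-0x...d \
    mirror wwn-0x...e wwn-0x...f \
    mirror wwn-0x...g wwn-0x...h
```

ZFS will still checksum and report per-device errors on these vdevs; what you lose is the controller passing through drive-fault details cleanly, hence the MegaCli and smartctl workarounds mentioned above.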
I am new to zfsonlinux and just wanted to check before I do it.
zfs-discuss mailing list
zfs-discuss at list.zfsonlinux.org