[zfs-discuss] Single-disk RAID0 volumes for ZFS

Bond Masuda bond.masuda at jlbond.com
Wed Apr 18 15:52:27 EDT 2018



On 04/18/2018 12:01 AM, Makrand via zfs-discuss wrote:
> Hello,
>
> I am setting up ZFS on CentOS on a pretty old HP ProLiant DL580
> server. This server has two P400 RAID controllers, each with 8 SAS
> disks behind them.
>
> These RAID controllers can't be set to passthrough mode from the
> BIOS, so I can't present the SAS disks to ZFS as-is (as JBOD). I can,
> however, create one RAID0 logical drive per disk, and ZFS then
> detects each disk individually.
>
> Will it work if I create 8 RAID0 logical drives and then feed them to
> ZFS for mirror creation, etc.? I am just wondering whether that will
> affect ZFS's ability to detect faulty drives down the line. Any other
> cons/negatives this method might have?
>
I know you've probably done your own online searches on this topic, and
the general consensus is to not use single-disk RAID0 and to just
replace the RAID controller with an HBA. It is true that the extra
RAID0 layer adds a little bit of complexity. That said, I did this a
long time ago when I was first messing around with ZoL and had limited
hardware options available to me: an old Dell PE2900 with a PERC H700
card (an LSI SAS 2108 based RAID controller), with 8x single-disk RAID0
and ZFS raidz2 on top of those eight logical drives.
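The pool creation itself was nothing special once the OS could see the
eight logical drives. A rough sketch (the pool name and sdX device
names are placeholders; on a real system use the persistent
/dev/disk/by-id/ paths instead):

    # one entry per single-disk RAID0 logical drive; names will differ
    zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh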
It actually worked for many years like that, and I had to replace hard
drives twice, both times without much issue, although I did have to use
the MegaCLI command to re-create the single-disk RAID0 on the
replacement disk.
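From memory it went something like this (the [32:5] enclosure:slot pair
and the adapter number are placeholders; look up the real values first):

    # find the Enclosure Device ID and Slot Number of the new disk
    MegaCli64 -PDList -aALL | egrep 'Enclosure Device ID|Slot Number'
    # re-create a single-disk RAID0 on that slot (adapter 0)
    MegaCli64 -CfgLdAdd -r0 [32:5] -a0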
smartctl also worked, but it required additional command options to get
the SMART data through the RAID controller.
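On the H700 that meant smartmontools' megaraid device type; on your
P400 the equivalent would be the cciss type. The device IDs below are
placeholders; walk through 0,1,2,... until each disk answers:

    # H700 / MegaRAID: SMART data for the disk with device ID 0
    smartctl -a -d megaraid,0 /dev/sda
    # HP Smart Array (P400): same idea via the cciss driver type
    smartctl -a -d cciss,0 /dev/cciss/c0d0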
One other thing I did notice is that performance was pretty bad for
some reason, but performance was not a requirement for that box, so I
just left it alone.

Also, it might be particular to the hardware I was using (backplane,
H700, cables, etc.), but the hard drive ordering 0,1,2,...,7 did not
follow what was printed on the outside of the server next to the drive
slots; when I mapped it out, I believe it was reversed in halves, like
3,2,1,0,7,6,5,4. That caused some confusion when identifying a failed
drive, so I resorted to the old trusted method: take the disk out of
the vdev, run dd to a file on the pool, and watch for the one HDD
activity LED that isn't blinking.
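A minimal sketch of that trick, assuming a pool named tank and the
suspect disk at sdc (both placeholders):

    # take the suspect disk out of service so it sees no pool I/O
    zpool offline tank sdc
    # generate steady writes; every healthy drive's LED will blink
    dd if=/dev/zero of=/tank/blink.tmp bs=1M count=4096
    # clean up and put the disk back into service
    rm /tank/blink.tmp
    zpool online tank sdc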

Short answer: if your options are limited, it is a workable solution;
but if you can find and afford a regular HBA that works in your
machine, that is the better route.

> I am new to zfsonlinux and just wanted to check before I do it.
>
>
> --
> Makrand
>


