[zfs-discuss] One RAID0 disks for ZFS

Gionatan Danti g.danti at assyoma.it
Wed Apr 18 15:36:41 EDT 2018

Il 18-04-2018 11:20 Sam Van den Eynde via zfs-discuss ha scritto:
> Hi,
> Reading
> https://serverfault.com/questions/29349/disabling-raid-feature-on-hp-smart-array-p400
> [2] if you delete all arrays on the P400 (so leave all disks
> unconfigured) and boot the system, it should passthrough the drives
> "automagically". Did you try that?

Reading the other replies, it seems that the accepted answer is wrong 
(at least for the P400).
He can try, but if it doesn't work, he has the following options:
- use single disk RAID0 arrays;
- use hardware RAID5/6/10 presenting a single vdev to ZFS;
- use multiple 2-way RAID1 mirrors, striping ZFS on top.

With solution n.1 ZFS can somewhat "see" the raw disks; however, a 
single failed disk will bring down the entire 1-disk array. RAID 
controllers can react badly to that (ie: freezing all operations), so he 
has to carefully test what happens on the P400.
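As a rough sketch of solution n.1, single-disk RAID0 logical drives can be created with the HP CLI tool; the slot number and drive bay addresses below are assumptions and must be adapted to the actual system:

```shell
# Sketch, assuming the P400 is in slot 0 and drives are at 1I:1:1 etc.
# Create one RAID0 logical drive per physical disk, then give the
# resulting block devices to ZFS.
hpacucli ctrl slot=0 create type=ld drives=1I:1:1 raid=0
hpacucli ctrl slot=0 create type=ld drives=1I:1:2 raid=0
hpacucli ctrl slot=0 ld all show
```

Again, what happens when one of these 1-disk arrays fails should be tested before trusting the setup.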

Solution n.2 is the easiest, but he loses ZFS's auto-healing feature 
against bitrot (detection remains, though). That said, enterprise 
controllers should have patrol reads enabled by default, which minimizes 
the chance of latent sector errors going unnoticed.

To me, solution n.3 seems the best one: ZFS sees multiple striped 
vdevs, leaving its auto-healing capability intact. Moreover, as each 
array is composed of 2 mirrored disks, losing a single disk should pose 
no problem at all. This solution can be adapted to use 2x RAID5 or RAID6 
arrays, based on the number of disks, expected usable space and 
redundancy target.
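A minimal sketch of solution n.3, assuming the controller exports each 2-disk hardware RAID1 array as a single block device (the device names /dev/sdb and /dev/sdc and the pool name "tank" are placeholders):

```shell
# Sketch: stripe ZFS across two hardware RAID1 logical drives.
# Each device is one 2-way mirror presented by the P400, so ZFS
# stripes over them as two top-level vdevs.
zpool create tank /dev/sdb /dev/sdc

# Verify the layout: both devices should appear as separate vdevs.
zpool status tank
```

Note that redundancy here is handled by the controller, not by ZFS, so ZFS still can't repair a block by reading the other half of the mirror; the healing happens only across checksummed copies/metadata ZFS itself keeps.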

Finally, whenever possible, I really prefer to use a "dumb" HBA 
(even an integrated one) to drive my ZFS arrays.

Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti at assyoma.it - info at assyoma.it
GPG public key ID: FF5F32A8
