[zfs-discuss] One RAID0 disks for ZFS

Makrand makrandsanap at gmail.com
Fri Apr 27 02:16:27 EDT 2018


On Thu, Apr 19, 2018 at 2:04 AM, Edmund White via zfs-discuss <
zfs-discuss at list.zfsonlinux.org> wrote:

> Seriously, for old HP equipment (pre-Gen8), I think of ZFS as a volume
> manager instead of a complete RAID solution.
>
> I don’t go down the path of separating RAID0 volumes because of the
> metadata and drive replacement issues noted in my ServerFault posts.
>
> See: https://serverfault.com/a/545261/13325
>
>
> Hardware RAID is fine for that era of equipment, and I simply use the HP
> RAID utilities to carve out the LUNs/LogicalDrives needed for my use.
>
> The resulting block devices are presented to ZFS as single vdevs.
>
>
>
> For example:
>
>
>
>       logicaldrive 1 (72.0 GB, RAID 1+0, OK)
>
>       logicaldrive 2 (1.5 TB, RAID 1+0, OK)
>
>
>
>       physicaldrive 1I:2:1 (port 1I:box 2:bay 1, SAS, 900.1 GB, OK)
>
>       physicaldrive 1I:2:2 (port 1I:box 2:bay 2, SAS, 900.1 GB, OK)
>
>       physicaldrive 1I:2:3 (port 1I:box 2:bay 3, SAS, 900.1 GB, OK)
>
>       physicaldrive 1I:2:4 (port 1I:box 2:bay 4, SAS, 900.1 GB, OK)
>
>       physicaldrive 2I:2:5 (port 2I:box 2:bay 5, SAS, 900.1 GB, OK)
>
>       physicaldrive 2I:2:6 (port 2I:box 2:bay 6, SAS, 900.1 GB, OK)
>
>
>
> This represents 6 physical drives sliced into logicaldrive 1, holding the
> operating system
>
> And logicaldrive 2, a block device presented to ZFS:
>
>
>
> # zpool status -v
>
>   pool: vol1
>
> state: ONLINE
>
>   scan: scrub repaired 0B in 0h39m with 0 errors on Thu Mar 22 04:48:56
> 2018
>
> config:
>
>
>
>       NAME                                      STATE     READ WRITE CKSUM
>
>       vol1                                      ONLINE       0     0     0
>
>         wwn-0x600508b1001ca94fce283bde2337d226  ONLINE       0     0     0
>
>
>
> errors: No known data errors
>
>
>
>   pool: vol2
>
> state: ONLINE
>
>   scan: scrub repaired 0B in 0h18m with 0 errors on Thu Mar 22 04:27:03
> 2018
>
> config:
>
>
>
>       NAME         STATE     READ WRITE CKSUM
>
>       vol2         ONLINE       0     0     0
>
>         mirror-0   ONLINE       0     0     0
>
>           nvme0n1  ONLINE       0     0     0
>
>           nvme2n1  ONLINE       0     0     0
>
>         mirror-1   ONLINE       0     0     0
>
>           nvme1n1  ONLINE       0     0     0
>
>           nvme3n1  ONLINE       0     0     0
>

One quick question, Ed: if logicaldrive 2 is a single block device for the
zpool, how have you made two mirror vdevs out of that single device? Also, at
the end of the day your LD is still some sort of RAID created at the hardware
level.


@ All,

Thanks a lot for your replies. They have really helped. I've done some testing
with a ZFS pool built over 8 single-drive RAID 0 logicaldrives. Luckily one of
the disks went bad, and sure enough ZFS was able to mark it as faulted in the
pool (status output below).
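
For reference, a pool of this shape would have been created with something
like the command below (just a sketch, not the exact command line I used; the
single-drive RAID0 LDs show up as /dev/cciss/c0d0 through c0d7 on this
controller):

[root@kvm-store1 /]# zpool create p-11 raidz2 \
    /dev/cciss/c0d0 /dev/cciss/c0d1 /dev/cciss/c0d2 /dev/cciss/c0d3 \
    /dev/cciss/c0d4 /dev/cciss/c0d5 /dev/cciss/c0d6 /dev/cciss/c0d7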


[root@kvm-store1 /]# zpool status

  pool: p-11
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
  scan: resilvered 0B in 0h0m with 0 errors on Thu Apr 26 11:41:06 2018
config:

        NAME        STATE     READ WRITE CKSUM
        p-11        DEGRADED     0     0     0
          raidz2-0  DEGRADED     0     0     0
            c0d0    ONLINE       0     0     0
            c0d1    ONLINE       0     0     0
            c0d2    FAULTED      0     0     0  too many errors
            c0d3    ONLINE       0     0     0
            c0d4    ONLINE       0     0     0
            c0d5    ONLINE       0     0     0
            c0d6    ONLINE       0     0     0
            c0d7    ONLINE       0     0     0

Note that HPACUCLI is still showing the LD and PD as OK. You now need to take
the faulted device offline in the zpool:

[root@kvm-store1 /]# zpool offline p-11 /dev/cciss/c0d2
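
At this point a plain status check should show c0d2 as OFFLINE rather than
FAULTED (standard ZFS, nothing HP-specific):

[root@kvm-store1 /]# zpool status p-11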

After this, delete the LD using HPACUCLI:


==> ctrl slot=11 logicaldrive 3 delete forced

Check that the LD is gone and the PD is unavailable:


==> ctrl slot=11 show config

Now remove the disk manually and put in a replacement disk with the same
parameters. Wait 2-4 minutes, since the controller needs some time to
recognize the new disk, then run:


==> ctrl slot=11 show config

Create a new LD:


==> ctrl slot=11 create type=ld drives=2I:1:3 raid=0

Check the config again:


==> ctrl slot=11 show config

Finally, mark the device online again in the pool:

[root@kvm-store1 /]# zpool online p-11 /dev/cciss/c0d2
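
If you want a final sanity check on the replacement, a scrub plus one more
status check afterwards is cheap (again, just the standard ZFS commands):

[root@kvm-store1 /]# zpool scrub p-11
[root@kvm-store1 /]# zpool status p-11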

Done!!
