[zfs-discuss] Re: Question regarding resizeing/expanding a zpool

Ben Hodgens aequitas at gmail.com
Sat Apr 14 21:41:42 EDT 2012


I understand a little better now. I'm not sure what you're trying to do
will actually work, because it's an edge case: zfs has always been
intended to be used without hardware raid.

In theory you may be able to use parted to extend the GPT, but I'm unsure
whether it will break the vdev - as zfs sees it, at least. Realize that
running zfs on top of hardware raid costs you the best features of zfs:
there isn't much, IMO, that would drive me to do what you're doing. Linux
mdraid would be preferable in many ways.
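
If you do want to try it, the rough sequence I'd expect - untested on my
end, and going by your output below where the array shows up as /dev/sdc
with the pool on sdc1 - would be something like:

  # back up anything you care about first
  sudo parted /dev/sdc print
  #   parted should notice the backup GPT header no longer sits at the
  #   end of the (now larger) device and offer to fix it
  sudo parted /dev/sdc resizepart 1 100%
  #   only if your parted has resizepart; otherwise rm and re-create
  #   partition 1 with the same start sector and the new end
  sudo zpool online -e NAS sdc1
  #   with autoexpand=on this asks zfs to claim the extra space; an
  #   export/import on its own won't grow anything while the partition
  #   is still the old size

No guarantees any of that survives contact with the RocketRaid firmware,
though.
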
On Apr 14, 2012 4:26 PM, "peram" <pamunthe at gmail.com> wrote:

>
>
> On Apr 14, 10:15 pm, Ben Hodgens <aequi... at gmail.com> wrote:
> > It really depends on the controller capabilities. As for zfs, if you have
> > it running on top of a single zvol and the raid card handles your
> > redundancy, you can manually grow to available device storage. Read the
> man
> > page. ;)
> >
> > Hopefully I understood your question.
>
> We'll just have to make sure then :-)
>
> This is my setup.  The RocketRaid supports online migration/expansion
> of the RAID and is set up like this right now:
>
> HighPoint CLI>query devices
> ID      Capacity    MaxFree     Flag    Status    ModelNumber
>
> -------------------------------------------------------------------------------
> 1/1     2000.31     0           RAID    NORMAL    ST32000542AS
> 1/2     2000.31     0           RAID    NORMAL    ST32000542AS
> 1/3     2000.31     0           RAID    NORMAL    ST32000542AS
> 1/4     2000.31     0           RAID    NORMAL    ST32000542AS
> 1/5     2000.31     0           RAID    NORMAL    ST32000542AS
> 1/6     2000.31     0           RAID    NORMAL    ST32000542AS
> 1/7     2000.31     2000.31     SINGLE  SPARE     ST32000542AS
> 1/8     2000.31     0           RAID    NORMAL    ST32000542AS
> 1/9     2000.31     0           RAID    NORMAL    ST32000542AS
> 1/10    2000.31     0           RAID    NORMAL    ST32000542AS
> 1/11    160.04      0           SINGLE  LEGACY    Maxtor 6Y160M0
> 1/12    500.11      0           SINGLE  LEGACY    SAMSUNG HD502HJ
>
> -------------------------------------------------------------------------------
> HighPoint CLI>
>
> As you can see, there are 9 disks of 2 TB each in a raid6 setup, which
> should yield 14 TB of storage.  If I check what Ubuntu thinks I have, I
> get this:
>
> peram@ubuntu:/$ sudo fdisk -l /dev/sdc
>
> WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util
> fdisk doesn't support GPT. Use GNU Parted.
>
>
> Disk /dev/sdc: 14002.2 GB, 14002197364736 bytes
> 256 heads, 63 sectors/track, 1695687 cylinders
> Units = cylinders of 16128 * 512 = 8257536 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x00000000
>
>   Device Boot      Start         End      Blocks   Id  System
> /dev/sdc1               1      266306  2147483647+  ee  GPT
>
> The status of the pool is this:
>
> peram@ubuntu:/$ sudo zpool status NAS
>  pool: NAS
>  state: ONLINE
> status: One or more devices has experienced an error resulting in data
>        corruption.  Applications may be affected.
> action: Restore the file in question if possible.  Otherwise restore the
>        entire pool from backup.
>   see: http://zfsonlinux.org/msg/ZFS-8000-8A
>  scan: scrub repaired 5.50K in 9h58m with 1 errors on Sat Apr 14
> 04:54:42 2012
> config:
>
>        NAME        STATE     READ WRITE CKSUM
>        NAS         ONLINE       0     0     1
>          sdc1      ONLINE       0     0     3
>
> errors: 1 data errors, use '-v' for a list
>
> And zpool list reports this:
>
> peram@ubuntu:/$ sudo zpool list
> NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
> NAS   10.9T  9.61T  1.27T    88%  1.00x  ONLINE  -
>
> The autoexpand property is set to on for the pool:
>
> peram@ubuntu:/$ sudo zpool get all NAS
> NAME  PROPERTY       VALUE       SOURCE
> NAS   size           10.9T       -
> NAS   capacity       88%         -
> NAS   altroot        -           default
> NAS   health         ONLINE      -
> NAS   guid           8615982165166504382  default
> NAS   version        28          default
> NAS   bootfs         -           default
> NAS   delegation     on          default
> NAS   autoreplace    off         default
> NAS   cachefile      -           default
> NAS   failmode       wait        default
> NAS   listsnapshots  off         default
> NAS   autoexpand     on          local
> NAS   dedupditto     0           default
> NAS   dedupratio     1.00x       -
> NAS   free           1.27T       -
> NAS   allocated      9.61T       -
> NAS   readonly       off         -
> NAS   ashift         0           default
>
> I've tried to export and then import the pool again, but the reported
> pool size remains the same.
>
> So to sum it up, my disk device is bigger than what I have available
> in my zpool.  What have I forgotten to do?
>
> regs,
>
> peram
>
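
One more thing to watch once the partition is grown: zpool list reports
sizes in TiB, so if this works you should expect to see roughly

  14002197364736 bytes / 1024^4 = ~12.7T

rather than a round 14T - the 10.9T you see now is just the old, smaller
partition measured the same way (give or take some zfs overhead).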

