Cannot expand my zpool. Need help.

Status
Not open for further replies.

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
A1) The ZFS developers never envisioned a hard drive that would grow.

A2) ZFS neither concatenates nor stripes vdevs; it uses algorithms to decide where the next blob of data should go.
 

Vijay

Dabbler
Joined
Mar 5, 2014
Messages
18
RAID + ZFS can and has caused data loss. I'm really not sure why we have to spell it out to such an extent when the manual makes it abundantly clear that you should NOT run ZFS on RAID. If you do any kind of searching of the internet, even non-FreeNAS sites, you'll find that everyone says the same thing: ZFS + RAID = "admin is an idiot".
Cyberjock,
I have to respond to this, sorry.

You are barking up the wrong tree, and you are deviating from my original issue. Read the title of the thread; it says nothing about RAID. So why are you annoyed at me for trying to give you a use case? There are other cases where this can be done similarly, for example a SAN frame where you can grow a LUN from 100GB to 300GB. I wanted to know whether a FreeNAS pool can grow when that is done. When FreeNAS (ZFS) cannot do it, you seem to have found something else to complain about, saying that I am repeating the same thing again. The answer should simply have been that ZFS is not designed to do that. I explained why I asked the question: it worked for someone else, so I thought it could be a bug. That is why I started the thread.

Thanks anyway. But I thought this needed an explanation; my forum post was legitimate, and I still don't think I deserve this type of treatment from you.
 

Vijay

Dabbler
Joined
Mar 5, 2014
Messages
18
A1) The ZFS developers never envisioned a hard drive that would grow.

A2) ZFS neither concatenates nor stripes vdevs; it uses algorithms to decide where the next blob of data should go.


A2 => Isn't it true that the algorithm will try to balance writes across all the available vdevs, so that after many writes it ends up behaving like a stripe? I have done my own testing, adding a new vdev to an existing vdev (before adding, both vdevs were empty). After that, all my read/write I/Os went equally to both disks. This made me believe the algorithm tends to create a stripe (balanced I/O) in the end.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You are barking up the wrong tree, and you are deviating from my original issue. Read the title of the thread; it says nothing about RAID. So why are you annoyed at me for trying to give you a use case? There are other cases where this can be done similarly, for example a SAN frame where you can grow a LUN from 100GB to 300GB. I wanted to know whether a FreeNAS pool can grow when that is done. When FreeNAS (ZFS) cannot do it, you seem to have found something else to complain about, saying that I am repeating the same thing again. The answer should simply have been that ZFS is not designed to do that. I explained why I asked the question: it worked for someone else, so I thought it could be a bug. That is why I started the thread.

Cyberjock has taken a long story and distilled it down to the root cause. You are doing something that is outside the design parameters. ZFS and FreeNAS do not do what you want, at least not automatically. The only people who run into problems like this are people who disregard ZFS design guidelines and attempt to do things like what you're doing. Cyberjock is perhaps a bit more intolerant of that than I am, but fundamentally I agree with him. ZFS is designed to be your RAID controller. It is not intended to run on top of a lower level RAID controller. It is not intended to be used on a SAN.

You can certainly expand your disk, but to do it you may have to do some manual work. No, I am not going to help you do it.
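For what it's worth, the rough shape of that manual work, assuming the underlying device (the LUN or vmdk) has genuinely been grown outside of ZFS, is something like the following; "tank" and "da1" are placeholder names, not anything from this thread:

# Let the pool grow into new device space automatically from now on.
zpool set autoexpand=on tank

# Bring the grown device online with the expand flag so ZFS notices
# the new size.
zpool online -e tank da1

# Confirm the extra capacity actually showed up.
zpool list tank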

A2 => Isn't it true that the algorithm will try to balance writes across all the available vdevs, so that after many writes it ends up behaving like a stripe? I have done my own testing, adding a new vdev to an existing vdev (before adding, both vdevs were empty). After that, all my read/write I/Os went equally to both disks. This made me believe the algorithm tends to create a stripe (balanced I/O) in the end.

That bears almost no resemblance to your original question:

Q2: No way to concatenate vdev devices, it only does stripe? Is that correct?

The answer to your question is: no, that is NOT correct. In a pessimistic situation, where one vdev is 100% full and the other is 0% full, vdev writes are effectively "concatenated". In an ideal situation, where both vdevs are 0% full, writes are effectively "striped". In the real world, where no pool meets either of those conditions except under rare circumstances, the algorithm ends up doing something intelligent in between. Given random usage patterns, the load may eventually appear to be roughly balanced. However, striping has a very specific meaning in the storage world, as does concatenation, and ZFS does not actually do either one.
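If you want to watch what the allocator actually does on your own pool, the standard tools will show you; a quick sketch, with "tank" as a placeholder pool name:

# Per-vdev capacity and free space -- the allocator weights new writes
# toward the vdevs with more free space.
zpool list -v tank

# Per-vdev I/O statistics, refreshed every 5 seconds, while a workload runs.
zpool iostat -v tank 5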
 

eraser

Contributor
Joined
Jan 4, 2013
Messages
147
One way you could do this is to create a second ZFS pool using a new larger disk (vmdk). Then use "zfs send" to send the data from your smaller pool and use "zfs receive" to accept data into your new larger pool.

This post has some zfs send/receive examples: http://forums.freenas.org/index.php?threads/9-2-non-native-block-size-error.16883/page-2#post-90222

After that is done you can destroy your original (smaller) pool.
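A minimal sketch of that migration, using the placeholder pool names "oldpool" and "bigpool" (adjust for your own setup), would be roughly:

# Take a recursive snapshot of the existing (smaller) pool.
zfs snapshot -r oldpool@migrate

# Send everything, including child datasets and properties, into the
# new, larger pool.
zfs send -R oldpool@migrate | zfs receive -F bigpool

# Only after verifying the data on the new pool:
zpool destroy oldpool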

Since you are running VMware, you can take a VMware snapshot first in case something goes wrong (provided you have enough free space).

But ideally you would have more than one disk (vmdk) per vdev in a ZFS pool. That makes it easier to increase the size of a pool by replacing one disk in a vdev at a time.
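Roughly, the replace-one-disk-at-a-time approach looks like this; "tank", "da1" and "da4" are placeholder names:

zpool set autoexpand=on tank      # let the pool grow once all disks are bigger
zpool replace tank da1 da4        # copy da1's data onto the larger da4
zpool status tank                 # wait for the resilver to finish
# Repeat for each remaining disk in the vdev; the extra capacity becomes
# usable once every disk in that vdev has been replaced.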
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
And that does correctly reflect a real-world scenario. But a 100GB disk that is suddenly a 300GB disk just doesn't happen in the real world without intervention. I mean, if it did happen just like that in the real world, how come my 2TB disks haven't automagically become 10TB disks? Everyone could use some extra disk space. ;)

Edit: You could even simply attach a second, larger disk as a mirror, let it resilver, then break the mirror by removing the smaller disk. ;)
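Sketched out with placeholder names ("tank" for the pool, "da1" for the small disk, "da2" for the larger one), that would be roughly:

zpool attach tank da1 da2     # turn the single disk into a two-way mirror
zpool status tank             # wait until the resilver completes
zpool detach tank da1         # drop the smaller disk, leaving only da2
zpool online -e tank da2      # expand into the larger disk's extra space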
 