Cannot expand my zpool. Need help.

Status
Not open for further replies.

Vijay

Dabbler
Joined
Mar 5, 2014
Messages
18
Here is my problem. I created this scenario in my test-lab environment on an ESX server, but I want to use the same approach later with a SAN array. If someone can help, it would be much appreciated. I have read all the threads I could find without resolving my issue.


I started with a 100GB disk (/dev/da1) and created a zpool, zp1.
I then expanded the disk to 300GB and want to expand the zpool to 300GB, but I can never get that to work. Here are the data.

FreeNAS Version 9.2.1.2

Gather info:

Code:
~# zpool list
NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
zp1   97.5G  2.37M  97.5G   0%  1.00x  ONLINE  -
~# zpool get autoexpand
NAME  PROPERTY    VALUE  SOURCE
zp1   autoexpand  on     local
~# zpool status
  pool: zp1
 state: ONLINE
  scan: none requested
config:

        NAME                                          STATE   READ WRITE CKSUM
        zp1                                           ONLINE     0     0     0
          gptid/da716bc8-ab2c-11e3-a758-0050569fa059  ONLINE     0     0     0


Expand the size (the following completed with no errors):

Code:
~# zpool online -e zp1 gptid/da716bc8-ab2c-11e3-a758-0050569fa059


The zpool still shows the same size:

Code:
~# zpool list
NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
zp1   97.5G  2.37M  97.5G   0%  1.00x  ONLINE  -
Other things I tried:
- Reboot.
- Delete the zpool from the GUI, reboot, and auto-import.

The GUI shows the correct size of the disk under disk info, but the zpool does not get expanded. None of this seems to help solve the puzzle. As you can see, my partition size is good, but my zpool does not get expanded to the right size.
The following export/import attempt ended with an error (I probably have the syntax wrong, but auto-import from the GUI after an export does work):

Code:
[root@freenas] ~# zpool export zp1
[root@freenas] ~# zpool import zp1
cannot mount '/zp1': failed to create mountpoint
cannot mount '/zp1/.system': failed to create mountpoint
cannot mount '/zp1/.system/cores': failed to create mountpoint
cannot mount '/zp1/.system/samba4': failed to create mountpoint
cannot mount '/zp1/.system/syslog': failed to create mountpoint
[root@freenas] ~#
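For reference, the usual in-place grow sequence on FreeBSD looks something like the sketch below. This is a hedged sketch, not a verified fix: the device name (da1) and partition index are taken from this thread's layout, where partition 1 on a stock FreeNAS disk is swap and partition 2 is the ZFS data partition.

Code:
# Sketch only -- da1 and partition index 2 are assumptions from this thread.
# 1. Re-read the grown disk and repair the backup GPT header at the new end:
gpart recover da1
# 2. Grow the ZFS data partition (index 2) into the new free space:
gpart resize -i 2 da1
# 3. Ask ZFS to claim the new space on that vdev:
zpool online -e zp1 gptid/da716bc8-ab2c-11e3-a758-0050569fa059
# 4. Verify:
zpool list zp1

If the pool still reports the old size after this, the vdev label may simply not have been re-probed; an export followed by a GUI auto-import is one way to force that.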
Please help.
 

Yatti420

Wizard
Joined
Aug 12, 2012
Messages
1,437

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Ok.. so explain to me how you "expanded" a disk from 100GB to 300GB...
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
I gather that the OP expanded a VMDK within VMware


Sent from my phone
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Well, if that's the case, that's outside the expected design for ZFS. There's no real-world equivalent to expanding a vmdk. So I'd consider the question moot, since you can't really do "in the real world" what you are trying to do in the VM. So "works as expected" is the most appropriate answer.
 

Vijay

Dabbler
Joined
Mar 5, 2014
Messages
18




Yes, I tried all of those. Especially the first link, which shows pretty much my exact steps. One of the missing commands was the import command with the -R option (which I did via GUI auto-import, I guess). Now that I have tried "zpool export zp1 ; zpool import -R /mnt zp1", that did not help either.

The point is that FreeBSD (FreeNAS) recognizes the disk correctly now. The gpart show command and the GUI disk view both show the disk size correctly at 300GB, and partition number 2 correctly shows the new size. But the zpool is still small. I could delete the pool and re-create it at the new size, but that is not what I want. I do not want to lose my data.

Did this functionality work before? I have seen some users report success. Not anymore?
 

Vijay

Dabbler
Joined
Mar 5, 2014
Messages
18
Ok.. so explain to me how you "expanded" a disk from 100GB to 300GB...


Using V
Well, if that's the case, that's outside the expected design for ZFS. There's no real-world equivalent to expanding a vmdk. So I'd consider the question moot, since you can't really do "in the real world" what you are trying to do in the VM. So "works as expected" is the most appropriate answer.


Cyberjock,

This does exist in the real world when we use a SAN or RAID/disk arrays, where we can expand the size of a disk at a later time. That is what can be emulated in an ESX server for this test as well.

Here is the real world problem for me.

FreeNAS/ZFS is very good for performance, compression, encryption, etc.

For me, ZFS is missing a major disk-management feature: re-striping when you add a new disk/vdev. ZFS users seem to have a lot of storage, so they back up, create a new pool with the new disks, and move on. But that is a lot of pain, and I am a home user on a budget with a smaller number of disks. ZFS is not going to change for a while, although this is an essential feature that every disk-management system should have (even RAID controllers have it).

To overcome this problem, there are two options I can think of.

1. Do the disk protection using a RAID controller (I know you guys don't like it, but that performance is OK with me), and grow the array by adding disks. The RAID array can re-stripe and make the presented disk device larger. (This is what I was trying to simulate in this thread.)

2. Add a disk, use the RAID controller to expand the RAID, and present a new device every time I add a new disk. This way FreeNAS gets to see a new vdev, and I can add it to the zpool and get it expanded. But there is a problem: zpool by default stripes the new vdev with the existing one. I do not want it striped; I want it concatenated, because I already striped it in my RAID controller. If ZFS stripes it again, this will reduce my performance. So this leads me to ask: is there any option to turn off striping of vdevs in FreeNAS (and concatenate instead)? That would also solve my problem, and I think it is the best option for me.
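On the striping question: ZFS has no concat/linear mode for top-level vdevs; zpool add always extends the pool with a vdev that new writes are dynamically striped across. A hedged sketch (the device name da2 here is hypothetical):

Code:
# Hypothetical sketch: extend an existing pool with a second vdev.
# New allocations are dynamically striped across BOTH vdevs;
# there is no flag to request concatenation instead.
zpool add zp1 da2
zpool status zp1    # now lists two top-level vdevs under zp1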

Any thoughts welcome..
 

Vijay

Dabbler
Joined
Mar 5, 2014
Messages
18
Let me give you a bit more information. (Forget about how I expanded the disk. The bottom line is that I have a disk with correctly sized partitions, but the zpool won't grow to the size of the partition, even though many have suggested this should work.)

Code:
[root@freenas] ~# zpool get autoexpand vol1
NAME  PROPERTY    VALUE  SOURCE
vol1  autoexpand  on      local
[root@freenas] ~# zpool status vol1
  pool: vol1
state: ONLINE
  scan: none requested
config:
 
        NAME                                          STATE    READ WRITE CKSUM
        vol1                                          ONLINE      0    0    0
          gptid/fb33b0d5-ab8d-11e3-aaa4-000c299c2514  ONLINE      0    0    0
 
errors: No known data errors
 
[root@freenas] ~# gpart show da3
=>      34  629145533  da3  GPT  (300G)
        34        94      - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  624951135    2  freebsd-zfs  (298G)
 
[root@freenas] ~# zpool list da3
NAME  SIZE  ALLOC  FREE    CAP  DEDUP  HEALTH  ALTROOT
vol1  97.5G  636K  97.5G    0%  1.00x  ONLINE  /mnt
 
[root@freenas] ~# zpool online -e vol1 gptid/fb33b0d5-ab8d-11e3-aaa4-000c299c2514
[root@freenas] ~# zpool export vol1
[root@freenas] ~# zpool import -R /mnt vol1
[root@freenas] ~# zpool list
 
[root@freenas] ~# zpool list vol1
NAME  SIZE  ALLOC  FREE    CAP  DEDUP  HEALTH  ALTROOT
vol1  97.5G  732K  97.5G    0%  1.00x  ONLINE  /mnt


As you can see, the zpool never grows to 300GB no matter what commands I run. What is the trick? What am I missing? There must be a way to do this, rather than saying there is no real-life scenario.

Are we looking at a bug?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
This do exist in real world when we use SAN, or RAID / Disk arrays. Where we can expand the size of the disk at a later time. Which can be emulated in ESX server for this test as well.

Ok.. so how does a 100GB disk magically become a 300GB disk without any kind of disk replacement? Keep in mind you're going to be labeled if you make any mention of using a RAID controller with ZFS, since ZFS is designed NOT to be used with RAID controllers.

For me zfs is missing a major feature in disk management item, that is to re-stripe when we add a new disk/vdev. ZFS users seems to have lot of storage so, they backup and create a new pool with new disk-pool and move on. But that is lot of pain, and I am home user with smaller number of disks in a budget. zfs is not going to change for a while, although this is a essential feature that every disk management system should have (Even raid cotrollers have them).

Sure, it's not the most ideal situation for home users. ZFS was never really expected to be for home users. But, thanks to the miracles of open source, we all get to enjoy it if we are willing to pay for it. You either accept those risks, accept that things may not be ideal, or find a different product. FreeNAS (and ZFS) isn't for everyone. But I fail to see the point in choosing FreeNAS and then complaining about problems that you (and iX) can't solve. If your choice isn't making you happy, you can use Windows/Linux/Mac/whatever you normally use. Plenty of people show up here and walk away, admitting FreeNAS isn't for their situation. Sometimes it's stupid to use a 10 lb sledgehammer to kill an ant.

To over come this problem there are 2 options that I can think of.

1. Do the disk protection using a Raid controller (I know you guys don't like it, but that performance is ok with me), and grow the array by adding disks. Raid array can re-stripe, and make the presented disk device larger. (Which is what I was trying to simulate in this thread)

Yep.. and RAID + ZFS = fail. Sorry, but this is a use case you've created based on ill-conceived ideas. So I'll still argue you're creating the problem yourself by using ZFS outside its design expectations. The advice against ZFS on RAID is not just about performance. It's like having 2 bosses, each telling you what to do, and expecting you to do 8 hours' worth of work for each of them in 8 hours. Just check out the idiot.. user.. that bought a $10k server from some company that put ZFS on a hardware RAID and couldn't believe the corruption that occurred. You've done a great job of gutting ZFS's self-healing by putting it on RAID. So congrats to you.. you win a cookie.

2. Add a disk, use the RAID controller to expand the RAID, and present a new device every time I add a new disk. This way FreeNAS gets to see a new vdev, and I can add it to the zpool and get it expanded. But there is a problem: zpool by default stripes the new vdev with the existing one. I do not want it striped; I want it concatenated, because I already striped it in my RAID controller. If ZFS stripes it again, this will reduce my performance. So this leads me to ask: is there any option to turn off striping in FreeNAS (and concatenate instead)? That would also solve my problem, and I think it is the best option for me.

I won't say it again.. RAID + ZFS = fail.

Frankly, if I were just a little less happy today I'd lock this thread right now because there is absolutely nothing to be gained by continuing a discussion of any kind with someone that has chosen to use ZFS on RAID. PERIOD. It's ignorance to the extreme and sets a horrible horrible example.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Ok.. so how does a 100GB disk magically become a 300GB disk without any kind of disk replacement.

you hit it with a sledgehammer until it's not only seeing double, but actually seeing triple.

honestly, what a n00b.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I had a stacker compression card back in the early 90s! That sucker was 16-bit ISA!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It's compatible with PCI-e. I promise. Just use a Dremel to make it fit.
 

Vijay

Dabbler
Joined
Mar 5, 2014
Messages
18
I won't say it again.. RAID + ZFS = fail.


Thank you for the replies, everyone. (A bit rude to a noob trying to use FreeNAS at home, but OK. So my understanding now is that zpool expansion doesn't work, and RAID+ZFS has issues.)

You experts must have seen a lot of failures behind ZFS+RAID, so I accept your expert opinion. As I said, I can take a performance hit, but I cannot take data loss. If we are talking about data loss, I am going to run away from that combination. Can you clarify: if I use RAID + ZFS, should I expect data loss as well?

This all comes down to my last resort:
Is there any option to concatenate vdevs instead of ZFS deciding to stripe them? I haven't seen one, but I am trying to find out whether it can be done. Can you help answer that?

If that does not exist either, how do you expand a zpool by adding disks? Are you going to tell me backup and restore? What do you do for this basic requirement? I want to start with a 2+1 RAIDZ and keep growing it into 3+1, 4+1, etc. when I need to. Is backup/restore the only option? I'd like to know your opinion. (I heard you saying FreeNAS is not for me; is that the answer?)
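For context on the growth question: at the time of this thread, the two supported ways to grow a pool were adding a whole new vdev, or replacing every disk in an existing vdev with a larger one, one resilver at a time; a 2+1 RAIDZ cannot be widened into a 3+1. A hedged sketch of the replace-in-place path, with a hypothetical pool name (tank) and device names:

Code:
# Hypothetical sketch: grow a 3-disk RAIDZ1 vdev by swapping in larger disks,
# one at a time, waiting for each resilver to finish before the next swap.
zpool set autoexpand=on tank
zpool replace tank da1 da4   # then: zpool status tank, wait for resilver
zpool replace tank da2 da5   # wait for resilver
zpool replace tank da3 da6   # wait for resilver
# Once the last small disk is gone, the vdev grows automatically;
# if not, nudge each member with: zpool online -e tank da4  (etc.)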
 

Yatti420

Wizard
Joined
Aug 12, 2012
Messages
1,437
My mistake, I didn't notice you had a VM setup..

http://doc.freenas.org/index.php/Hardware_Recommendations
NOTE: instead of mixing ZFS RAID with hardware RAID, it is recommended that you place your hardware RAID controller in JBOD mode and let ZFS handle the RAID. According to Wikipedia: "ZFS can not fully protect the user's data when using a hardware RAID controller, as it is not able to perform the automatic self-healing unless it controls the redundancy of the disks and data. ZFS prefers direct, exclusive access to the disks, with nothing in between that interferes. If the user insists on using hardware-level RAID, the controller should be configured as JBOD mode (i.e. turn off RAID-functionality) for ZFS to be able to guarantee data integrity. Note that hardware RAID configured as JBOD may still detach disks that do not respond in time; and as such may require TLER/CCTL/ERC-enabled disks to prevent drive dropouts. These limitations do not apply when using a non-RAID controller, which is the preferred method of supplying disks to ZFS."
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
RAID+ZFS can and has caused data loss. I'm really not sure why we have to spell it out to such an extent when the manual makes it abundantly clear that you should NOT run ZFS on RAID. If you do any kind of searching of the internet, even non-FreeNAS sites, you'll find that everyone says the same thing: ZFS + RAID = "admin is an idiot".

In the past I've even gone so far as to tell people that admitted to doing ZFS on RAID that if I were their employer and they did that with data at my company, I'd fire them on the spot. There's a point at which you should know what completely idiotic things not to do. Things that are basic and elementary, such as "water + electronics = bad", don't need to be said. ZFS + RAID isn't much different. Most every site that discusses ZFS says you shouldn't run ZFS on RAID, and if my employee couldn't figure that out from the TONS and TONS of sites out there on ZFS, then I don't want that person working for me. I expect them to be knowledgeable and do at least some basic homework, even if I don't expect them to be an expert. The abundance of websites that say RAID + ZFS = fail makes it a sign of ignorance, stupidity, or a lack-of-G.A.F. factor, and whichever of those categories the individual falls under, they clearly aren't setting their bar very high.

You can't grow the pool the way you want. ZFS doesn't work that way. If you'd take the time to read my noobie guide (link in my sig), you'll find tons of "dos and don'ts" for ZFS. I wrote that guide so I wouldn't have to answer the questions you are asking. I was hoping people would do their homework, stumble on one of my 9000 posts here, see that big fat red line of text called my signature, and actually read it. But alas, too many just don't care, as they are simply searching for a free lunch. Not saying you want a free lunch, since I don't have an opinion yet.

FreeNAS and ZFS aren't for everyone. You'll have to do your own research to come to that conclusion. Just as you couldn't tell me whether a Chevy or a Ford is a better vehicle for me, there's no reason to think I can give you an answer with any kind of informed opinion.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
And yes, I get rude sometimes. Should I be nice and spend my day answering the same questions over and over, when I already answer them, unasked, in a guide I've spent MANY hours developing, for free, for people like you who are new to FreeNAS?
 

Vijay

Dabbler
Joined
Mar 5, 2014
Messages
18
My mistake, I didn't notice you had a VM setup..
Thank you..
Yes, I am running on a VM. My home ESXi setup does not support PCI pass-through. I need to run my file server (FreeNAS) inside ESX, as I have other uses for this server, so I cannot dedicate the box to FreeNAS. So I have to find a way to present disks to the FreeNAS server through ESX. And without a RAID card, ESX will not perform well either. That is why I am seeking help from the forum.
 

Vijay

Dabbler
Joined
Mar 5, 2014
Messages
18
And yes, I get rude sometimes.


Now be nice... I decided to open the thread after looking for info and not finding it.

Don't bark at me this time. Let's get back to the original question of my thread; I'd like to see closure, for my benefit and for others who read it.

I read a lot of threads and did a lot of googling on ZFS before I posted my question, and I was convinced that it is not working for me even though it works for others. Here is a thread that says it worked for him:
http://forums.freenas.org/index.php?threads/expanding-zfs-pool-vmware.16202/#post-85432

So I thought it should work for me too; unless I am hitting a bug, it should. That is why I started the thread. I know this topic may have been beaten to death, but we need to close it.
Again, I respect everyone's opinion that ZFS+RAID shouldn't be used, which I understand now. However, I'd like to see some closure on my original question. Can Cyberjock give two firm answers and close this discussion? (Yes/No answers)

Q1: zpool expansion on an expanded disk doesn't work, and that is by design. Is that correct?
Q2: There is no way to concatenate vdev devices; ZFS only stripes. Is that correct?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You're not likely to get it. It is a recipe for disaster and tears. You are not the first person to come through here who presumes there must be a way to do what you want with your arbitrary hardware.

The Top Gear guys made a Space Shuttle out of a Reliant Robin. Unfortunately, that product failed to reach orbit. Just because you call something a Space Shuttle does not make it so.

I have watched carefully and done much experimentation with ESXi and virtualization in an effort to accomplish the sort of thing that you appear to want to do, though for somewhat different purposes. It is a little tricky to make it work, even with the right hardware. I've documented how to make it work with the right hardware. I own no stock in Supermicro or any other server manufacturer. I have no vested interest in telling you to do this with appropriate hardware, other than my desire to see you not lose your pool.
 