FreeNAS Not Using All Available Space

Status
Not open for further replies.
Joined
Oct 23, 2015
Messages
8
Hi Guys,
I feel as though I'm missing something simple. I have 2×3 TB drives in a FreeNAS 9.2 box. The size of the drive is recognized as 2.7 TB (GPT), but the iSCSI connection in Windows only sees it as 2 TB. What things should I check to find out what's wrong? Thanks everyone.
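As an aside, the gap between the "3 TB" on the box and the 2.7 TB FreeNAS reports is just the usual decimal-vs-binary unit difference, separate from the 2 TB iSCSI problem below. A quick sanity check of the arithmetic:

```python
# Drives are marketed in decimal units (1 TB = 10**12 bytes), while most
# OS tools report binary units (1 TiB = 2**40 bytes). The same 3 TB drive:
marketed_bytes = 3 * 10**12
in_tib = marketed_bytes / 2**40
print(f"{in_tib:.2f} TiB")  # prints "2.73 TiB"
```

So a "3 TB" drive showing up as roughly 2.7 is expected and not itself a sign of lost space.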
 

Attachments

  • freenas space.JPG (33.7 KB)
  • freenas gpart show.JPG (47.6 KB)

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Perhaps you only configured a 2TB virtual disk?

In any case, you really can't allocate much more space without fragmentation becoming a major issue. A ZFS pool used for VM storage will become increasingly fragmented with time, and 60% is about as full as it should be made unless you are particularly masochistic and like slow block allocation.
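To put numbers on that rule of thumb: assuming the pool's usable capacity is roughly 2.7 TiB, as the screenshots suggest (an assumption; the exact figure depends on the pool layout), the 60% ceiling works out to:

```python
# Hypothetical figures: usable pool capacity taken from the screenshots.
usable_tib = 2.7       # assumed usable pool capacity in TiB
safe_fraction = 0.60   # rule-of-thumb fullness ceiling for VM/iSCSI pools
max_alloc_tib = usable_tib * safe_fraction
print(f"{max_alloc_tib:.2f} TiB")  # prints "1.62 TiB"
```

In other words, a 2 TB extent on this pool is already past the point where block allocation starts to get slow.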
 
Joined
Oct 23, 2015
Messages
8
I've included pics. Doesn't the 770 GB available mean that the volume was configured for the entire disk?

This drive is connected to a Windows machine for backup storage. Is there a way to avoid fragmentation?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
No, fragmentation is an inherent side effect of a copy-on-write filesystem...
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
... maybe go into the iSCSI configuration area and take a look? Sharing->Block->Extents->${yourextent}->Device and see how big it thinks it is.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Try actually doing what I said, click Sharing, then Block, then Extents, not "View Extents", and then look in the Device box. It'll say something like "poolname/zvolname (size GiB)".
 
Joined
Oct 23, 2015
Messages
8
Under Sharing, there is no 'Block' category. I can only assume the GUI is a bit different in 9.2.


I appreciate your help, can you please cut down on the attitude?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Under sharing, there is no 'block' category.

Crap. I thought that was introduced a long time ago.

I appreciate your help, can you please cut down on the attitude?

Countersuggestion: don't mistake terse for attitude. I actually gave you a recipe that had been checked to do what was needed, because the information is NOT available on the "View Extents" screen, as nonsensical as that may seem. Then I asked you to actually do what I originally asked.

I don't have any 9.2.1.7 lying around. I'll spin up a VM and take a look but it may take a bit.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Kay, so yes, I screwed up, back in 9.2.1.7 it's still sitting over in Services -> iSCSI -> Extents -> ${yourext} -> Device ... but not under View Extents.

Wait. Are you using a file backed extent? Because that's the only time I see a reference to 0 for size being allowed. In which case, the answer becomes, "please look at the size of the file you're using for backing store," I think. If you do that, then there's also the possibility that there's maybe some 32-bit int issue in there which is limiting the size to 2TB, and also the inevitability that there's no fix for this because zvols are the way you're expected to implement iSCSI.
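The observed 2 TB ceiling is at least consistent with the 32-bit speculation above: with 512-byte sectors, an unsigned 32-bit sector count tops out at exactly 2 TiB. This is just a sanity check of the arithmetic, not confirmation that this is the actual bug:

```python
# If the extent size were tracked as an unsigned 32-bit count of 512-byte
# sectors, the largest representable extent would be:
sector_size = 512
max_sectors = 2**32                      # unsigned 32-bit limit
limit_bytes = max_sectors * sector_size
print(limit_bytes == 2**41)              # prints "True" (exactly 2 TiB)
print(f"{limit_bytes / 2**40:.0f} TiB")  # prints "2 TiB"
```

Which would line up suspiciously well with Windows seeing exactly 2 TB.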

And at this point I'll also reiterate that the fragmentation issue is a significant limiting factor in your options here. At 2TB, you've already created a device that's larger than recommended for your pool size.

However, since you're using 9.2.1.*, I'll note that there is a possibility that you COULD have an option to use your two 3TB drives as iSCSI targets; you'd have no redundancy and it wouldn't be using ZFS, and there'd be no possibility of future FreeNAS upgrades, but you'd have 6TB of space.
 
Joined
Oct 23, 2015
Messages
8
I upgraded to 9.3 and now increasing the extent size is immediately reflected in Windows. Looks like it was a bug. Thanks for your help, jgreco. I'm going to take your advice and avoid extending the size anyway. At least we got to the bottom of the issue.
 