Total usable space


joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
Out of curiosity, what is the model number of the hard drives?
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
I also have eight WD60EFRXs..., but my normal output looks different since I have imposed
quota=30TB
and
refquota=27TB

I imposed them because I did not want to have any false expectation that I can use 99.99% of the raw storage. Without them I am missing over 2TB too. See the code below for details
Code:
[root@freenas /]# zpool list mypool_8x6TB
NAME           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
mypool_8x6TB  43.5T  20.2T  23.3T         -      -    46%  1.00x  ONLINE  /mnt
[root@freenas /]#
[root@freenas /]# zfs list -t snapshot | fgrep -v freenas-boot/
NAME  USED  AVAIL  REFER  MOUNTPOINT
[root@freenas /]#
[root@freenas /]#
[root@freenas /]# zfs list mypool_8x6TB
NAME           USED  AVAIL  REFER  MOUNTPOINT
mypool_8x6TB  14.4T  12.6T  14.4T  /mnt/mypool_8x6TB
[root@freenas /]#
[root@freenas /]# zfs get all mypool_8x6TB | egrep 'T |M '
mypool_8x6TB  used                  14.4T                  -
mypool_8x6TB  available             12.6T                  -
mypool_8x6TB  referenced            14.4T                  -
mypool_8x6TB  quota                 30T                    local
mypool_8x6TB  refquota              27T                    local
mypool_8x6TB  usedbydataset         14.4T                  -
mypool_8x6TB  usedbychildren        42.1M                  -
mypool_8x6TB  written               14.4T                  -
mypool_8x6TB  logicalused           14.3T                  -
mypool_8x6TB  logicalreferenced     14.3T                  -
[root@freenas /]#
[root@freenas /]# zfs set refquota=none mypool_8x6TB
[root@freenas /]# zfs set quota=none mypool_8x6TB
[root@freenas /]# zfs get all mypool_8x6TB | egrep 'T |M '
mypool_8x6TB  used                  14.4T                  -
mypool_8x6TB  available             15.6T                  -
mypool_8x6TB  referenced            14.4T                  -
mypool_8x6TB  usedbydataset         14.4T                  -
mypool_8x6TB  usedbychildren        42.1M                  -
mypool_8x6TB  written               14.4T                  -
mypool_8x6TB  logicalused           14.3T                  -
mypool_8x6TB  logicalreferenced     14.3T                  -
[root@freenas /]# zfs list mypool_8x6TB
NAME           USED  AVAIL  REFER  MOUNTPOINT
mypool_8x6TB  14.4T  15.6T  14.4T  /mnt/mypool_8x6TB
Most users of this storage just check whether the space utilization is below 100% (they do that everywhere), and it makes no sense for FreeNAS to be an exception to the rule.
 

AMiGAmann

Contributor
Joined
Jun 4, 2015
Messages
106
Without them I am missing over 2TB too.
Do you have a logical explanation for that? I don't see calculation errors in the online calculator, and even if I calculate the size "by hand" I get a similar number.

I can't imagine that there is an overhead of several TBs...
 
Joined
Nov 29, 2015
Messages
2
I'm in a similar situation. I have a pool set up with 2 vdevs of 12x6TB drives, each configured as RAID-Z2. I should be getting about 109TiB of usable space, but FreeNAS is saying 95TiB.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
95TB looks like 2 of these vdevs:

[screenshot: calculator result for a single 12x6TB RAID-Z2 vdev, from the calculator linked below]


https://forums.freenas.org/index.php?threads/zfs-raid-size-and-reliability-calculator.28191/
 
Joined
Nov 29, 2015
Messages
2
That calculator says I should have 85.92 TiB between the two vdevs, but FreeNAS says I have ~95 TiB. Based on the math, I should have ~109 TiB.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
No, the calculator says you should have 2 * 54.57 TiB (or roughly 109 TiB) minus the overheads.
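Here is roughly where the 54.57 TiB per vdev comes from; it's just the decimal-TB to binary-TiB conversion for the 10 data drives in each 12-wide RAID-Z2 vdev (a quick Python sketch, before any ZFS overhead):
Code:
TIB = 2**40
drive_bytes = 6e12             # a "6 TB" drive is 6 * 10^12 bytes, about 5.46 TiB
data_drives = 12 - 2           # a 12-wide RAID-Z2 holds 10 drives' worth of data
per_vdev = data_drives * drive_bytes / TIB
print(round(per_vdev, 2))      # ~54.57 TiB per vdev
print(round(2 * per_vdev, 2))  # ~109.14 TiB for the two vdevs, before overhead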

The usual culprits are snapshots; did you enable them?

For vdevs (like this one) that aren't aligned there's also an added overhead, and from what I've seen in other threads it seems to be very high in some cases.
 

AMiGAmann

Contributor
Joined
Jun 4, 2015
Messages
106
For vdevs (like this one) that aren't aligned there's also an added overhead, and from what I've seen in other threads it seems to be very high in some cases.
What do you mean by "that aren't aligned"? Do you mean the alignment that is relevant for HDDs with 4K sectors (which I do have)?

Code:
zdb -C mypoolname | grep ashift
shows me
Code:
ashift=12
which should be okay.

Nevertheless, those 2 TB are still missing :-(
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Nope, the ashift isn't the problem here. A RAID-Z2 of 8 drives uses 6 drives for data (and 2 for parity), and 6 isn't a power of 2. I've already explained the details twice this week, so I'll let you search for the 2^n + p rule ;)

But in the end there's about 5% more overhead, and unless you change the vdev config (which means destroying the pool) you'll have to live with it.
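If you're curious where that ~5% comes from, here is a rough Python sketch of the allocation rule as I understand it (assuming ashift=12, so 4 KiB sectors, and the default 128 KiB recordsize): the parity for each block is computed per data stripe, and the whole allocation is then padded up to a multiple of p + 1 sectors.
Code:
import math

def raidz_data_fraction(ndisks, parity=2, record=128 * 1024, sector=4096):
    # rough fraction of the allocated sectors that hold actual data for one record
    data = math.ceil(record / sector)                     # data sectors per record
    par = parity * math.ceil(data / (ndisks - parity))    # parity sectors
    alloc = math.ceil((data + par) / (parity + 1)) * (parity + 1)  # pad to multiple of p+1
    return data / alloc

got = raidz_data_fraction(8)   # ~0.711 of raw space ends up usable
ideal = (8 - 2) / 8            # 0.75 if there were no padding
print(f"extra overhead: {1 - got / ideal:.1%}")   # ~5.2%
It's only an estimate (metadata, compression and the slop space change the real numbers), but it lands right around 5% for an 8-wide RAID-Z2.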
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
I think some old threads should be removed and a sticky created for space "issues".

The first version could be a really simple one: It is like that...
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
I was actually thinking about doing a sticky on space/size/whatever when I wrote my previous post... I saw something like 3 or 4 threads just this last week on the pool/root dataset size difference alone... even though I explained everything in the FAQ... I think you'll agree that's too many threads per week on this subject, but we can't really do much about it I guess :rolleyes:
 

AMiGAmann

Contributor
Joined
Jun 4, 2015
Messages
106
a RAID-Z2 of 8 drives uses 6 drives for data (and 2 for parity), and 6 isn't a power of 2
I thought this wasn't important anymore, but I probably misunderstood, sorry.

According to this page a "bad" number of drives should not lead to capacity loss anymore:
If you use the large_blocks feature and use 1MB records, you don't need to adhere to the rule of always putting a certain number of drives in a VDEV to prevent significant loss of storage capacity.

This enables you to create an 8-drive RAIDZ2 where normally you would have to create either a RAIDZ2 VDEV that consists of 6 drives or 10 drives.

Is this large_blocks feature also relevant for FreeNAS? I didn't find any information about it in the forum.

But in the end there's about 5% more overhead, and unless you change the vdev config (which means destroying the pool) you'll have to live with it.
I have no problem destroying the pool if that helps increase the capacity. Do you think a different vdev configuration would be better?
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
It's not relevant for performance (only if you have compression enabled), but the space still suffers from this.

Well, you can always redo it as a 6-drive RAID-Z2, but even without the extra overhead you'll have less usable space than with an 8-drive RAID-Z2.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
I really need to go to bed (yeah, I've a job now, I can't stay up until 5 am like before...) but I'll read that asap ;)
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
Found more info, this time real calculations
http://hardforum.com/showpost.php?p=1041919180

Instead of the archive.org link given there (to a copy), you can read the original thread here
https://lists.freebsd.org/pipermail/freebsd-fs/2013-May/017335.html
https://lists.freebsd.org/pipermail/freebsd-fs/2013-May/017337.html

Moral of the story: AF disks are to blame. And a corollary: as far as the missing space is concerned, one loses less space percentage-wise to overhead with 9 disks in RAID-Z2 than with 8 disks in RAID-Z2.
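Using the same per-block padding rule sketched earlier in the thread (assuming 4 KiB sectors and 128 KiB records), a quick Python comparison backs that up:
Code:
import math

def lost_to_padding(ndisks, parity=2, data=32):    # 32 sectors = 128 KiB / 4 KiB
    par = parity * math.ceil(data / (ndisks - parity))
    alloc = math.ceil((data + par) / (parity + 1)) * (parity + 1)
    usable = data / alloc                    # fraction of raw space holding data
    ideal = (ndisks - parity) / ndisks       # what it would be with no padding
    return 1 - usable / ideal

for n in (8, 9):
    print(f"{n} disks in RAID-Z2: ~{lost_to_padding(n):.1%} of the ideal capacity lost")
# 8 disks: ~5.2%, 9 disks: ~2.0%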

For me that means back to the drawing board: how to redo my storage with a 9th disk...
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
So, I had the time to read the link. Not very useful but at least there's some real data.

I also read the other links. On the hardforum link he doesn't say how he goes from 35.8 TiB to 32.62 TiB in the 12x4TB RAID-Z2 case, and that's the only thing I wanted to know... :rolleyes:

On the last link, this sentence, "Above that is allocation overhead where each block (together with parity) is padded to occupy the multiple of raidz level plus 1 (sectors)", is so unclear... I'll really need to talk to some ZFS devs once and for all to understand everything and get a global PoV on the overhead problem.
 

SirMaster

Patron
Joined
Mar 19, 2014
Messages
241
But in the end there's about 5% more overhead, and unless you change the vdev config (which means destroying the pool) you'll have to live with it.

You don't have to live with it. You can change your dataset's "recordsize" property to 1M and then the overhead will effectively be gone. Assuming you are OK running with the larger recordsize, that is.

I do this on my 12x4TB RAIDZ2 and it gained me back around 2TB lost to the ashift=12 sector padding overhead.
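To put rough numbers on it, here is the same kind of per-block padding estimate for a 12-wide RAID-Z2 with 4 KiB sectors, comparing the default 128 KiB recordsize with 1 MiB (just a Python sketch, ignoring metadata and compression):
Code:
import math

def usable_fraction(ndisks, parity, record, sector=4096):
    data = math.ceil(record / sector)                     # data sectors per block
    par = parity * math.ceil(data / (ndisks - parity))    # parity sectors
    alloc = math.ceil((data + par) / (parity + 1)) * (parity + 1)  # padded allocation
    return data / alloc

for record in (128 * 1024, 1024 * 1024):
    frac = usable_fraction(12, 2, record)
    print(f"recordsize={record // 1024}K: {frac:.1%} of raw (no-padding ideal is {10 / 12:.1%})")
# 128K: ~76.2%, 1024K: ~82.8% -- almost all of the padding overhead disappears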
 