SOLVED 7TB missing????

Status
Not open for further replies.
Joined
Jul 13, 2013
Messages
286
Newly created volume (pool) and filesystems. No data -- no bulk copy from anywhere, and just one new directory and an authorized_keys file added. So there shouldn't be any confusion about space yet.

Why are there two levels of "zzback" with different sizes? The outer one looks like the right size -- what's physically installed is 6x6TB in RAIDZ. So why does the inner zzback have 7TB less available? ("Local" I created myself, and I have created a home directory for a local user in it.)

[Screenshot attached: Clipboard01.jpg]
 
Joined
Jan 9, 2015
Messages
430
You lose 6 TB to the parity disk. There is also some overhead space used by ZFS for metadata and such.
 
Joined
Jul 13, 2013
Messages
286
6x6 is 36, minus 6 for parity is 30; and I would expect the subtraction for parity to already appear at the top level. Isn't that the "volume" (pool)? (I'm much more familiar with Solaris ZFS, so the terminology here sometimes escapes me.)

This may get back to what exact construct each level of that table reports. Is the first zzback the volume (pool), or something else? What is the second zzback? "Local", at least, I recognize as a filesystem; one I created myself.

And *7TB* going missing is wrong for 6TB drives. So any way I look at it the numbers don't add up. (The FreeNAS reports are in 1024^n units whereas the disks are sold in 1000^n units, so I'm used to a 6TB disk being reported as 5.3TB or some such in an OS. So the difference being *7* TiB where the difference in disk sizes is *6TB* is hard to reconcile.)
 

rsquared

Explorer
Joined
Nov 17, 2015
Messages
81
The top level doesn't subtract parity... The conversion from TB (1000^n) to TiB (1024^n) puts your 6 TB disks at more like 5.45 TiB each. Multiply that by 6 disks, and you get the 32.5 TiB shown for the pool.

The second number subtracts 5.45 TiB for one disk's worth of parity, and as DifferentStrokes noted, there's also "overhead space for ZFS for Metadata and such", which in your configuration appears to be about another 1.8 TiB.
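
If you want to check the arithmetic yourself, here's a rough sketch (plain Python, nothing FreeNAS-specific; the real numbers come out slightly lower because the drives aren't exactly 6,000,000,000,000 bytes):

[CODE]
TB = 1000**4   # "marketing" terabyte
TiB = 1024**4  # binary tebibyte

drive_tib = 6 * TB / TiB          # ~5.46 TiB per nominal 6 TB drive

raw_pool = 6 * drive_tib          # ~32.7 TiB raw -- close to the 32.5 TiB shown
after_parity = 5 * drive_tib      # RAIDZ1: one drive's worth of parity, ~27.3 TiB

print(f"per drive:    {drive_tib:.2f} TiB")
print(f"raw pool:     {raw_pool:.2f} TiB")
print(f"after parity: {after_parity:.2f} TiB")
[/CODE]

The gap between that ~27.3 TiB and what the second line actually shows is the metadata/overhead part.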
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Actually I get 5.46 TiB for a 6 TB drive, 32.7 TiB for the whole RAID, and 27.3 TiB for the data space.

Then you need to account for the overheads, and as you're not using 3, 5 or 9 drives the pool isn't aligned, so this overhead is pretty big, something like 5% (at least two other members also see a big overhead due to an unaligned pool).
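
Rough numbers, just to show where the ~7 TiB in the thread title can come from (the 5% is only an estimate, not an exact figure):

[CODE]
raw_tib = 6 * 6e12 / 2**40        # ~32.7 TiB raw for 6x 6 TB drives
data_tib = 5 * 6e12 / 2**40       # ~27.3 TiB after RAIDZ1 parity

overhead = 0.05                   # rough estimate for a non power-of-two data drive count
usable = data_tib * (1 - overhead)

print(f"usable ~ {usable:.1f} TiB, about {raw_tib - usable:.1f} TiB below the raw size")
[/CODE]

That lands around 25.9 TiB usable, roughly 7 TiB below the raw pool size.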
 
Joined
Jul 13, 2013
Messages
286
Talk to me about this "unaligned pool" thing; I've never heard of that before. (I've got 6 disk controller ports on this motherboard, so that's pretty much a given for me, unfortunately.)
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
@David Dyer-Bennet, you can continue to use pure ZFS terminology.

If I may ask, could you please re-create your pool, this time with only 5 disks, and post the same screenshot here? Please... Thank you in advance!

P.S.
I am one of the guys who has more than 1 TiB "missing" in his system, although I think it is just our inability to "properly calculate ZFS overhead".
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Basically you want the number of data drives to be a power of two because the blocks have a power-of-two length. With RAID-Z1 that means 2+1, 4+1 or 8+1 drives. Otherwise you lose space and get lower performance.

With compression enabled (and it's enabled by default on FreeNAS 9.3) the blocks aren't all the same size anymore, so the performance issue goes away, but for some reason you still have the space issue (though I don't know if it's the same percentage as with compression disabled).

As @solarisguy said, the ZFS overhead calculations are very, very complex. I started reading up on the subject, but there's not enough info online to understand everything (besides reading the code, of course...), so I'll have to ask some ZFS devs directly as soon as I have time to dig into all the overheads.
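
Here's a simplified sketch of the block padding effect, if that helps. It assumes 4 KiB sectors (ashift=12), 128 KiB records, no compression and no metadata, so it's only an approximation of what ZFS really does, but it shows why 3, 5 and 9 drives come out "aligned" for RAID-Z1 and 6 doesn't:

[CODE]
import math

def raidz_alloc_sectors(data_sectors, ndisks, nparity):
    """Sectors allocated for one block on a RAIDZ vdev (simplified model:
    data + one parity sector per stripe row, rounded up to a multiple of nparity + 1)."""
    ndata = ndisks - nparity
    rows = math.ceil(data_sectors / ndata)
    total = data_sectors + nparity * rows
    pad = -total % (nparity + 1)
    return total + pad

data_sectors = 128 * 1024 // 4096   # one 128 KiB record = 32 sectors of 4 KiB

for ndisks in (3, 5, 6, 9):
    alloc = raidz_alloc_sectors(data_sectors, ndisks, nparity=1)
    efficiency = data_sectors / alloc
    ideal = (ndisks - 1) / ndisks
    print(f"{ndisks} disks: {efficiency:.1%} of raw space holds data (ideal {ideal:.1%})")
[/CODE]

With 6 disks that gives 80.0% instead of the ideal 83.3%, so roughly 4% of extra loss, in the same ballpark as the ~5% I mentioned.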
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
[...] I would expect the subtraction for parity to already appear at the top level. Isn't that the "volume" (pool)? (I'm much more familiar with Solaris ZFS, so the terminology here sometimes escapes me.)

This may get back to what exact construct each level of that table reports. Is the first zzback the volume (pool), or something else? What is the second zzback? "Local", at least, I recognize as a filesystem; one I created myself. [...]
The first line is the raw size of your RAID-Z (raidz) pool. Think of output from zpool list.

The second line shows the size of the dataset that got created when you created the pool, and that dataset is associated (one to one) with a ZFS filesystem. Think of output from zfs list.
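
If you want to compare the two views from the command line, something like this shows both (a tiny Python 3 sketch wrapping the two commands; you can of course just run zpool list and zfs list directly in a shell):

[CODE]
import subprocess

# Pool view: raw size of all devices, parity included (like the first zzback line).
print(subprocess.run(["zpool", "list", "-H", "-o", "name,size,allocated,free"],
                     capture_output=True, text=True).stdout)

# Dataset view: usable space after parity and overhead (like the second zzback line).
print(subprocess.run(["zfs", "list", "-H", "-o", "name,used,available"],
                     capture_output=True, text=True).stdout)
[/CODE]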
 
Joined
Jul 13, 2013
Messages
286
Given the RAID preferences, it's kind of unfortunate that the hardware likes sizes of 6 (the number of controller ports on many motherboards), 8 (the number of bays you can easily get in a tower case), and then 12 and 24 when you get into actual rackmount servers. At higher numbers, and when you start adding external "shelves", whatever those are, the mutually-prime aspect of the numbers starts to become less annoying, but at the smaller scale I'm working at it's rather a pain.

I don't care about performance that much, but I care a lot about space efficiency, since basically we're doing stuff (not-very-commercial video projects) that rather stresses our storage budget (which is whatever spare change we haven't spent on cameras and lenses). This new server is to back up the old production server more formally (and do it remotely).
 

Fuganater

Patron
Joined
Sep 28, 2015
Messages
477
If you don't care about performance or redundancy then just do RAIDZ1.
 
Joined
Jul 13, 2013
Messages
286
If you don't care about performance or redundancy then just do RAIDZ1.

If I didn't care about redundancy I wouldn't be bothering with ZFS in the first place. But I'm certainly running my backup server at RAIDZ1; we simply can't afford to go higher (partly because the whole chassis we're using is spare parts we already own; well, not spare parts, they were all part of one working system, but now that whole system is spare, so it's becoming the backup server).
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
@David Dyer-Bennet, thank you.

In case you are considering a RAID-Z2 with 8 disks of 6 TB each, the resulting dataset would be 30 TiB, and not the 32 TiB you might expect. (That is the experience I share with other forum members.)
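
The arithmetic behind that, again only a rough check (the missing couple of TiB is the same kind of parity padding and metadata overhead discussed above):

[CODE]
TiB = 1024**4
drive_tib = 6e12 / TiB            # ~5.46 TiB per nominal 6 TB drive

data_drives = 8 - 2               # RAID-Z2: two drives' worth of parity
expected = data_drives * drive_tib
observed = 30                     # TiB, roughly what gets reported

print(f"expected ~{expected:.1f} TiB, observed ~{observed} TiB "
      f"(~{1 - observed / expected:.0%} lost to overhead)")
[/CODE]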
 
Joined
Jul 13, 2013
Messages
286
"Solved" isn't exactly right, but I think I've learned what there is to learn starting from that initial question.
 