12 x 8 TB drives in a RAIDZ2 only gives me 64.2 TiB?

Status
Not open for further replies.

Rainwulf

Explorer
Joined
Jul 12, 2015
Messages
67
Doesn't seem right. Using the 0.906 conversion factor:

12 x 8 TB = 96 terabytes, which would be about 87 tebibytes (TiB).

OK. So I get an 87 TiB pool of drives. Subtract 2 drives' worth from that and it would equal about 72.5 TiB.
But the storage pool I end up with is 64.2 TiB instead.
The difference is 8.3 TiB. What am I missing?

http://i.imgur.com/2VUOdMM.jpg

From a raw pool of 87 TiB, I end up with 64.2 TiB, which is a difference of 22.8 TiB. That's about 3 drives' worth of space missing...

What am I missing?
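As a sanity check on the conversion, here is a minimal Python sketch of the naive math (assuming "8 TB" means 8 × 10^12 bytes, and ignoring swap partitions, slop space and any RAIDZ allocation overhead):

Code:
# Minimal sketch of the raw capacity math for 12 x 8 TB drives in RAIDZ2.
# Assumes "8 TB" = 8 * 10^12 bytes; ignores swap, slop space and
# RAIDZ allocation overhead (which is what the rest of the thread covers).
TB = 10**12
TiB = 2**40

raw_bytes = 12 * 8 * TB
raw_tib = raw_bytes / TiB             # ~87.3 TiB of raw space
naive_usable_tib = raw_tib * 10 / 12  # ~72.8 TiB if only the 2 parity drives are subtracted

print(f"raw: {raw_tib:.1f} TiB, naive usable: {naive_usable_tib:.1f} TiB")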
 

Rainwulf

Explorer
Joined
Jul 12, 2015
Messages
67
Just created a RAIDZ3, and it went from 64.2 TiB to 61.2 TiB. A 3 TiB difference??

Edit: checked it with RAIDZ1, and it's 74.9 TiB.

OK, so with 12 x 8 TB drives:

RAIDZ1: 74.9 TiB
RAIDZ2: 64.2 TiB
RAIDZ3: 61.2 TiB

Something bizarre here.
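For comparison, here is a small sketch of what the naive "raw minus parity drives" math would predict for each RAIDZ level (same assumptions as the snippet above; the reported numbers come in lower because of the allocation overhead discussed later in the thread):

Code:
# Naive expectation: raw TiB scaled by data_disks / total_disks.
# Ignores RAIDZ allocation/padding overhead, swap and slop space,
# which is why the reported numbers (74.9 / 64.2 / 61.2 TiB) are lower.
TiB = 2**40
raw_tib = 12 * 8 * 10**12 / TiB  # ~87.3 TiB

for parity in (1, 2, 3):
    naive = raw_tib * (12 - parity) / 12
    print(f"RAIDZ{parity}: naive {naive:.1f} TiB")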
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Yes. It's called "your zpool is too wide".

You should NOT be putting 12 drives in ZFS as a single vdev. ;)
 

Rainwulf

Explorer
Joined
Jul 12, 2015
Messages
67
Aww man. This is getting annoying. I just want a nice big single pool, no hassles, 2 drives as parity.. :(
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
You can do 2x 6 disk RAIDZ2 vdevs. It'll cost you two more disks' worth of space, but it's the best option at the moment. If you need more storage, do something like 7 or 8 disk RAIDZ2 vdevs.
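To put rough numbers on "two more disks' worth of space", a quick sketch comparing data-disk counts for a few of the layouts mentioned above (this only counts parity drives; per-layout allocation overhead is ignored):

Code:
# Rough comparison of data-disk counts for the layouts mentioned above.
# Only subtracts parity; ignores per-layout allocation overhead.
DRIVE_TB = 8

layouts = {
    "1 x 12-disk RAIDZ2": [12],
    "2 x 6-disk RAIDZ2":  [6, 6],
    "2 x 7-disk RAIDZ2":  [7, 7],  # would need 2 more drives than the OP has
}

for name, vdevs in layouts.items():
    data_disks = sum(width - 2 for width in vdevs)
    print(f"{name}: {data_disks} data disks, ~{data_disks * DRIVE_TB} TB before overhead")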
 
Joined
Oct 2, 2014
Messages
925
I agree with @Ericloewe and @cyberjock, or pick up 2 more drives to make up some of the loss, as suggested above.
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
Rainwulf said:
Aww man. This is getting annoying. I just want a nice big single pool, no hassles, 2 drives as parity.. :(

This calculator might be of some use.

cyberjock said:
Yes. It's called "your zpool is too wide". You should NOT be putting 12 drives in ZFS as a single vdev. ;)

See the quote below from the ZFS Primer documentation:
"Using more than 12 disks per vdev is not recommended. The recommended number of disks per vdev is between 3 and 9. If you have more disks, use multiple vdevs."
 
Last edited:

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Too wide didn't cost you excess space. You are right on the money according to the calc. The wide vdev will affect performance and resilvering. The RAIDZ2 math didn't change; all the conversions just make that parity feel HUGE. Nothing kooky here; ZFS didn't forget how to do math.

I like 6x2 myself... but testing the single wide vdev on those big drives would be a neat data point.
 

Rainwulf

Explorer
Joined
Jul 12, 2015
Messages
67
Changed it to 6x2. All seems well, and copying data has started. I wish I could afford more disks :(
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Once upon a time, 12 x 8 TB drives would have been mind-blowing. You are a hair's breadth away from 100 TB raw. Nothing to sneeze at. Enjoy.
 

Rainwulf

Explorer
Joined
Jul 12, 2015
Messages
67
Heh, yeah, I didn't think of it like that. Now comes the long, slow process of migrating all the data from the two old servers to the new one over gigabit.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
@BigDave this is a very old version (10 versions behind, actually). I recommend linking the thread instead of the app, because the URL changes with every version (I can't fix that because I don't host the app, but I plan to when I have the time) ;)
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
Bidule0hm said:
@BigDave this is a very old version (10 versions behind, actually). I recommend linking the thread instead of the app, because the URL changes with every version (I can't fix that because I don't host the app, but I plan to when I have the time) ;)

I'll edit the link in my post to reflect the updated version. Thanks for the correction @Bidule0hm :cool:
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
No problem, thanks ;)
 

SirMaster

Patron
Joined
Mar 19, 2014
Messages
241
If you just use the large_blocks feature, which FreeNAS supports, with a 1024K record size, then you will effectively gain back all the overhead space associated with your particular 12-disk RAIDZ2 vdev configuration.

Your volume size will still show up the way it does now, since ZFS uses the assumption of a 128K record size to calculate free space and capacity. However, when using a 1024K record size, the data you write to the dataset will actually appear to use less space than the size of the file, even if it's incompressible, and ultimately you will fit as much data as you thought should fit on the pool, with only 1/64th metadata overhead, which is small.

64.2 TiB is the correct free space for a 12 x 8 TB RAIDZ2 using ashift=12, and 64.2 TiB is how much data you can store on it if you use the default 128K recordsize.

However, the exact amount of data you could store on this pool if you store your data in datasets with the recordsize set to 1024K is actually 69.8 TiB. Essentially, every file that you write to a large_blocks dataset will use about 9% less space, and appear about 9% smaller than it really is, in the case of a 12-wide RAIDZ2.

(Different vdev configurations have different padding overheads, but the only RAIDZ2 disk configuration with no padding overhead is a 6-disk RAIDZ2; the next no-overhead disk count wouldn't be until an 18-disk RAIDZ2, which isn't very practical.)

It seems weird, but that's how ZFS does its math currently with the large_blocks feature.
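A minimal sketch of where those figures come from, based on the commonly described RAIDZ allocation rules (with ashift=12 each record is split into 4 KiB sectors, every stripe row gets its parity sectors, and the allocation is padded to a multiple of parity + 1); this is my own illustration of the overhead, not SirMaster's exact calculation:

Code:
import math

def raidz_alloc_sectors(record_kib, disks, parity, sector_kib=4):
    """Sectors a single record consumes in a RAIDZ vdev:
    data sectors + per-row parity, padded to a multiple of (parity + 1)."""
    data = math.ceil(record_kib / sector_kib)
    rows = math.ceil(data / (disks - parity))
    total = data + rows * parity
    pad_to = parity + 1
    return math.ceil(total / pad_to) * pad_to

def overhead_vs_ideal(record_kib, disks, parity):
    """Extra space used compared with the ideal data/(data+parity) ratio,
    which is what ZFS assumes when it reports pool capacity."""
    data = math.ceil(record_kib / 4)
    ideal = data * disks / (disks - parity)
    return raidz_alloc_sectors(record_kib, disks, parity) / ideal - 1

# 12-wide RAIDZ2, ashift=12:
print(f"128K records:  {overhead_vs_ideal(128, 12, 2):.1%} extra")      # ~9.4%, matches the ~9% above
print(f"1024K records: {overhead_vs_ideal(1024, 12, 2):.1%} extra")     # ~0.6%, close to ideal
print(f"6-wide RAIDZ2, 128K: {overhead_vs_ideal(128, 6, 2):.1%} extra") # 0%, no padding overhead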
 
Last edited: