Missing ~5 TiB

Status
Not open for further replies.

Doogie

Dabbler
Joined
Mar 9, 2014
Messages
24
I built a new RAIDZ2 on FreeNAS 9.10.2 using ten 8 TB (7.27 TiB) Seagate drives. I created a single volume (named "Volume") and a single dataset using the entire volume. I spent the last 12 days transferring over my data--ugh, finally completed tonight!

Using BiduleOhm's calculator, and per my own calculations, the expected volume size should be 58.21 TiB. However, under the Storage | Volumes tab, I see 53.5 TiB (30.8 + 22.7):

Used: 30.8 TiB (57%)
Available: 22.7 TiB
Compression: lz4
Compression Ratio: 1.00x

zpool status shows all 10 disks online.
zpool list shows:
Code:
Volume 72.5T 40.4T 32.1T - 36% 55% 1.00x ONLINE /mnt

So, two questions:

1. Where is the missing 4.71 TiB (58.21 [expected] - 53.5 [actual])? That's close to a 10% loss.
2. Why the discrepancy between the reported info under Storage | Volumes (30.8/22.7) and "zpool list" (40.4/32.1)? TB vs. TiB doesn't explain it (see the quick conversion sketched below).
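For what it's worth, here is the quick conversion I tried for question 2, working on the assumption that zpool list counts raw space (parity included) while the Volumes tab shows space after parity -- that assumption is mine, so correct me if it's wrong:

Code:
# my assumption: zpool list = raw space including parity,
# Storage | Volumes = space after parity (Python, just arithmetic)
raw_used, raw_free = 40.4, 32.1        # TiB, from zpool list
factor = (10 - 2) / 10                 # 8 data drives out of 10 in RAIDZ2
print(raw_used * factor, raw_free * factor)   # ~32.3 and ~25.7 TiB
# the GUI shows 30.8 / 22.7, so parity alone doesn't fully close the gap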

Thanks so much!

Jeff Arnholt
 

Doogie

Dabbler
Joined
Mar 9, 2014
Messages
24
No, I've already accounted for that. The expected volume size for a RAIDZ2 using ten 7.27 TiB drives would be 58.21 TiB (or 64 TB), and I'm seeing 53.5 TiB. The question is why FreeNAS isn't reporting 58.21 TiB.
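For reference, here is the arithmetic I'm using (a quick sketch; the 8 TB figure is the vendor's rating):

Code:
# expected RAIDZ2 data space for ten 8 TB drives (quick sketch)
drive_tib = 8e12 / 2**40           # 8 TB ~= 7.276 TiB
data_drives = 10 - 2               # RAIDZ2 keeps two drives' worth of parity
print(drive_tib, data_drives * drive_tib)   # ~7.276 TiB, ~58.21 TiB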
 

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
[attached screenshot: upload_2017-1-14_9-26-43.png -- calculator output]

Doogie said: "The question is why FreeNAS isn't reporting 58.21 TiB."
That's because it won't ever report that according to this calculator.
 

Doogie

Dabbler
Joined
Mar 9, 2014
Messages
24
Monkey, I appreciate you taking the time to look at my question. I think this is a topic of general interest--knowing precisely how ZFS calculates space. Other users are experiencing the same concerns as I am--about 10% loss of storage space on arrays using 8 TB disks:

https://forums.freenas.org/index.php?threads/seeing-less-free-space-than-expected.47490/

Let's run the numbers one last time. Using BiduleOhm's calculator, for a ten-drive RAIDZ2 array using 7.276 TiB disks:

Usable data space = total data space - metadata overhead - minimum recommended free space

45.82 TiB = (58.21 - 0.93 - 11.46) TiB

However, the space reported by FreeNAS 9.10.2 is considerably different. FreeNAS reports 30.8 TiB used and 22.7 TiB available, which suggests a data space of 53.5 TiB--not 58.21, or even 57.28 (accounting for metadata). Assuming FreeNAS reports the space actually available when 100% of the pool is used, without reserving the recommended free space, that's (58.21 - 53.5) / 53.5, or about 9% off.
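Here is a rough reconstruction of how I believe the calculator arrives at those three numbers; the ~1.6% metadata figure and the 20%-of-remaining free-space reserve are my reading of the calculator, not anything FreeNAS itself reports:

Code:
# rough reconstruction of the calculator's steps (my assumptions:
# ~1.6% metadata overhead, then 20% recommended free space)
total = 8 * 7.276                              # 58.21 TiB of data space
metadata = total * 0.016                       # ~0.93 TiB
recommended_free = (total - metadata) * 0.20   # ~11.46 TiB
usable = total - metadata - recommended_free   # ~45.82 TiB
print(total, metadata, recommended_free, usable)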

I have read the forums. Cyberjock wrote last year that "I mostly ignore the overhead questions on the forum because it is impossible for someone to throw out some number and be close to the target." That's rather disheartening. I would expect that an answer exists for this.

The most intriguing possible answer to my question is post #38 from this thread:

https://forums.freenas.org/index.php?threads/zfs-raid-size-and-reliability-calculator.28191/page-2

SirMaster was experimenting with ashift values and found that FreeNAS was reporting available storage about 9% lower than expected, which replicates my results. His conclusion was that the sizes of his files were also being reported as about 9% lower than expected, so the space was still efficiently and completely utilized--just underreported.
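If SirMaster's ashift explanation is right, here is a back-of-the-envelope sketch of where part of the gap could come from. This is my understanding of how RAIDZ2 lays out the default 128 KiB records at ashift=12, so treat the numbers as an estimate:

Code:
# estimate of RAIDZ2 allocation overhead at ashift=12 (4 KiB sectors)
# for 128 KiB records on a 10-disk vdev -- my understanding, not authoritative
import math

data_sectors = (128 * 1024) // (4 * 1024)           # 32 sectors per record
disks, parity = 10, 2
rows = math.ceil(data_sectors / (disks - parity))   # 4 rows of 8 data sectors
alloc = data_sectors + rows * parity                # plus 8 parity sectors = 40
alloc = math.ceil(alloc / (parity + 1)) * (parity + 1)   # padded up to 42

actual = data_sectors / alloc                       # ~0.762 data-to-raw ratio
ideal = (disks - parity) / disks                    # 0.8 for a perfect 8/10 split
print(actual, ideal, 1 - actual / ideal)            # ~4.8% lost beyond the ideal

That only accounts for roughly half of the ~9% I'm seeing, so I assume something else (space ZFS reserves for itself, perhaps) makes up the rest--but that part is speculation on my end.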

I wouldn't be concerned if we were dealing with a <1% rounding error. However, 9% is a significant difference--enough to make me consider RAIDZ1 vs RAIDZ2 in a 10 drive array to recover that amount, or enough lost space to require me to plan for earlier upgrades. A 9% error wouldn't have landed Apollo 11 on the moon. Still, having said that, I'm grateful to be able to use a free, robust system like ZFS/FreeNAS. I just wish I knew exactly what was going on.

Jeff Arnholt
 

Morpheus187

Explorer
Joined
Mar 11, 2016
Messages
61
I made a similar observation with my 8 x 6 TB array.

I've made the following calculations:
Code:
diskinfo -v ada0p2  # to get size of disk partition
6'001'175'126'016 Bytes * 8 = 48'009'401'008'128 Bytes Array size ( With Parity )

If we deduct 2 disks' worth of parity, that should give 48'009'401'008'128 / 8 * 6 = 36'007'050'756'096 Bytes.
If I check with
Code:
 zfs get -Hp used  -> 17'087'825'394'784 Bytes

and
Code:
 zfs get -Hp avail -> 15'852'828'987'488 Bytes

15'852'828'987'488 Bytes Available + 17'087'825'394'784 Bytes Used = 32'940'654'382'272 Bytes Total
Missing: 3'053'510'955'840 Bytes or about ~9%
Code:
Summary
Total size (8 disks):  47'992'220'450'816 Bytes  (5'999'027'556'352 * 8, diskinfo -v ada0p2)
Available:             15'852'828'987'488 Bytes  (zfs get -Hp avail)
Used:                  17'087'825'394'784 Bytes  (zfs get -Hp used)
Total (used + avail):  32'940'654'382'272 Bytes
Expected (RAIDZ2):     35'994'165'338'112 Bytes
Missing:                3'053'510'955'840 Bytes


Maybe I'm doing something wrong, so I posted all the commands to make it clear how I got these numbers.
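Here is the same arithmetic as a small script, in case that makes it easier to spot a mistake (numbers copied from the summary above):

Code:
# reproduce the summary arithmetic (Python; values copied from above)
part_bytes = 5_999_027_556_352                 # per-partition size, diskinfo -v ada0p2
disks, parity = 8, 2
expected = part_bytes * (disks - parity)       # 35'994'165'338'112 Bytes (RAIDZ2)
used  = 17_087_825_394_784                     # zfs get -Hp used
avail = 15_852_828_987_488                     # zfs get -Hp avail
total = used + avail                           # 32'940'654'382'272 Bytes
print(expected - total, (expected - total) / expected)   # ~3.05e12 Bytes missing, ~8.5%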
I did the same for the boot device (USB stick):
Code:
boot size: 15'375'437'824
used:         659'357'696
avail:     14'163'312'640
missing:      552'767'488

I think ZFS reserves some space for itself in order to function properly.
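If I understand it correctly, recent ZFS keeps a small "slop" reserve for itself (1/32 of the pool by default, as far as I know -- please correct me if that's wrong). A quick check against the boot-device numbers:

Code:
# rough check: does a 1/32 "slop" reserve explain part of the missing space?
# (the 1/32 default is my assumption about spa_slop_shift, not verified here)
boot_size = 15_375_437_824
slop = boot_size // 32                                 # ~480 MB kept back by ZFS
missing = boot_size - 659_357_696 - 14_163_312_640     # 552'767'488 from above
print(slop, missing)                                   # same ballpark; rest is probably metadata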
 