ZFS Volume Smaller Than Total of Hard Drives in Volume

Status
Not open for further replies.

rbrinson

Dabbler
Joined
Aug 1, 2012
Messages
17
Hi, I've got what I'm sure is a noob question. I set up my first FreeNAS server this past weekend using FreeNAS 8.2. I used five Western Digital 2 TB WD20EARX Advanced Format Green drives. I created a single volume using all five drives, formatted with ZFS in a RAIDZ configuration, and I checked the box for "Force 4096 bytes sector size" since they are Advanced Format drives.

In RAIDZ, I know that one of the drives is used for parity, so I expected the resulting volume to be 8 TB of storage. However, the Volumes view shows that the single volume has a size of 7.1 TB. I tried detaching and destroying the volume and then creating it again without checking the "Force 4096 bytes sector size" checkbox, but I ended up with the same result.

Is this expected behavior? The only thing I could rationalize was that perhaps FreeNAS and/or ZFS is using the "missing" disk space for swap, the ZIL, or cache. Could someone let me know whether this is expected behavior, or whether I need to look into a potential problem? Thank you for any insight you may have.
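For what it's worth, here is the back-of-the-envelope arithmetic behind my 8 TB expectation, as a quick Python sketch (it ignores any ZFS or swap overhead, which may be exactly what I'm missing):

```python
# Rough capacity math for 5 x 2 TB drives in RAIDZ (Python 3).
# Assumes RAIDZ1: one drive's worth of capacity goes to parity.
TB = 1000**4    # a "marketing" terabyte, in bytes
TiB = 1024**4   # a binary tebibyte, which is what FreeNAS reports

drives = 5
data_drives = drives - 1              # RAIDZ1 keeps one drive for parity
raw_data = data_drives * 2 * TB       # bytes available for data, on paper

print(raw_data / TB)    # 8.0   -> the "8 TB" I expected
print(raw_data / TiB)   # ~7.28 -> the same bytes in TiB, before ZFS overhead
```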
 

Peter Bowler

Dabbler
Joined
Dec 18, 2011
Messages
21
I think that's about all you get from four "2TB" drives.

You're seeing the difference between the marketing department's "2 TB" and the actual storage available in a 2 TB-class drive.
 

rbrinson

Dabbler
Joined
Aug 1, 2012
Messages
17
Hi, Peter Bowler. Thank you for your reply. The missing drive space amounts to about 225 GB per drive. You may be right, but I would be very frustrated with Western Digital for advertising a hard drive capacity of 2000 GB and only having 1775 GB available.

I went back and watched a video tutorial on setting up FreeNAS 8.2 BETA. In that tutorial, they also used five hard drives with ZFS in a RAIDZ configuration, and they also ended up with a volume smaller than the total of the hard drives used. Digging around in the FreeNAS documentation on hardware requirements, there is a link to Disk Space Requirements for ZFS Pools, and it seems to indicate that some of the drive space is used by ZFS itself and thus is not available for data storage. With four 2 TB data drives, I'm not sure how much should be used by FreeNAS and/or ZFS, but perhaps this would account for the missing gigabytes. I would feel more comfortable if someone could confirm that having about 88.75% of the drive space available for data storage is typical for a ZFS volume.
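In case anyone wants to check me, the 88.75% figure is just the reported size divided by the label size (a rough Python sketch; note the two numbers are in different units, TiB vs. decimal TB, which I now suspect is most of the gap):

```python
# Where my 88.75% comes from (Python 3).
reported = 7.1    # TiB, what the FreeNAS Volumes view shows
expected = 8.0    # "TB" on the label: 4 data drives x 2 TB decimal
print(reported / expected)   # 0.88750 -> about 88.75%
```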
 

Stephens

Patron
Joined
Jun 19, 2012
Messages
496
Consider it confirmed. Not only is a "2TB" drive not 2 binary terabytes, but the file system (in this case ZFS) uses some space for itself. It's normal. Nothing to see here. Move along, move along. Also, there's no reason to be mad at WD in particular, because every other hard drive manufacturer does the same thing.
 

rbrinson

Dabbler
Joined
Aug 1, 2012
Messages
17
Thank you, Stephens and ben. I feel better about my hard drive situation. My OCD was kicking in! :p ben, that is an amazing blog article. I've never had a need to pay that much attention to hard drive sizes. So, I have definitely learned something today!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
While the OP was discussing displayed values, it is also worth noting that ZFS write performance tanks if you get near a full filesystem. It is probably not unreasonable, for design purposes, to reduce the expected usable space by 20-25% from what the label says. A 2TB "HDD marketing" drive works out to around 1.8TB "real world" space, but if you want to retain good ZFS performance, you really can't fill to more than maybe 1.6TB (or even 1.5TB). This is mainly an argument to always buy the next drive larger than what you think you actually need.
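As a rough sizing helper, this works out as below (a sketch only; the 80% ceiling is the rule of thumb above, not a hard ZFS limit, and the function name is just illustrative):

```python
# Derate a drive's label capacity to binary units, subtract parity,
# then apply a fill ceiling to keep ZFS write performance healthy.
TB, TiB = 1000**4, 1024**4

def plannable_tib(drives, label_tb, parity_drives=1, max_fill=0.80):
    data_drives = drives - parity_drives        # RAIDZ1 -> one parity drive
    raw_tib = data_drives * label_tb * TB / TiB # decimal label -> TiB
    return raw_tib * max_fill                   # stay below the fill ceiling

print(plannable_tib(5, 2))   # ~5.8 TiB you can actually plan on filling
```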
 
Joined
Aug 27, 2012
Messages
2
I've been using FreeNAS for ages. I was running version 7 with no problems at all. I had four 1TB drives in RAIDZ and had 3.6TB of space available. I had a drive fail the other day, and I decided to back up and restart with version 8. The install worked fine, but when I add the same four 1TB drives as RAIDZ, I only get 2.6TB of space. Anyone got any ideas why I get less space in FreeNAS 8?
Thanks,
 

toddos

Contributor
Joined
Aug 18, 2012
Messages
178
I've been using FreeNAS for ages. I was running version 7 with no problems at all. I had four 1TB drives in RAIDZ and had 3.6TB of space available. I had a drive fail the other day, and I decided to back up and restart with version 8. The install worked fine, but when I add the same four 1TB drives as RAIDZ, I only get 2.6TB of space. Anyone got any ideas why I get less space in FreeNAS 8?
Thanks,

Getting 3.6TB out of 4x1TB drives means that you were using RAID 0 (striping, no redundancy). Your new setup sounds like you're using RAIDZ1, where one of the drives is used for parity, so you really only have 3x1TB of space to work with. 1TB decimal is about 931GiB binary: (931*4)/1024 ≈ 3.6TiB, and (931*3)/1024 ≈ 2.7TiB. That's where your space went. You have less space for storage, but you now have redundancy and can survive the failure of a single drive. This is infinitely better than your previous setup, where you couldn't survive any drive failure at all.
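Here's a quick Python check of those numbers (rounded; a real pool will show slightly less once ZFS reserves its own space):

```python
# Decimal-vs-binary check for 4 x 1 TB drives (Python 3).
TB, GiB, TiB = 1000**4, 1024**3, 1024**4

print(TB / GiB)        # ~931.3 -> GiB actually in a "1 TB" drive
print(4 * TB / TiB)    # ~3.64  -> all four drives striped (RAID 0)
print(3 * TB / TiB)    # ~2.73  -> RAIDZ1: three data drives' worth
```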
 
Joined
Aug 27, 2012
Messages
2
Getting 3.6TB out of 4x1TB drives means that you were using RAID 0 (striping, no redundancy). Your new setup sounds like you're using RAIDZ1, where one of the drives is used for parity, so you really only have 3x1TB of space to work with. 1TB decimal is about 931GiB binary: (931*4)/1024 ≈ 3.6TiB, and (931*3)/1024 ≈ 2.7TiB. That's where your space went. You have less space for storage, but you now have redundancy and can survive the failure of a single drive. This is infinitely better than your previous setup, where you couldn't survive any drive failure at all.

That sounds right to me. I must have been remembering things wrong. I was using RAIDZ previously; I have had two drive failures (not at the same time) and resynced with no problems.

Thanks for the reply.
 