Dataset vs Volume Space


MediaManRIT

Dabbler
Joined
Jul 14, 2013
Messages
17
I searched the forums and haven't come up with an answer that makes sense. I'm hoping someone can shed some light on this.

I just created a new RAIDZ1-0 array of four 6 TB disks. I expected the total available space to be about 6 TB x 3 disks = 18 TB (one drive for parity, as in RAID5). Sure enough, the volume displays as 16.4 TB, which is about right once you account for the 1024-vs-1000 unit conversion.

However, the dataset that got created shows only 11.4 TB available. Somehow it lost another 5 TB! It's almost like it's a RAID 10 now.
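
For reference, drives are sold in decimal terabytes (10^12 bytes) while zfs and zpool report binary TiB (2^40 bytes). A quick bc sketch of the conversion:

Code:
$ echo "scale=2; 3*6*10^12 / 2^40" | bc    # 18 TB expected usable, in TiB
16.37
$ echo "scale=2; 4*6*10^12 / 2^40" | bc    # 24 TB raw, in TiB
21.82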

Anyone have any ideas?

Thanks!
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
If you can, please supply the output of the following commands in <CODE> tags:

Code:
zpool status
zpool list -v
zfs list -t all

By the way, the -0 at the end of RAID-Z1 is the index of the first vDev. Under normal conditions you leave it off for discussion purposes unless you have more than one vDev, which zpool status will tell us.
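
To illustrate, here's a hypothetical zpool status config section (pool and disk names made up) for a pool with two RAID-Z1 vDevs, which is where the -0 and -1 suffixes come from:

Code:
	NAME          STATE     READ WRITE CKSUM
	tank          ONLINE       0     0     0
	  raidz1-0    ONLINE       0     0     0
	    ada0p2    ONLINE       0     0     0
	    ada1p2    ONLINE       0     0     0
	  raidz1-1    ONLINE       0     0     0
	    ada2p2    ONLINE       0     0     0
	    ada3p2    ONLINE       0     0     0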
 

MediaManRIT

Dabbler
Joined
Jul 14, 2013
Messages
17
Here's the output. I stripped out the irrelevant pools (like the boot pool):


Code:
root@guido:/mnt/array2/Brian/brian # zpool status
  pool: array3
 state: ONLINE
  scan: none requested
config:

	NAME                                            STATE     READ WRITE CKSUM
	array3                                          ONLINE       0     0     0
	  raidz1-0                                      ONLINE       0     0     0
	    gptid/4a652de0-c968-11e8-a0fd-bc5ff4af3d88  ONLINE       0     0     0
	    gptid/4b28748b-c968-11e8-a0fd-bc5ff4af3d88  ONLINE       0     0     0
	    gptid/4bee82d9-c968-11e8-a0fd-bc5ff4af3d88  ONLINE       0     0     0
	    gptid/4cc1eca3-c968-11e8-a0fd-bc5ff4af3d88  ONLINE       0     0     0

errors: No known data errors

root@guido:/mnt/array2/Brian/brian # zpool list -v
NAME                                             SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
array3                                          21.8T  5.38T  16.4T         -     0%    24%  1.00x  ONLINE  /mnt
  raidz1                                        21.8T  5.38T  16.4T         -     0%    24%
    gptid/4a652de0-c968-11e8-a0fd-bc5ff4af3d88      -      -      -         -      -      -
    gptid/4b28748b-c968-11e8-a0fd-bc5ff4af3d88      -      -      -         -      -      -
    gptid/4bee82d9-c968-11e8-a0fd-bc5ff4af3d88      -      -      -         -      -      -
    gptid/4cc1eca3-c968-11e8-a0fd-bc5ff4af3d88      -      -      -         -      -      -

root@guido:/mnt/array2/Brian/brian # zfs list -t all
NAME      USED  AVAIL  REFER  MOUNTPOINT
array3   3.91T  11.4T  3.91T  /mnt/array3


Thanks for the help!
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Code:
root@guido:/mnt/array2/Brian/brian # zfs list -t all
NAME      USED  AVAIL  REFER  MOUNTPOINT
array3   3.91T  11.4T  3.91T  /mnt/array3

3.91 TiB used + 11.4 TiB available = 15.3 TiB. So what is the problem?
 

MediaManRIT

Dabbler
Joined
Jul 14, 2013
Messages
17
I've been studying this... I think the math lines up, but I'd appreciate it if you or someone could confirm I'm not completely crazy :)

21.8T = total space across all 4 drives
3.91T used / 5.38T allocated = about 72% efficiency
21.8T total * 72% = ~15.8T theoretical usable space
3.91T used + 11.4T avail = 15.31T on the dataset

Seems to line up (quick check below)... all of the 1000-vs-1024 conversions going on probably just left me with expectations that were too high.
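
Running that arithmetic through bc (all values in TiB, straight from the zpool/zfs output above):

Code:
$ echo "scale=3; 3.91/5.38" | bc      # data-to-allocated ratio -> .726, about 72% efficiency
$ echo "scale=3; 21.8*0.727" | bc     # theoretical usable space -> 15.848
$ echo "scale=2; 3.91+11.4" | bc      # USED + AVAIL on the dataset -> 15.31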
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
There's also filesystem overhead to consider (around 3%), as well as RAIDZ1's variable stripe width. Some of the better calculators will allow for all that. But yeah, it's right about as expected.

The thing I think you were missing is that "available" means free space, not total space.
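
A rough back-of-the-envelope along those lines (values in TiB; the 3% figure is only an approximation):

Code:
$ echo "scale=2; 21.8 * 3/4" | bc      # RAID-Z1 gives one of four disks to parity -> 16.35
$ echo "scale=2; 16.35 * 0.97" | bc    # minus roughly 3% filesystem overhead -> 15.85

That lands close to the 15.31 TiB the dataset reports; the remaining gap is presumably the variable-stripe-width padding and other reservations.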
 

MediaManRIT

Dabbler
Joined
Jul 14, 2013
Messages
17
danb35 said:
There's also filesystem overhead to consider (around 3%), as well as RAIDZ1's variable stripe width. Some of the better calculators will allow for all that. But yeah, it's right about as expected.

The thing I think you were missing is that "available" means free space, not total space.

Thanks!
 