How much free space should I have?


ilostmypants

Cadet
Joined
Feb 13, 2014
Messages
2
I just created 2 raidz2 pools consisting of 6x3TB and 10x4TB drives. So I should have about 44TB ((10*4)-(2*4)+(6*3)-(2*3)) of total storage, right?
When the pools were created in the ZFS Volume Manager, the wizard recognized the different-sized drives and separated them into 2 groups, from which I then created 2 pools using Z2.

ZPOOL LIST says


NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
storage 52.5T 34.4M 52.5T 0% 1.00x ONLINE /mnt

ZPOOL STATUS -v says
NAME STATE READ WRITE CKSUM storage ONLINE 0 0 0 raidz2-0 ONLINE 0 0 0 gptid/b4f5e802-90f6-11e3-918a-10bf4871573e ONLINE 0 0 0 gptid/b52f93de-90f6-11e3-918a-10bf4871573e ONLINE 0 0 0 gptid/b5624529-90f6-11e3-918a-10bf4871573e ONLINE 0 0 0
gptid/b599ce60-90f6-11e3-918a-10bf4871573e ONLINE 0 0 0 gptid/b5db4235-90f6-11e3-918a-10bf4871573e ONLINE 0 0 0 gptid/b616bb00-90f6-11e3-918a-10bf4871573e ONLINE 0 0 0 gptid/b64df5cb-90f6-11e3-918a-10bf4871573e ONLINE 0 0 0 gptid/b68478dc-90f6-11e3-918a-10bf4871573e ONLINE 0 0 0 gptid/b6bd6885-90f6-11e3-918a-10bf4871573e ONLINE 0 0 0 gptid/b70b2717-90f6-11e3-918a-10bf4871573e ONLINE 0 0 0 raidz2-1 ONLINE 0 0 0 gptid/b7485641-90f6-11e3-918a-10bf4871573e ONLINE 0 0 0 gptid/b78549a5-90f6-11e3-918a-10bf4871573e ONLINE 0 0 0 gptid/b7c5951a-90f6-11e3-918a-10bf4871573e ONLINE 0 0 0 gptid/b7fb62f2-90f6-11e3-918a-10bf4871573e ONLINE 0 0 0 gptid/b83ef30b-90f6-11e3-918a-10bf4871573e ONLINE 0 0 0 gptid/b87a7b95-90f6-11e3-918a-10bf4871573e ONLINE 0 0 0


I then created a bunch of datasets on the same volume and shared them with CIFS. Windows Explorer's status bar at the bottom says 37.8TB free.

How much free space do I actually have? How much should I have?
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
First of all, use CODE tags whenever you post terminal output.
Let's clean this up a bit.
ZPOOL STATUS -v says
Code:
NAME                                                   STATE  READ WRITE CKSUM
    storage                                            ONLINE 0    0     0
        raidz2-0                                       ONLINE 0    0     0
            gptid/b4f5e802-90f6-11e3-918a-10bf4871573e ONLINE 0    0     0
            gptid/b52f93de-90f6-11e3-918a-10bf4871573e ONLINE 0    0     0
            gptid/b5624529-90f6-11e3-918a-10bf4871573e ONLINE 0    0     0
            gptid/b599ce60-90f6-11e3-918a-10bf4871573e ONLINE 0    0     0
            gptid/b5db4235-90f6-11e3-918a-10bf4871573e ONLINE 0    0     0
            gptid/b616bb00-90f6-11e3-918a-10bf4871573e ONLINE 0    0     0
            gptid/b64df5cb-90f6-11e3-918a-10bf4871573e ONLINE 0    0     0
            gptid/b68478dc-90f6-11e3-918a-10bf4871573e ONLINE 0    0     0
            gptid/b6bd6885-90f6-11e3-918a-10bf4871573e ONLINE 0    0     0
            gptid/b70b2717-90f6-11e3-918a-10bf4871573e ONLINE 0    0     0
        raidz2-1                                       ONLINE 0    0     0
            gptid/b7485641-90f6-11e3-918a-10bf4871573e ONLINE 0    0     0
            gptid/b78549a5-90f6-11e3-918a-10bf4871573e ONLINE 0    0     0
            gptid/b7c5951a-90f6-11e3-918a-10bf4871573e ONLINE 0    0     0
            gptid/b7fb62f2-90f6-11e3-918a-10bf4871573e ONLINE 0    0     0
            gptid/b83ef30b-90f6-11e3-918a-10bf4871573e ONLINE 0    0     0
            gptid/b87a7b95-90f6-11e3-918a-10bf4871573e ONLINE 0    0     0

You're going to lose some space to parity, ZFS overhead, and base conversion (1000 vs. 1024 at each step from bytes up to TB). Base conversion alone turns your 44 TB into roughly 40 TiB, so you lose around 4 TB right there. Also note that the "zpool list" output doesn't subtract parity: it's showing all 58 TB of raw capacity, which works out to roughly the 52.5T SIZE above after the same conversion. The discrepancy between ~40 TB and the 37.8 TB Explorer reports seems about in line with what I've seen on other systems. Whether that's down to how Windows interprets the available space, filesystem overhead, or something else, I don't know. What does 'df -h' report?
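
If you want to sanity-check the arithmetic, here's a rough sketch in awk (assuming the 6x3TB + 10x4TB layout from your first post; drive-level and ZFS overhead are ignored):
Code:
# Rough capacity check: decimal TB -> binary TiB.
# Assumes 6x3TB + 10x4TB in two RAIDZ2 vdevs; ignores swap
# partitions and ZFS metadata overhead.
awk 'BEGIN {
    f = 1000^4 / 1024^4            # TB -> TiB factor, ~0.909
    raw = 6*3 + 10*4               # 58 TB raw, what "zpool list" counts
    usable = raw - (2*3 + 2*4)     # 44 TB after subtracting parity
    printf "raw:    %.1f TiB  (zpool list SIZE)\n", raw * f
    printf "usable: %.1f TiB  (before ZFS overhead)\n", usable * f
}'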

Note that your "storage" pool consists of two RAIDZ2 vdevs. I suppose this isn't horrible, as you'd need to lose three disks from the same vdev to lose the whole pool, but it does make me uneasy. You've got a statistically higher chance of losing all your data than if they were separate pools, though a lower one than if all 16 disks were in the same RAIDZ2. It's not nearly as bad as the people who have ended up with a RAIDZ2 and a single disk in the same pool, but it's that sort of thing that makes me wary of your, admittedly, non-crazy arrangement.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
It would really take a ridiculous Act of God to kill this pool, I think. He's got two RAIDZ2s, the larger of which has 10 drives in it. He's going to be fine.

But I definitely suggest having a cold spare, or two, around at all times. You would *NOT* want to run the pool degraded for an extra day (or week) while you waited for a new drive, since your vdevs are striped together and you have quite a few disks.
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
Act of God? A 10-disk RAIDZ2 is more likely to fail than a 6-disk one. And with the two striped together, the pool's reliability is limited by whichever vdev is more likely to fail.

That said, it's still RAIDZ2, so you're probably safe as long as you've got a couple spares.
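
To put some numbers on that: the pool is lost if either vdev is lost, so the risks combine. A quick sketch in awk; both per-vdev probabilities are made-up, purely for illustration:
Code:
# Illustrative only: pA and pB are assumed probabilities that a vdev
# loses 3 disks within one rebuild window.
awk 'BEGIN {
    pA = 0.002                      # assumed: 10-disk RAIDZ2 vdev
    pB = 0.001                      # assumed: 6-disk RAIDZ2 vdev
    loss = 1 - (1 - pA) * (1 - pB)  # striped pool dies if either vdev dies
    printf "pool loss: %.4f (worse vdev alone: %.4f)\n", loss, pA
}'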
 

Starpulkka

Contributor
Joined
Apr 9, 2013
Messages
179
It should be something like 40 TB? Do you use compression? Does every drive have a 2 GB swap partition (you can see this in "gpart list")?
As an interesting aside, I found some weird results comparing 10-HDD and 6-HDD vdevs because of variable block sizes. So you might get interesting results on your machine: sometimes you put a 10 GB file on the server and the pool tells you it filled 14 GB, and sometimes you put the same 10 GB file on and the pool reports it filled only 9 GB. (This is currently working as "intended".) Or maybe the reported pool size is some estimate over the mix of variable block sizes. As I said, I got very interesting results with this, and I'm not an expert on it.
Did you give every dataset the same size?
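
Something like the commands below should show both (just a suggestion of where to look; adjust the dataset name if needed):
Code:
# Compression setting and any quotas, for the pool and every dataset under it
zfs get -r compression,quota storage
# Space used and available per dataset
zfs list -r -o name,used,available,referenced storage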
 

ilostmypants

Cadet
Joined
Feb 13, 2014
Messages
2
Thanks for the replies guys.

Note that your "storage" pool consists of two RAIDZ2 vdevs. I suppose this isn't horrible, as you'd need to lose three disks from the same vdev to lose the whole pool, but it does make me uneasy. You've got a statistically higher chance of losing all your data than if they were separate pools, though a lower one than if all 16 disks were in the same RAIDZ2. It's not nearly as bad as the people who have ended up with a RAIDZ2 and a single disk in the same pool, but it's that sort of thing that makes me wary of your, admittedly, non-crazy arrangement.

I think I read in a faq (or somewhere on the forum) that the most drives in a Z2 should be 10 and having 16 drives in 1 pool is a big no-no. The goal was to have a ton of storage available and not have to worry about running out of space in one pool and having to continue in another. Is this not common?

But I definitely suggest having a cold spare, or two, around at all times. You would *NOT* want to run the pool degraded for an extra day (or week) while you waited for a new drive, since your vdevs are striped together and you have quite a few disks.

Why? It's not like I would run degraded for any length of time, but isn't the point of having 2 parity drives that I can wait while a drive is sent away for repair? Do the chances of the pool failing increase when a drive is missing? I don't have them yet, but I plan on getting a spare drive shortly anyway.


It should be something like 40 TB? Do you use compression? Does every drive have a 2 GB swap partition (you can see this in "gpart list")?
As an interesting aside, I found some weird results comparing 10-HDD and 6-HDD vdevs because of variable block sizes. So you might get interesting results on your machine: sometimes you put a 10 GB file on the server and the pool tells you it filled 14 GB, and sometimes you put the same 10 GB file on and the pool reports it filled only 9 GB. (This is currently working as "intended".) Or maybe the reported pool size is some estimate over the mix of variable block sizes. As I said, I got very interesting results with this, and I'm not an expert on it.
Did you give every dataset the same size?

I don't know how to interpret the gpart list output. Where would it list the swap size?
I found something interesting in the gpart list though... it lists the sector size as 512 for all drives. Yet "zdb storage | grep ashift" reports 12 (which means 4k sectors, from what I've found). How do I verify/ensure I have 4k sectors?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I think I read in a faq (or somewhere on the forum) that the most drives in a Z2 should be 10 and having 16 drives in 1 pool is a big no-no. The goal was to have a ton of storage available and not have to worry about running out of space in one pool and having to continue in another. Is this not common?
Not entirely accurate. There is no "recommended" limit on the number of drives in a pool, but a single RAIDZ2 vdev shouldn't go above 10 drives. The number of vdevs is also unlimited, and each additional vdev gives you a higher-performing pool in almost all respects.
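
Just for illustration, your layout corresponds to something like the command below. The device names are hypothetical, and the FreeNAS Volume Manager already built this for you (using gptid labels and swap partitions), so don't run it by hand:
Code:
# One pool, two RAIDZ2 vdevs, striped together (hypothetical device names)
zpool create storage \
    raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
    raidz2 da10 da11 da12 da13 da14 da15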

Why? It's not like I would run degraded for any length of time, but isn't the point of having 2 parity drives that I can wait while a drive is sent away for repair? Do the chances of the pool failing increase when a drive is missing? I don't have them yet, but I plan on getting a spare drive shortly anyway.
No, the point of 2 parity disks is to have redundancy in reserve, so that when one disk has a problem you can still survive a second disk failing. When one disk in a RAID or pool fails, the chances of another disk failing soon afterwards are much higher than you'd expect. There's plenty on the topic if you want to read up on it.

In short, you should still have a cold spare regardless of your configuration.
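
For reference, the underlying replacement operation looks something like the following. The device names are placeholders, and on FreeNAS you'd normally do this from the GUI's Volume Status screen, which also takes care of repartitioning the new disk:
Code:
# See which vdev is degraded and which member failed
zpool status -v storage
# Swap the cold spare in for the failed member (both names are placeholders)
zpool replace storage gptid/<failed-disk-gptid> da16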


I don't know how to interpret the gpart list output. Where would it list the swap size?
I found something interesting in the gpart list though... it lists the sector size as 512 for all drives. Yet "zdb storage | grep ashift" reports 12 (which means 4k sectors, from what I've found). How do I verify/ensure I have 4k sectors?

gpart list tells you what the drives say, and many drives lie about their sector size. The ashift tells you the smallest unit of data ZFS will write to the pool. Ideally this matches your sector size, but it isn't required. You could use an ashift of 16 and force ZFS to use a minimum block size of 64 kB. You'd really hate yourself for doing it, but you can do it.

Assuming your drives aren't lying about their sector size, your physical sector size is 512 bytes but your "virtual sector size with regards to ZFS" is 4k. I put that phrase in quotes because that's not industry speak or even 100% accurate, but I think it gets the point across.
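
If you want to check these things yourself, commands along these lines should work on FreeBSD/FreeNAS (da0 is just an example device; substitute your own):
Code:
# Partition layout of one disk; the 2 GB freebsd-swap partition shows up here
gpart show da0

# What the drive itself reports: sectorsize (logical) and stripesize (physical)
diskinfo -v /dev/da0

# Confirm the pool's ashift (12 means 4 KiB minimum writes); on FreeNAS you
# may need to point zdb at the cache file as shown here
zdb -U /data/zfs/zpool.cache | grep ashift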
 