Looking for advice on a 100TB Zvol

Status
Not open for further replies.

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
Why are you splitting it into multiple datasets? Based on what you've posted here, I'm not seeing a benefit to multiple datasets. I'm wondering if this is a holdover from the iSCSI thinking.
 

Antairus

Dabbler
Joined
Dec 7, 2017
Messages
24
Why are you splitting it into multiple datasets? Based on what you've posted here, I'm not seeing a benefit to multiple datasets. I'm wondering if this is a holdover from the iSCSI thinking.

OMG Yes. I have to quit that, don't I?
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
OMG Yes. I have to quit that, don't I?
You'll still want to make a sub dataset. The hierarchy that you are seeing is:

Archive (the pool) -> Archive (the root dataset) -> ZVx (sub dataset).

Directly sharing the root dataset is not a supported configuration, so you'd want to create a single sub dataset (like ZV1), and then you'd share that.
 

Antairus

Dabbler
Joined
Dec 7, 2017
Messages
24
[Attached screenshot: ZV1.JPG]


Ok, how's this? I will split my shares on the ZV-Archive.
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
Looks good.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Post the results of zpool status
 

Antairus

Dabbler
Joined
Dec 7, 2017
Messages
24
[Attached screenshot: zps.jpg (zpool status output)]


So from what I can tell I am losing 12 gig of total capacity in a 36 gig array
 
Last edited:

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
So from what I can tell I am losing 12 gig of total capacity in a 36 gig array
Gig!?

You have 8 drives, so I'm not sure how you'd be at 36TB raw total space. Your total usable space will be: (Raw Space) - (Parity Space) - (Checksum space/overhead) - 20%. In this case, parity space will be 2x your drive space. Checksum/overhead space isn't that large, but it will cut you back some. Also, don't forget the decimal (marketing) to binary conversion on the HDD capacity.

There is a handy dandy ZFS HDD space calculator that will do all the calculations and give you a more exact figure.
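To make that formula concrete, here is a back-of-envelope sketch in Python for the 8 × 6TB RAIDZ2 case (my own approximation: it ignores checksum/metadata overhead and RAIDZ padding, so the calculator's figures will differ somewhat):

```python
# Rough RAIDZ2 usable-space estimate for 8 x 6 TB drives.
# Approximation only: ignores ZFS checksum/metadata overhead and
# RAIDZ allocation padding, so the space calculator will report
# slightly different numbers.

TB = 10**12   # marketing (decimal) terabyte, in bytes
TIB = 2**40   # binary tebibyte, in bytes

drives = 8
drive_size_tib = 6 * TB / TIB              # a "6 TB" drive is ~5.46 TiB

raw_tib = drives * drive_size_tib          # total raw space
data_tib = raw_tib * (drives - 2) / drives # RAIDZ2: 2 drives of parity
usable_tib = data_tib * 0.80               # keep 20% free (CoW headroom)

print(f"raw:    {raw_tib:6.2f} TiB")
print(f"data:   {data_tib:6.2f} TiB")
print(f"usable: {usable_tib:6.2f} TiB")
```

The decimal-to-binary conversion alone costs about 9% of the advertised capacity before parity or free-space headroom even enter the picture.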
 

Antairus

Dabbler
Joined
Dec 7, 2017
Messages
24
Gig!?

You have 8 drives, so I'm not sure how you'd be at 36TB raw total space. Your total usable space will be: (Raw Space) - (Parity Space) - (Checksum space/overhead) - 20%. In this case, parity space will be 2x your drive space. Checksum/overhead space isn't that large, but it will cut you back some. Also, don't forget the decimal (marketing) to binary conversion on the HDD capacity.

There is a handy dandy ZFS HDD space calculator that will do all the calculations and give you a more exact figure.


Yes, TB. So I have 8 6TB drives, so 48TB. Are you saying I only lose 20% of this total space if I use it as an SMB/CIFS share?
Does a RAIDZ2 not work like a RAID 6, where I lose 2 drives off the top?
This is where I am totally confused. I did not read anything that went into depth on why or where I lose the space.
I did read something on iSCSI because it would become fragmented, but I am not using iSCSI.
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
Are you saying I only lose 20% of this total space if I use it as an SMB/CIFS share?
This is not an SMB limitation. It's not even really a ZFS limitation. First and foremost, it's a Copy on Write (CoW) Filesystem issue. If you don't know, copy-on-write means that, when you make changes to a file, a brand-new copy of the entire file is written to a new location on the disk. As you use more and more of the space, there is a growing, non-zero probability that you will have to fragment your file in order to write it (because there is not enough contiguous free space for the file). Fragmentation is an issue because it requires more IOPS to read a file. As utilization climbs through 50%, the probability of fragmentation starts increasing faster. As it climbs through 80%, the probability of fragmentation skyrockets.

With ZFS, which is a CoW filesystem, you want to keep your utilization below 80%. When you use iSCSI (or any block storage) with a CoW filesystem, you want to keep your utilization below 50%, because the filesystem is no longer aware of the actual files, and effectively sees the block storage as one big file.
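The effect can be illustrated with a toy model (my own sketch, not the real ZFS allocator): fill a block map to increasing utilization with randomly placed writes, standing in for CoW scattering, and watch the largest contiguous free run collapse:

```python
# Toy illustration of why CoW utilization matters: as randomly
# placed blocks fill the map, the largest contiguous free run
# shrinks sharply. This is NOT the real ZFS allocator, just a sketch.
import random

def largest_free_run(blocks):
    """Length of the longest contiguous run of free (False) blocks."""
    best = cur = 0
    for used in blocks:
        cur = 0 if used else cur + 1
        best = max(best, cur)
    return best

random.seed(42)
N = 10_000
blocks = [False] * N
used = 0

runs = {}
for target in (0.50, 0.80, 0.95):
    while used < int(N * target):
        i = random.randrange(N)
        if not blocks[i]:   # random placement stands in for CoW scatter
            blocks[i] = True
            used += 1
    runs[target] = largest_free_run(blocks)
    print(f"{target:.0%} full -> largest contiguous free run: {runs[target]} blocks")
```

Past roughly 80% full, the biggest free runs are only a handful of blocks, so any large write has no choice but to fragment, and that is exactly the IOPS penalty described above.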

Does a RAIDZ2 not work like a RAID 6, where I lose 2 drives off the top?
RAIDZ2 is double parity, just like RAID 6, so you lose two drives' worth of capacity. Which is why I said: "parity space will be 2x your drive space".

This is where I am totally confused. I did not read anything that went into depth on why or where I lose the space.
There's nothing particularly different about ZFS versus any other system. The one major difference is checksumming, which will only use a few hundred GB, depending on the total array size. In the "Resources" section above, there is a "ZFS RAID Size Calculator" to help with the calculations.
 

Antairus

Dabbler
Joined
Dec 7, 2017
Messages
24
This is not an SMB limitation. It's not even really a ZFS limitation. First and foremost, it's a Copy on Write (CoW) Filesystem issue. If you don't know, copy-on-write means that, when you make changes to a file, a brand-new copy of the entire file is written to a new location on the disk. As you use more and more of the space, there is a growing, non-zero probability that you will have to fragment your file in order to write it (because there is not enough contiguous free space for the file). Fragmentation is an issue because it requires more IOPS to read a file. As utilization climbs through 50%, the probability of fragmentation starts increasing faster. As it climbs through 80%, the probability of fragmentation skyrockets.

With ZFS, which is a CoW filesystem, you want to keep your utilization below 80%. When you use iSCSI (or any block storage) with a CoW filesystem, you want to keep your utilization below 50%, because the filesystem is no longer aware of the actual files, and effectively sees the block storage as one big file.


RAIDZ2 is double parity, just like RAID 6, so you lose two drives' worth of capacity. Which is why I said: "parity space will be 2x your drive space".


There's nothing particularly different about ZFS versus any other system. The one major difference is checksumming, which will only use a few hundred GB, depending on the total array size. In the "Resources" section above, there is a "ZFS RAID Size Calculator" to help with the calculations.


Yes I read about CoW and how it worked.
So the calc is showing me 34.37 TiB usable space. Out of this space, can I only use 80% before I take a huge performance hit?
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
If you are using the calculator from the resources, you should see a line "Usable data space". This factors in the minimum recommended empty space (hence the "Minimum recommended free space" line above it). Assuming that this is the line you are looking at, then you get 100% of it.

However, it's not like something magical happens at 80% that causes everything to go wrong. There is a continuum of performance hits, and for typical workloads, 80% represents something of an inflection point. However, for your workload, 75% might be closer to your inflection point. And furthermore, 80% may represent unacceptable performance for your workload (inflection point or not). You may need 60%. Or you might get away with 85%.

Lastly, don't forget to factor in your expected data growth. If you plan on using snapshots, make sure to factor in that space.
 

Antairus

Dabbler
Joined
Dec 7, 2017
Messages
24
If you are using the calculator from the resources, you should see a line "Usable data space". This factors in the minimum recommended empty space (hence the "Minimum recommended free space" line above it). Assuming that this is the line you are looking at, then you get 100% of it.

However, it's not like something magical happens at 80% that causes everything to go wrong. There is a continuum of performance hits, and for typical workloads, 80% represents something of an inflection point. However, for your workload, 75% might be closer to your inflection point. And furthermore, 80% may represent unacceptable performance for your workload (inflection point or not). You may need 60%. Or you might get away with 85%.

Lastly, don't forget to factor in your expected data growth. If you plan on using snapshots, make sure to factor in that space.


Thank you for taking the time to explain it to me. I understand it better now.
I was afraid that at 80% I would encounter some kind of catastrophic failure.
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
Something you want to watch for is pool fragmentation. If your pool starts fragmenting like crazy, that's an indicator that you're "too full" and will face heavy performance penalties.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
ok so my question is, can I only fill that ZV-Archive to 80%

Well, you *really* don't want to exceed 90%. FreeNAS will start throwing warnings at 80%, and my advice is that's when you need to start thinking about capacity expansion.

After 90% there is a performance cliff as ZFS switches to a much slower block finding algorithm to make maximum use of remaining free blocks.

The 50% usage guideline with iSCSI is about keeping free space unfragmented so reads and writes stay fast. The fuller the array, the slower it is (for various reasons; for example, the last part of a disk is slower than the first part).
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
when you make changes to a file, a brand-new copy of the entire file is written to a new location on the disk

Only the blocks that changed, not the whole file.
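A toy sketch of what that means at the block level (an illustration only, not the actual ZFS on-disk format): treat a "file" as a list of block pointers, and rewriting one block allocates a new block for just that one:

```python
# Toy block-level copy-on-write: a "file" is a list of pointers into
# a block store. Rewriting one block allocates a new block for that
# block only; the other pointers are untouched. Illustration only,
# not the actual ZFS on-disk structures.

class CowStore:
    def __init__(self):
        self.blocks = []                # append-only block store

    def alloc(self, data):
        self.blocks.append(data)
        return len(self.blocks) - 1     # "disk address" of the new block

    def write_file(self, chunks):
        return [self.alloc(c) for c in chunks]

    def rewrite_block(self, pointers, index, data):
        new = list(pointers)            # new block-pointer list (CoW)
        new[index] = self.alloc(data)   # only the changed block moves
        return new

store = CowStore()
v1 = store.write_file(["aaa", "bbb", "ccc"])  # file version 1
v2 = store.rewrite_block(v1, 1, "BBB")        # change the middle block

print("v1 pointers:", v1)   # old version still resolves to the old data
print("v2 pointers:", v2)   # shares unchanged blocks 0 and 2 with v1
```

Note that the old pointer list still resolves to the original data, which is also why ZFS snapshots are cheap: a snapshot is essentially a retained set of block pointers.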
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
FWIW, it sounds like an 8-way RAIDZ2 pool with an SMB share is what you want. It should be fine. No SLOG, no L2ARC, no iSCSI.

Set up SMART tests, burn in the drives, etc.

If you have performance issues later, then solve them after analyzing them.

You may want to look at my Build threads (see signature)
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633