Adding to FreeNAS

tfran1990

Patron
Joined
Oct 18, 2017
Messages
294
My current setup consists of 5 2TB discs in ZFS2, soon to be on a 9211-8i.
I have read a few of the PDFs on the forum explaining what ZFS is and its parts, and I have a couple of questions.
OK, so what I currently have is (correct me if I'm wrong):
NAS (5.6TB) is my pool
Inside of that is nas, and that is my vdev?
ftp, backup and share are datasets?

I can't add discs to my pool; I want to make another pool for the purpose of having it serve FTP, but with a hard cap. With a cap on the ftp dataset, my camera (within the camera settings) will overwrite video starting from the oldest. Right now I have to go in and delete video every couple of months. It's not a big deal because my FreeNAS is empty.

It's not possible to go back and put a hard cap on the ftp dataset, is it?
If I make a new pool for this ftp dataset, whatever happens to it will not affect my main pool (NAS)?
If I need to pull data from it, I would mostly do it right away, so it wouldn't need to have zfs3 protection. What other options do I have besides zfs1/2/3 (striped discs)?

Any suggestions would help.
 

garm

Wizard
Joined
Aug 19, 2017
Messages
1,556
Actually, you are incorrect in most of your conclusions. You won't "see" vdevs unless you display the pool topology with zpool status. Vdevs are the virtual devices that underlie your pool. You then create datasets (file systems) on that pool of virtual devices. You can add more virtual devices to a pool at any time, but you can't remove them.
A dataset can have a quota set, changed, and unset at any time; there is no need to create a new pool for that.
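For reference, a minimal command-line sketch (the NAS/ftp dataset name is only an assumption based on your description; in the FreeNAS GUI the same setting is the Quota field when editing the dataset):
Code:
# put a 500 GiB hard cap on the ftp dataset
zfs set quota=500G NAS/ftp

# check the current value
zfs get quota NAS/ftp

# remove the cap again later
zfs inherit quota NAS/ftp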
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
discs in ZFS2
RAIDz2.
NAS (5.6TB) is my pool
Inside of that is nas, and that is my vdev?
ftp, backup and share are datasets?
How can we tell you what you have?
I can't add discs to my pool; I want to make another pool for the purpose of having it serve FTP, but with a hard cap.
You can add another vdev to your pool to expand capacity. You can create a dataset in your pool that has a hard cap on the amount of data that can be placed in the dataset.
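Strictly as an illustration (the device names here are placeholders, and in FreeNAS you would normally extend the pool through the GUI's pool manager rather than at the shell):
Code:
# hypothetical: grow the existing pool by adding a second five-disk RAIDz2 vdev
zpool add NAS raidz2 da5 da6 da7 da8 da9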
It's not possible to go back and put a hard cap on the ftp dataset, is it?
Why not? It is called a Quota and you can edit that property at any time.
The manual is your friend. Please look at it a lot more.
https://www.ixsystems.com/documentation/freenas/11.2/storage.html#pools
If I make a new pool for this ftp dataset, whatever happens to it will not affect my main pool (NAS)?
If you do make a separate pool (I don't think you need to, but if you do), each pool is fully independent of the other.
so it wouldn't need to have zfs3 protection
You need to review these documents again and try to get a better understanding:

Slideshow explaining VDev, zpool, ZIL and L2ARC
https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Terminology and Abbreviations Primer
https://forums.freenas.org/index.php?threads/terminology-and-abbreviations-primer.28174/

Why not to use RAID-5 or RAIDz1
https://www.zdnet.com/article/why-raid-5-stops-working-in-2009/
 

tfran1990

Patron
Joined
Oct 18, 2017
Messages
294
Thank you for all the useful information.

If a vdev is a virtual disc, then when I run the command zpool status, gptid/123e272d-bebf-11e7-bfa5-e0071bfffaff is an ID that is used by the file system to "tag" which disc is where?
A ZFS pool is always a stripe of vdevs, so if you lose a weak vdev the whole pool is gone?
If this is true, is it best practice to not have a vdev of 2 discs alongside a vdev of 10 discs in the same pool?

Again, correct me if I'm wrong.



Code:
  pool: NAS
 state: ONLINE
  scan: scrub repaired 0 in 0 days 01:44:15 with 0 errors on Mon Apr  1 01:44:15 2019
config:

        NAME                                            STATE     READ WRITE CKSUM
        NAS                                             ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/123e272d-bebf-11e7-bfa5-e0071bfffaff  ONLINE       0     0     0
            gptid/12e49b36-bebf-11e7-bfa5-e0071bfffaff  ONLINE       0     0     0
            gptid/138b3d06-bebf-11e7-bfa5-e0071bfffaff  ONLINE       0     0     0
            gptid/142f426e-bebf-11e7-bfa5-e0071bfffaff  ONLINE       0     0     0
            gptid/14d3676e-bebf-11e7-bfa5-e0071bfffaff  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:24 with 0 errors on Sun Mar 31 03:45:26 2019
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            da0p2     ONLINE       0     0     0
            da1p2     ONLINE       0     0     0

errors: No known data errors
 

anmnz

Patron
Joined
Feb 17, 2018
Messages
286
A ZFS pool is always a stripe of vdevs, so if you lose a weak vdev the whole pool is gone?
Yes, exactly.

If this is true, is it best practice to not have a vdev of 2 discs alongside a vdev of 10 discs in the same pool?
Yes, exactly.

The pool is, roughly, only as resilient as its least resilient vdev and only as performant as its least performant vdev. So as a rule of thumb, you want all vdevs in the pool to be similar, because if a vdev is different from the others that cannot help and can only hurt.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
A ZFS pool is always a stripe of vdevs, so if you lose a weak vdev the whole pool is gone?
If this is true, is it best practice to not have a vdev of 2 discs alongside a vdev of 10 discs in the same pool?
I am sure that was covered in at least one of the documents I linked you to. The recommended best practice is to decide what kind of pool you are going to need based on performance, capacity, IOPS, etc., and keep the same redundancy level for all vdevs, whether those vdevs are part of the pool at initial creation or added later. So all vdevs should be co-equal; there should not be 'strong' or 'weak' vdevs. You would not make one vdev RAIDz3 and another vdev RAIDz1 in the same pool, and in the same way, you would not create one vdev as RAIDz2 and then add another vdev that is a mirrored pair of disks (a mirror vdev), because the RAIDz2 vdev can survive two disk failures where the mirror can only survive one. Not only would that create an undesirable weakness, it would also create a difference in performance between the two vdevs. You want all vdevs to have similar performance characteristics, because any slow vdev will slow down the entire pool. For the same reason, you would also not make one vdev of four disks and another vdev of six disks; they wouldn't be equal.

In the zpool status of your system, you see that there is a pool named NAS (not very creative or useful as names go), and the elements listed by gptid in that pool are the names of the ZFS partitions on the physical disks. The pool in FreeNAS is built from partitions referenced by gptid. This is considered more reliable because the gptid of a partition does not change under normal circumstances, whereas the ada# or da# of a drive can change under fairly common circumstances, and those numbers do not always correspond to the physical port the drive is connected to.
1554658055511.png

The highlighted element there (raidz2-0) is the vdev ID. If you had another vdev, it might be named raidz2-1 (also not creative), and each vdev should be roughly equal. Several years ago I created a vdev of five disks and later added a second vdev of four disks (both vdevs at RAIDz1); that does work, but it is not ideal. I did it with 1TB drives because that is what I had to work with at the time. Later I built a better system. There are some things ZFS will let you do that FreeNAS will try to prevent because they are not best practices, but FreeNAS will not stop you from everything that is not a good idea.
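As an example of the kind of guardrail I mean (device names are placeholders, and the exact error wording varies by ZFS version), plain zpool will complain if you try to mix replication levels:
Code:
# hypothetical: trying to add a two-disk mirror vdev to a RAIDz2 pool
zpool add NAS mirror da5 da6
# zpool normally refuses with a "mismatched replication level" complaint
# and will only proceed if you force it:
zpool add -f NAS mirror da5 da6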
1554658487972.png

In your boot pool (creatively called freenas-boot), you see that the vdev is a mirror and there is only one vdev, named "mirror-0". Inside that vdev the partitions are named by device and number: da0p2, for example, means da0 (disk 0), p2 (partition 2). This likely indicates that you are using USB memory sticks for your boot drives, because SATA disks show up as ada# while SAS-attached disks and USB devices show up as da#, and I bet you are not booting from a SAS controller. I don't know why, but a lot of systems have been built this way from the installation media.
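If you ever want to map those gptid labels back to the underlying device names, glabel can show you (the ada0p2 mapping below is only an illustration; your system will list its own components):
Code:
glabel status
                                      Name  Status  Components
gptid/123e272d-bebf-11e7-bfa5-e0071bfffaff     N/A  ada0p2
...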
 

tfran1990

Patron
Joined
Oct 18, 2017
Messages
294
I am sure that was covered in at least one of the documents I linked you to.

Yes, I am reading them and they have been very helpful! Again, thank you very much.
I just want to make sure that what I am reading, and the way I am processing it, is actually correct.

I did not know those details about ada# and da#, or that they could change. When I start using my LSI 9211, the tag/name (I did not see the correct term for it in the documentation) will change.

The details in your examples are very helpful to get a more solid understanding.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
The details in your examples are very helpful to get a more solid understanding.
I hope this will enhance your understanding. In a single ZFS storage pool, you can have many vdevs, and each vdev should be made of an equal number of drives at the same level of redundancy.
For example, I have a pool at work that is made of ten vdevs of six drives each, with each vdev being at RAIDz2.
Here is what that looks like:
Code:
zpool status

  pool: Pogo-60x10TB
state: ONLINE
  scan: scrub repaired 0 in 0 days 17:00:17 with 0 errors on Sat Jan 19 00:26:25 2019
config:

        NAME                                            STATE     READ WRITE CKSUM
        Pogo-60x10TB                                    ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/4b380e51-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/4c2a3fa0-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/4d316c90-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/4e279c43-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/4f3b17f7-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/5048ec53-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/51a5cb98-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/52b0c4f5-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/53c02fc8-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/54b2adad-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/55c0f47a-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/56c14c23-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
          raidz2-2                                      ONLINE       0     0     0
            gptid/584f5745-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/5956d7f8-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/5a5a6070-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/5b564fc1-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/5c630294-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/5d6a2431-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
          raidz2-3                                      ONLINE       0     0     0
            gptid/5ef1faf6-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/5ffb8961-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/6105cc1d-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/62120300-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/631e28a1-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/642dd1ea-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
          raidz2-4                                      ONLINE       0     0     0
            gptid/65d9d859-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/66de6408-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/67ee6f2c-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/6905e892-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/6a1e7078-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/6b2a5922-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
          raidz2-5                                      ONLINE       0     0     0
            gptid/6cf142e9-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/6e039cc3-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/6f1bdf1d-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/7026b5ed-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/712a87a1-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/723bc10e-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
          raidz2-6                                      ONLINE       0     0     0
            gptid/7416f42f-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/7531a5f8-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/76428144-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/7760f346-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/7866427c-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/798d143e-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
          raidz2-7                                      ONLINE       0     0     0
            gptid/7b875ee7-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/7c9c17f7-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/7db105ca-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/7ed28e84-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/7ff52ad1-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/80faf9ba-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
          raidz2-8                                      ONLINE       0     0     0
            gptid/831d73e2-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/843431a1-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/85534614-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/866f8059-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/8791862e-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/88abe418-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
          raidz2-9                                      ONLINE       0     0     0
            gptid/8ad8a1e1-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/8bf8c94e-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/8d14e578-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/8e2c0e1e-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/8f4586ee-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
            gptid/905d5231-f24d-11e8-bd4a-ac1f6b418926  ONLINE       0     0     0
        logs
          gptid/df8c688c-1b50-11e9-bd4a-ac1f6b418926    ONLINE       0     0     0
        cache
          gptid/e1f09019-1b50-11e9-bd4a-ac1f6b418926    ONLINE       0     0     0

errors: No known data errors
 

tfran1990

Patron
Joined
Oct 18, 2017
Messages
294
When you replace a disc in a pool of that size, is the resilvering process less intense on the other 59 discs, or do all discs run at 100% during the process?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
When you replace a disc in a pool of that size, is the resilvering process less intense on the other 59 discs, or do all discs run at 100% during the process?

Because redundancy comes from the vdevs, the impact of resilvering is also limited to the vdev that needs to rebuild a drive - in the case of the 60-drive pool, if one drive in a 6-drive vdev fails, the other five would receive an increased workload during the rebuild, and the remaining 54 drives would be unaffected.
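At the command level, a replacement is just a zpool replace (shown generically here; in FreeNAS you would normally do it from the pool status page in the GUI, which handles the partitioning and gptid label for you):
Code:
zpool replace <pool> <failed-disk-gptid> <new-disk>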
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
in the case of the 60-drive pool, if one drive in a 6-drive vdev fails, the other five would receive an increased workload during the rebuild, and the remaining 54 drives would be unaffected.
Very close.
@tfran1990 , the other 54 drives have a very low activity level, say 5 to 10%. If you call the disk being replaced (or close to) 100%, the other drives in the vdev with it will be around 30 to 35% in a 6-drive vdev. Exactly as @HoneyBadger said, the vdev with the disk being replaced is where the work is mainly done, but the other drives only need to provide data at the rate the drive being resilvered can accept it, so the speed-limiting factor is the write to the disk being resilvered, and that means the other disks in the vdev don't have to work hard. The one disk that is intensely used during a resilver is the 'new' disk that is receiving data. ALL the other disks are just lying around by the pool, catching a tan, sipping a cool drink... That one new disk is digging a ditch and working as hard as possible.
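If you ever want to watch this for yourself during a resilver, something like the following (using your pool name) will show per-vdev and per-disk activity refreshed every five seconds, and zpool status will show the resilver progress:
Code:
zpool iostat -v NAS 5
zpool status NAS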
 

tfran1990

Patron
Joined
Oct 18, 2017
Messages
294
Very close.
@tfran1990 , the other 54 drives have a very low activity level, say 5 to 10%. If you call the disk being replaced (or close to) 100%, the other drives in the vdev with it will be around 30 to 35% in a 6-drive vdev. Exactly as @HoneyBadger said, the vdev with the disk being replaced is where the work is mainly done, but the other drives only need to provide data at the rate the drive being resilvered can accept it, so the speed-limiting factor is the write to the disk being resilvered, and that means the other disks in the vdev don't have to work hard. The one disk that is intensely used during a resilver is the 'new' disk that is receiving data. ALL the other disks are just lying around by the pool, catching a tan, sipping a cool drink... That one new disk is digging a ditch and working as hard as possible.

The reason RAIDz1 is a bad idea is that one drive (other than the one being rebuilt) could fail during a resilver. Based on what you said, the drive being rebuilt has a higher chance of failing than the others running at 30-35%. So in that case, a 60x10TB pool is bulletproof as far as drive failure goes?

The resilvering load on a vdev with 5 discs would also be low-ish, because the 4 drives don't have to work too hard to read faster than the rebuilding disc can write.

This is useful info about drive resilvering that I have not seen on the forum (I don't read every post, so I might have missed it). I was under the impression that the discs being read during a resilver were running full steam ahead. I saw somewhere on this forum a user talking about a 6% chance per 1TB of a drive failing during the resilver process. If that is true, would that pertain to the drive being rebuilt or the drives helping rebuild it?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
So in that case, a 60x10TB pool is bulletproof as far as drive failure goes?
I would not say it is bulletproof. I have seen two drives in the same vdev fail at the same time, which leaves no redundancy but does not cause data loss. On that system we were able to replace both failed drives at the same time to restore full redundancy, but it was a little worrying. We have a full backup, but nobody wants to need to restore from backup; it takes 18 days to make a full backup of one of the servers at work that has around 350TB of data in it, and just updating the changes takes almost an hour. Anyhow, two drives in the same vdev failing at once is not a common situation, and each vdev is a separate failure domain, so you could have a drive fail in each vdev and not even worry. I have a server right now (at work) that has three bad drives in different vdevs and it isn't even worrying me.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
The resilvering load on a vdev with 5 discs would also be low-ish, because the 4 drives don't have to work too hard to read faster than the rebuilding disc can write.
True.
This is useful info about drive resilvering that I have not seen on the forum (I don't read every post, so I might have missed it). I was under the impression that the discs being read during a resilver were running full steam ahead. I saw somewhere on this forum a user talking about a 6% chance per 1TB of a drive failing during the resilver process. If that is true, would that pertain to the drive being rebuilt or the drives helping rebuild it?
That number is about the unrecoverable read error (URE) rate, which is usually expressed in hard drive documentation as something like 1 in 10^14 bits read, and that is the reason we don't use RAIDz1 (the RAID-5 equivalent) any more unless the data is not important or you are willing to take some chances with it. There are many opinions and articles on UREs; here is a link to one: https://news.ycombinator.com/item?id=8306499
ZFS is a bit more robust than some other file systems and much better than hardware RAID, which is what most of those articles are actually about. Even on the forum there is information I don't entirely agree with, and the quality of drives appears to have gotten better over the last few years.
I always try to keep in mind that hard drives are not my friends. They are out to get me, and I need to take precautions to protect my data, because each individual drive is just a ticking time bomb waiting for its chance to ruin my day.
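As a rough back-of-the-envelope, taking the 1-in-10^14 spec at face value (real drives usually do better) and using your five 2TB disks as the example:
Code:
1 URE per 10^14 bits read  =  1 URE per ~12.5 TB read
Rebuilding a RAIDz1 vdev of five 2 TB disks reads ~8 TB from the four survivors
8 TB  =  6.4 x 10^13 bits  ->  ~0.64 expected UREs during the rebuild
That is why the spec-sheet math makes RAIDz1 look so risky on large drives, although with ZFS a single URE during a rebuild usually means losing the affected blocks rather than the whole pool.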
 

tfran1990

Patron
Joined
Oct 18, 2017
Messages
294
True.

That number is about the unrecoverable read error (URE) rate, which is usually expressed in hard drive documentation as something like 1 in 10^14 bits read, and that is the reason we don't use RAIDz1 (the RAID-5 equivalent) any more unless the data is not important or you are willing to take some chances with it. There are many opinions and articles on UREs; here is a link to one: https://news.ycombinator.com/item?id=8306499
ZFS is a bit more robust than some other file systems and much better than hardware RAID, which is what most of those articles are actually about. Even on the forum there is information I don't entirely agree with, and the quality of drives appears to have gotten better over the last few years.
I always try to keep in mind that hard drives are not my friends. They are out to get me, and I need to take precautions to protect my data, because each individual drive is just a ticking time bomb waiting for its chance to ruin my day.


Is a URE a spot on the HDD that lost its magnetic properties?

So the whole pool will fail if you have 1 URE when rebuilding, or is it 1 URE per disc? E.g., a RAIDz2 has a failed HDD and 1 other disc has a URE, so the vdev is still safe?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
So the whole pool will fail if you have 1 URE when rebuilding
If you were running RAIDz1 and had a failed disk that you were replacing, you would be running with no redundancy while that disk is out, and having another disk fail could cause a pool failure. This is the fear that makes people use RAIDz2 instead. With RAIDz2, if you are replacing a failed disk, you still have another disk of redundancy, so an additional disk failure will not cause the loss of the pool. If you are really paranoid, you might even run RAIDz3, where you can have as many as three disk failures and survive.
Is a URE a spot on the HDD that lost its magnetic properties?
There can be many different causes for a URE; it is simply a read error the disk is not able to recover from.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
I happen to be resilvering a disk in one of the servers I manage, so I took a couple of images from the GUI for example's sake.

First, this is the activity on a disk in the pool that is not in the vdev where a disk is being resilvered:
Activity on other disks in the pool where a resilver is taking place.PNG

Notice there is a big spike of activity at the start, while the pool is providing metadata to the system, then just little spikes of activity.

Next, this is the activity, at the same time, on a pool disk in the vdev with the disk being resilvered:
Activity on a disk in a vdev where a resilver is taking place.PNG

Now, these graphs auto-scale, so look at the scale; the height of the peaks is relative.

Then we have the activity of the new disk that is getting data loaded in, the one being resilvered:
Activity on the disk in a vdev that is being resilvered.PNG

This is fairly common from what I have seen, but after it gets going, the read load on the source disks settles a bit.

This pool is also handling user data requests.
I also grabbed an image showing the CPU utilization, but it is not as representative as it might be because I am not using LZ4 compression; I am using gzip-9, which is more CPU intensive.
CPU Activity while a resilver is taking place.PNG


Here is the reason for the gzip-9 compression... the dataset has a 2.46 compression ratio:
compression redacted.PNG
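If you want to see where your own datasets stand before experimenting (the dataset names below are just your examples, and changing the compression setting only affects newly written data):
Code:
# show the compression setting and achieved ratio for every dataset in the pool
zfs get -r compression,compressratio NAS

# switch one dataset to heavier compression
zfs set compression=gzip-9 NAS/backup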
 

tfran1990

Patron
Joined
Oct 18, 2017
Messages
294
Wow, thank you for providing a visual.
The next question I was going to ask was about CPU load during a resilver; it looks to be very minimal.
I need to read up on how compression works. Finding the right compression helps min-max when trying to get the most from your FreeNAS. I think I use the (inherited) LZ4 default on all my datasets. I have the HP ML10, and the CPU usage is never any more than 5%; if I could compress files tighter to use less space at the cost of 15-20% CPU usage, it would be worth it to me.
 