Slow Read and Write Speeds On Server Build.

Ronanmck

Cadet
Joined
May 29, 2022
Messages
4
Hi, I built my first FreeNAS server a year ago. I upgraded my PC, so I used my old components plus some new drives for the build. I am a photographer and videographer, so I shoot a lot of content and need a reliable backup for clients' information. My current setup is great for basic file storage; however, I am getting very sporadic and slow read and write speeds, which means I can't actually edit directly from the server. Instead, I keep files on a local SSD to work off and transfer back and forth to the server. I get a maximum write speed of about 112MB/s, but it can drop as low as 20MB/s during a transfer.

My current setup totals 20TB and I filled it in a year... so it's time to upgrade. I have just ordered 7x 16TB WD Red Pro drives, but before I put them in I would like to get my speeds fixed. I am thinking it may be due to having too little RAM, meaning none is available as a cache; however, I am also reading that I could install an M.2 drive to use as a cache to help speed up write speeds. I am very new to this world and any advice would be appreciated.

Thank You
  • Asus MAXIMUS VIII HERO
  • Intel(R) Core(TM) i5-6600K CPU @ 3.50GHz
  • 8GB DDR4
  • 7x WD Red 4TB in RAIDZ1 & 128GB SSD boot drive
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
RAM is indeed low, way low. Try 16, 32 or preferably 64GB. ZFS loves RAM.
CPU is fine.
Check the drives - are they SMR or CMR? Given that they are WD Red 4TB, they might be SMR. What model numbers are they? This may well explain the slowdown.
Network card is 1Gb (Intel), so it should be OK. That would match your 112MB/s: gigabit is ~125MB/s theoretical, roughly 110-115MB/s in practice after protocol overhead.

Cache (L2ARC) will not help - it doesn't do what you think it does, it only affects reads (not writes), and you don't have enough memory for L2ARC anyway.

As an aside, are you going to build a new pool and then transfer to it, or upgrade your existing pool? If a new pool, then please consider Z2 rather than Z1 for data safety, as the rebuild time for a degraded pool of 16TB disks is too long. If the disks you currently have are SMR, then you do not want to do an in-place upgrade due to SMR problems.
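
If it helps, the exact model numbers can be read straight from the TrueNAS shell. A rough sketch, assuming a CORE/FreeBSD system where the disks show up as ada0, ada1 and so on (on SCALE/Linux they appear as sdX instead):

  # List all attached disks with their model strings
  camcontrol devlist

  # Or query one drive's identity and SMART info
  smartctl -i /dev/ada0

Then compare the reported model number against WD's published CMR/SMR lists.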
 
Last edited:

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
I get a maximum write speed of about 112MB/s, but it can drop as low as 20MB/s during a transfer.
You provided a clue to what might be happening yourself:
My current setup totals 20TB and I filled it in a year
What's the pool utilization?
It might seem odd, but ZFS and TrueNAS warn at utilization of 80%, 90% and possibly 95%.
This is not only to "remind the user"; it also means that ZFS starts managing additional writes differently, in ways that, simply put, cause a slower experience.
I've pushed a pool upwards of 90% and it would crawl at a speed equivalent to 1/10th of a single drive.
Your speed dropping into the 20s would not seem strange to me if pool utilization is >80%.
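
A quick way to check from the shell, as a minimal sketch - "tank" below stands in for whatever your pool is actually called:

  # The CAP column shows how full the pool is; FRAG shows fragmentation
  zpool list

  # Or just the capacity percentage of a single pool
  zpool get capacity tank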

As has become somewhat of a usual suspect these last few days, I agree with @NugentS on the overall analysis.
- For your new setup, you must look to increase RAM. A minimum of 32GB is my opinion.
- Definitely check the drives; there are threads on that topic on the forums. (WD40EFRX is fine, WD40EFAX for example is not. Double-check this and do some research on your particular make and model.)


Considering your new drive layout, I would personally do the following (a rough command-line sketch of these steps follows after the list):
1. Create a new pool using the new drives.
- RAIDZ2 - a 7-drive-wide vdev of new drives.
Then migrate the old data to the new pool (unless you can safely restore it from another source). You also have the option here to gain some "balancing" of the load across the drives, but if your data grows as quickly as it does, this will likely not matter much, since most new data - the data actively being worked on - will be split between the two vdevs.

2. Add your old drives to the new pool:
- RAIDZ2 - a 7-drive-wide vdev of the 4TB drives.

Having two vdevs will basically act "striped" in terms of performance. There is a lot to be gained here.
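
For illustration only, the equivalent command-line steps would look roughly like this. In practice you would do it all through the TrueNAS GUI (pool creation plus a replication task), and every pool, dataset and device name below is a placeholder:

  # 1. New pool: one 7-disk raidz2 vdev of the 16TB drives
  zpool create newpool raidz2 da0 da1 da2 da3 da4 da5 da6

  # Migrate the old data with a recursive snapshot and send/receive
  zfs snapshot -r oldpool/data@migrate
  zfs send -R oldpool/data@migrate | zfs recv -u newpool/data

  # 2. Later, stripe in the old 4TB drives as a second raidz2 vdev
  zpool add newpool raidz2 da7 da8 da9 da10 da11 da12 da13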

Cheers, Dice
 
Last edited:

Ronanmck

Cadet
Joined
May 29, 2022
Messages
4
Hi Dice & NugentS, Thank you both for taking the time to help here and for some very useful information.

Starting with NugentS's reply: regarding the drives, unfortunately it looks like I do indeed have SMR drives. This is a bit of a melt, but with the new drives on the way I suppose it isn't the worst. Thank you for pointing this out to me; it wasn't something I was aware of.

Thank you for confirming that the RAM shortage is an issue; I will pick up 64GB for the new setup. Does speed make a difference here - should I look at getting the fastest RAM possible, or just whatever is a good price?

You also commented on the network card: if I am looking to get the fastest speeds possible, do you think picking up a faster network card is worth it? I would rather do that than have a bottleneck in the card.

As for the L2ARC, would it be worthwhile doing this anyway? I suppose I view it this way: if I could get an improvement in read speed, then this might help me edit off the server instead of transferring to a local drive.

As for the new build, I am planning to transfer all of the data currently on the server onto a single local drive, then delete this pool, add the new drives, and proceed with a single new pool made of the 16TB drives. I had been using RAIDZ1 due to the small drives and wanting to maximize the amount of storage. I am open to using RAIDZ2 on the new build if you both recommend it. I had always viewed it as less of a concern, since I have a full server backup on Google Drive (I have an automated cloud backup set up), so should I lose more than one drive I could always rebuild with the data stored in the cloud.

Do you have any recommendations for a better way to handle building the new server? My current setup is maxed out at 7 drive bays, hence why I haven't planned on just adding the new drives and transferring the data. This also means I will not be using the current drives in the new setup, however, I wouldn't want to anyway now that I know they are SMR drives.


As for what Dice has mentioned: I have actually recently removed a lot of data from the server, so I am back down to 69% utilization, and the speeds I quoted were from that state. I was surprised that it warns about storage so soon, and I definitely saw the drop in speeds as a result of this.

Thank you for the information on the drives, these articles were really useful!

Thanks again for the help and I can't wait to hear more from you both on this!
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
RAM speed: buy to match the CPU and motherboard, so DDR4-2133. Do not overclock. If faster RAM is the same price then it won't do any harm - it just won't help.
The HDDs, being SMR, are not suitable for ZFS. The new ones won't be SMR, but it does mean you are not doing an in-place upgrade.
10Gb is the next sensible step up (2.5/5GbE is junk in my opinion). Get a server-grade card from Intel. You will probably require a switch as well. Remember your client(s) will also need to be 10Gb to gain anything from this.
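
Once the 10Gb link is in, it is worth testing the raw network path separately from the disks. iperf3 is included with recent TrueNAS releases (the IP below is a placeholder for your NAS address):

  # On the NAS:
  iperf3 -s

  # On the 10Gb client:
  iperf3 -c 192.168.1.10

If that reports somewhere near line rate (9+ Gb/s), any remaining slowness is on the pool side rather than the network.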

L2ARC - don't bother initially. After you have used the newer system for a while, look at the ARC hit rate. If it's 90%+, then you don't need L2ARC. If it's less, then come back here and ask about it. It also depends on the size of the files you are trying to edit - will they fit in RAM? How big are the individual files you work with?
Remember Z1/Z2 is about bulk storage, not IOPS. But if you can work in RAM (on the server) things should go quite quickly. 10Gb will help here, but RAM is the main bit.
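
For the ARC hit rate, the ZFS graphs under Reporting in the web UI show it over time; from the shell, something like this works (the tool is named arc_summary on current versions, arc_summary.py on some older ones):

  # Print ARC statistics, including the cache hit ratio
  arc_summary | grep -i "hit ratio"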
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
RAIDZ1/2 is a matter of personal opinion and safety. Z1 will be fine until things go wrong, and that's a lot of data to have to restore from the cloud. I would always use Z2 on disks of that size.
Remember you can't add a disk to a vdev - so if you use one of the new 16TB drives as a temp storage drive, you won't be able to use it for the pool.
Buy an extra 16TB disk and keep it as a cold spare.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Do you have any recommendations for a better way to handle building the new server? My current setup is maxed out at 7 drive bays, hence why I haven't planned on just adding the new drives and transferring the data. This also means I will not be using the current drives in the new setup, however, I wouldn't want to anyway now that I know they are SMR drives.
Fair point.
Beware that transferring everything on a single drive makes it a point of failure—and that a 16 TB drive may not be enough if you have filled 20 TB.

As for raidz1/raidz2, there's some math here.
It's good that you have an external backup (because "ZFS is not a backup"), but is Google Drive going to hold if your storage grows by 20 TB/year? Personally, I'd first make the main storage as resilient as practically possible.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Of course you could just build the new pool and spend the next x days (or is it weeks?) restoring from the backup and clogging up your internet connection - or, more sensibly, some combination of the two (back up as much as possible to a single 16TB drive, and restore the rest from the cloud).
:cool:

The OP has 69% of 6 x 4TB = 24TB, which is roughly 16.56TB. That's not fitting on a 16TB drive unless you can compress it quite a bit, and for photos/video remember it's probably already compressed with LZ4 as far as it can be.
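
If you are curious how much ZFS compression is actually gaining on the existing data, you can check the achieved ratio directly (the pool name below is a placeholder):

  # Compression setting and achieved ratio for the old pool
  zfs get compression,compressratio oldpool

For already-compressed photo and video formats the ratio usually stays close to 1.00x.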

Z2 is the way, even with an offsite backup
 

Ronanmck

Cadet
Joined
May 29, 2022
Messages
4
Hey Guys,

Thanks for all the help and sorry for the delay in responding. I have been pulling all the data off the server and splitting it across a single 16TB drive and a 6TB external HDD. I picked up 64GB of RAM and two 10Gb NICs. I think I am all ready to go, but I am wondering if there is anything specific I need to do in the process of rebuilding. Can I just power down, plug the new drives in, and then build a new pool, or do I need to clear the current one first? Is there any setup process for the NIC as well?
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
It is cleaner to "export" the old pool to remove it from the configuration, but otherwise it's just "plug in new drives and set up".
The new NIC may need some basic configuration if the NAS is supposed to be accessed from a static IP.
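
For reference, the export is normally done from the pool's options on the Storage page of the web UI; the underlying command is roughly this (pool name is a placeholder):

  # Cleanly detach the old pool from the system configuration
  zpool export oldpool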
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Just remember to come back and tell us how it went.
I assume that the 16TB drive you are using as a temp drive will become a cold spare (which is a good thing to have). So you are going with 6 drives in Z2.
At some point in the future you will be able to add the final 16TB to the pool for some extra space, once that feature (raidz expansion) lands. However, to use that properly, make sure that no single dataset is more than, say, 35% of the entire pool, and that you have 20% space available overall, IF you double the space of the single largest dataset for a brief period (and it is a very brief period).
[No harm in planning for the future]
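
An easy way to keep an eye on how big each dataset is relative to the pool (pool name is a placeholder):

  # Used and available space per dataset, recursively
  zfs list -o space -r tank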

Question - why have you got two 10Gb NICs, or is one for your client PC?
 

Ronanmck

Cadet
Joined
May 29, 2022
Messages
4
So I made the change today. I believe everything went mostly smoothly, although a few questions did arise. I installed the 64GB of RAM but left in the previous 8GB stick; the 64GB now shows up but the 8GB is not showing in the WebUI. Is there a reason for this?

I installed 7x 16TB drives; I have 8 drives in total and one will become a cold spare, as mentioned, after restoring the data. I am glad to hear they are working on a feature to add drives to a pool. I have created a single pool with the 7 drives in RAIDZ2, which has resulted in 65.7TB of storage - does this seem low to you? I was expecting 72TB. I also created the pool, dataset, and SMB share with mostly standard config settings; is there anything you think I should do specifically for my use case? I won't be populating it with data until I hear a few responses.

After some playing around I got the NIC up and running, and all seems to be working well. To answer the previous question, the second card was in fact for the client PC.

Overall I have to say the project is a big success. The photos attached show my read and write speeds from a quick test: 500MB/s write and 1.1GB/s read!! I will update the thread with more speed tests as I populate the server.
 

Attachments

  • PXL_20220624_125620216.jpg
  • PXL_20220624_144727082.jpg
  • PXL_20220624_144803369.jpg

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
The max memory your CPU will address is 64GB, so you might as well take the extra 8GB stick out.
Pool size sounds about right - TB != TiB. Roughly: with raidz2, 5 of the 7 drives hold data, and 5 x 16TB = 80TB, which is about 72.8TiB before ZFS metadata and raidz padding overhead.

All seems good to me - just use it.
 
Last edited: