Slow NFS/iSCSI Performance


Larry C

Cadet
Joined
Feb 3, 2017
Messages
6
Hello, I am new here. I have been digging into this for over a week and haven't gotten anywhere, so I was hoping someone could look this over and see if I am being stupid.

My setup
FreeNAS 9.10 STABLE
Core i3-4370
Supermicro MBD-X10SLH-F-O
32GB DDR3 ECC
Intel VT quad-port PCIe NIC
2x LSI 9210-8i, firmware 20
16GB Corsair flash drive for OS
10x 4TB Seagate 5900rpm
5x 2TB WD RE4 7200rpm
ProCurve V1920-24G - storage switch
ProCurve 1810-8G - vMotion switch
3Com managed 24G LAN switch

No SLOG/L2ARC drives

Network configurations I tested:
FreeNAS single 1Gb connection to ProCurve
FreeNAS 4x 1Gb LAGG to ProCurve
FreeNAS 2x 1Gb LAGG for NFS, 2x 1Gb LAGG for iSCSI

All clients are single 1Gb connections
ESXi servers have 2x 1Gb connections for storage
All NICs are Intel, single- or quad-port

-----------------------------------------------
NFS shares
1 - 5x 4TB - Plex storage - 12TB used, RAIDZ
2 - 5x 4TB - ESXi slow storage - 1TB used, RAIDZ

Tried both NFS and iSCSI:
3 - 5x 2TB - ESXi fast storage - 0% used, RAIDZ

All testing was done with the fast storage. I started out with it set up as NFS and then switched it to iSCSI; I would prefer to use NFS if I can get a decent speed out of it.

I also have a hardware RAID running on one of my ESXi hosts that I am using for testing.
All tests were measured with iostat.
If I copy from:
hardware RAID to the NFS fast storage - 14MB/s per drive
desktop to NFS using the ESXi datastore browser - 0.07MB/s per drive
I have tried a few other tests from various machines; I can never get higher than 15MB/s.
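
For reference, per-disk throughput like the numbers above can be watched live from the FreeNAS (FreeBSD) shell while a copy is running; a minimal sketch, exact flags may vary slightly by release:

# Extended device statistics, refreshed every second
iostat -x -w 1
# GEOM I/O statistics, physical providers (disks) only
gstat -p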

Other things I tried:
Turning off sync didn't make a difference.
Switching from NFS to iSCSI for fast storage didn't make a difference.


I can pick up some ZIL/SLOG drives, but based on everything I have been reading it doesn't seem like I should need them.

Any input is appreciated.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Welcome to the forum!

You might consider running the newest version of FreeNAS 9.x: 9.10.2-U1 (86c7ef5). I had NFS problems with 9.3 STABLE when I used it a couple of years ago.

Adding more memory would be beneficial, the docs state that "For iSCSI, install at least 16 GB of RAM if performance is not critical, or at least 32 GB of RAM if good performance is a requirement." So 32GB is the minimum recommended amount... and FreeNAS loves RAM. :)

How have you configured these pools? As RAIDZ2? Mirrors give the best performance, because they deliver more IOPS: IOPS scale with the number of vdevs, so the more vdevs the better, especially when it comes to providing block storage. A pool made up of 10 drives configured as 5 mirrored vdevs will deliver 5 times as many IOPS as the same ten drives in a 10-drive RAIDZ2 array. This is why mirrors are recommended over RAIDZ'n' when the goal is to provide ESXi datastores.
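
If it helps to double-check, the vdev layout of an existing pool is visible from the shell; a quick sketch, with 'tank' standing in for your pool name:

# Show whether the pool is built from raidz or mirror vdevs
zpool status tank
# Summary of size, free space, and health for all pools
zpool list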

Most (all?) users implementing virtual machine datastores turn synchronous writes off for the dataset and install a suitable SLOG device (Intel DC S3700 SATA SSD, Intel 750 NVMe SSD, Intel DC P3700 NVMe SSD) because performance is awful without one when sync writes are disabled. Here's some reading matter on this subject:

https://forums.freenas.org/index.php?threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/
http://nex7.blogspot.com/2013/04/zfs-intent-log.html

Are you doing anything out-of-the-ordinary with your network setup, such as jumbo frames? FreeNAS 9.10.2-U1 comes with iperf 2.0.5 installed. You can install the same version on client machines and test your network connection speeds.
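
A minimal iperf run between FreeNAS and a client might look like this (iperf 2.x syntax; the IP address is only a placeholder):

# On the FreeNAS box - start an iperf server
iperf -s
# On the client - 30-second test with 4 parallel streams
iperf -c 192.168.1.100 -t 30 -P 4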

Good luck!
 

Larry C

Cadet
Joined
Feb 3, 2017
Messages
6
Welcome to the forum!

You might consider running the newest version of FreeNAS 9.x: 9.10.2-U1 (86c7ef5). I had NFS problems with 9.3 STABLE when I used it a couple of years ago.

I am running the newest version of FreeNAS 9.10, fully updated.

Adding more memory would be beneficial, the docs state that "For iSCSI, install at least 16 GB of RAM if performance is not critical, or at least 32 GB of RAM if good performance is a requirement." So 32GB is the minimum recommended amount... and FreeNAS loves RAM. :)

I already have 32GB of RAM; the board is maxed out.

How have you configured these pools? As RAIDZ2? Mirrors give the best performance, because they deliver more IOPS: IOPS scale with the number of vdevs, so the more vdevs the better, especially when it comes to providing block storage. A pool made up of 10 drives configured as 5 mirrored vdevs will deliver 5 times as many IOPS as the same ten drives in a 10-drive RAIDZ2 array. This is why mirrors are recommended over RAIDZ'n' when the goal is to provide ESXi datastores.

I am using RAIDZ because each pool is 5 drives.

Most (all?) users implementing virtual machine datastores turn synchronous writes off for the dataset and install a suitable SLOG device (Intel DC S3700 SATA SSD, Intel 750 NVMe SSD, Intel DC P3700 NVMe SSD) because performance is awful without one when sync writes are disabled. Here's some reading matter on this subject:

https://forums.freenas.org/index.php?threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/
http://nex7.blogspot.com/2013/04/zfs-intent-log.html

Using NFS as general, non-datastore storage yields the same speed with sync on or off; it didn't make a difference. I can get a SLOG drive, though from reading through the stickies it appeared that I wouldn't need one.

Are you doing anything out-of-the-ordinary with your network setup, such as jumbo frames? FreeNAS 9.10.2-U1 comes with iperf 2.0.5 installed. You can install the same version on client machines and test your network connection speeds.

Only a LAGG, but testing without the LAGG yielded the same results.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
My apologies, @Larry C - I misunderstood which FN version you're using.

My point about mirrors is to suggest that you might be better served by a single pool of mirrored vdevs. Using 10 drives would give you 5 times the IOPS you're getting from your RAIDZ pool now. That would help... a lot. Because your 5-wide RAIDZ pool only delivers the IOPS of one drive; 1/5th the IOPS you'd get from a 10-drive mirrored pool.

With only 32GB of RAM an L2ARC is likely to adversely affect performance as it would use memory for overhead that will then be unavailable.

Here's some reading matter regarding NFS, iSCSI, and ESXi - there are many more similar discussions here you can find with the search feature:

https://forums.freenas.org/index.ph...xi-nfs-so-slow-and-why-is-iscsi-faster.12506/
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
Most (all?) users implementing virtual machine datastores turn synchronous writes off for the dataset and install a suitable SLOG device (Intel DC S3700 SATA SSD, Intel 750 NVMe SSD, Intel DC P3700 NVMe SSD) because performance is awful without one when sync writes are disabled.

You've got that backwards. sync=always (aka on) is what you want for VMs + a fast SLOG.
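
For anyone following along, sync is a per-dataset (or per-zvol) ZFS property; a sketch, assuming a dataset named tank/vmstore:

# Force synchronous writes for the VM datastore (pair this with a fast SLOG)
zfs set sync=always tank/vmstore
# Confirm the current setting
zfs get sync tank/vmstore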

It sounds like you need to do some local dd testing on your pool to see what local read/write speeds you can get and then as Spearfoot says...test with iperf.
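
A rough local write/read test might look something like the following (path and sizes are only examples; run it against a dataset with compression off, or the zero-filled file will give inflated numbers):

# Sequential write test: ~20 GB of zeroes in 1 MB blocks
dd if=/dev/zero of=/mnt/tank/ddtest bs=1m count=20000
# Sequential read test: read the same file back
dd if=/mnt/tank/ddtest of=/dev/null bs=1m
# Clean up
rm /mnt/tank/ddtest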
 

Larry C

Cadet
Joined
Feb 3, 2017
Messages
6
My apologies, @Larry C - I misunderstood which FN version you're using.

My point about mirrors is to suggest that you might be better served by a single pool of mirrored vdevs. Using 10 drives would give you 5 times the IOPS you're getting from your RAIDZ pool now. That would help... a lot. Because your 5-wide RAIDZ pool only delivers the IOPS of one drive; 1/5th the IOPS you'd get from a 10-drive mirrored pool.

With only 32GB of RAM an L2ARC is likely to adversely affect performance as it would use memory for overhead that will then be unavailable.

Here's some reading matter regarding NFS, iSCSI, and ESXi - there are many more similar discussions here you can find with the search feature:

https://forums.freenas.org/index.ph...xi-nfs-so-slow-and-why-is-iscsi-faster.12506/

That may help, but I don't have 10 matching drives to use for mirrors; I only have 5x 4TB drives and 5x 2TB drives available, as the other 5x 4TB drives are already in use.


You've got that backwards. sync=always (aka on) is what you want for VMs + a fast SLOG.

It sounds like you need to do some local dd testing on your pool to see what local read/write speeds you can get and then as Spearfoot says...test with iperf.

Would I be better off with a SLOG or an L2ARC?
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
You've got that backwards. sync=always (aka on) is what you want for VMs + a fast SLOG.

It sounds like you need to do some local dd testing on your pool to see what local read/write speeds you can get and then as Spearfoot says...test with iperf.
Good catch! I intended to say 'on'. Dooooh! :rolleyes:
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
That may help, but I don't have 10 matching drives to use for mirrors; I only have 5x 4TB drives and 5x 2TB drives available, as the other 5x 4TB drives are already in use.
I see. With those drives you could configure a single pool of 4 vdevs: 4 of the 4TB drives in 2 mirrors + 4 of the 2TB in 2 mirrors. You'd have two drives left over, but this would give you ~12TB of faster storage, with 4 times the IOPS you're getting now. In fact, you could pair up the 'extra' 2TB and 4TB as a fifth mirrored vdev and get ~14TB of storage with 5 times the IOPS. Just a suggestion.
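For illustration only, the command-line equivalent of that layout looks roughly like this (device names are placeholders; in practice the FreeNAS volume manager should be used so the disks get partitioned and labeled properly):

# Stripe of mirrored pairs: 2x 4TB mirrors + 2x 2TB mirrors
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7
# Optional fifth vdev from the leftover 4TB + 2TB pair (adds ~2TB)
zpool add tank mirror da8 da9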
Would I be better off with a SLOG or an L2ARC?
As I said earlier, you probably won't see much, if any, benefit from an L2ARC device, and it's pretty well standard procedure to install a SLOG device on datasets used for ESXi datastores, with synchronous writes turned 'on'.
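Attaching the SLOG later is a single operation (again, the device name is a placeholder; the GUI's volume manager can do the same):

# Add a dedicated log (SLOG) device to the pool
zpool add tank log da10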
 

Larry C

Cadet
Joined
Feb 3, 2017
Messages
6
I see. With those drives you could configure a single pool of 4 vdevs: 4 of the 4TB drives in 2 mirrors + 4 of the 2TB in 2 mirrors. You'd have two drives left over, but this would give you ~12TB of faster storage, with 4 times the IOPS you're getting now. In fact, you could pair up the 'extra' 2TB and 4TB as a fifth mirrored vdev and get ~14TB of storage with 5 times the IOPS. Just a suggestion.
As I said earlier, you probably won't see much, if any, benefit from an L2ARC device, and it's pretty well standard procedure to install a SLOG device on datasets used for ESXi datastores, with synchronous writes turned 'on'.

OK, cool. When I get home I will give that a try. I also just bought a 400GB Intel S3700 for a SLOG drive, so it should be here in a few days. Hopefully I can get this sorted out soon; it's holding up my lab study time.

Thank you for your help
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
OK, cool. When I get home I will give that a try. I also just bought a 400GB Intel S3700 for a SLOG drive, so it should be here in a few days. Hopefully I can get this sorted out soon; it's holding up my lab study time.

Thank you for your help
You're welcome! Let us know how things work out.
 

GangstaRIB

Cadet
Joined
Jan 29, 2017
Messages
6
Remember, with any RAIDZ, write speeds are going to be about the same as (if not worse than) a single drive. Even with mirrors and sequential sync writes (sync=always), I was only getting 50MB/s with no SLOG. RAIDZ + sync write speeds will be awful. A SLOG isn't magic, but it helps you get a more consistent write speed out of your platters. For what it's worth, I've only been FreeNASing for about a week... but with your platters you will get roughly 90MB/s sequential write. In RAIDZ it will be 80-90MB/s no matter how many drives; stripe/mirror with 4 drives, about 180MB/s. Start doing random writes with sync on RAIDZ and that number goes wayyyyy down, but with a good SLOG and striped mirrors on 4 drives you will probably get close to that 180MB/s sequential-write max even on random writes... otherwise actual throughput could be more like 10MB/s on RAIDZ with sync=always (and no SLOG).

I'm sure I'll get chewed up a bit... anyone with more experience feel free to jump in... But TL;DR: a SLOG on RAIDZ will bring sequential sync writes up to MAYBE 90MB/s with your setup (assuming sync=always)... if you don't want to burn space on mirrors, a SLOG probably isn't for you.

Also, speaking of 'wasting space': in an ESXi environment you shouldn't be using more than 50% of your pool, otherwise ZFS starts running out of free space to make fast, efficient decisions on where to write data next (an issue with all CoW file systems). The general rule of thumb is 80%, but for real-time data I've seen estimates of 50% and suggestions of 25% utilization on the drives to give the CoW file system enough 'grass' to eat. So... if you need 4TB of pool space for VMs, then you need 4x 4TB drives (and a SLOG): you lose half the space to the mirror and the other half you 'give back' to ZFS to stay efficient.
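
To keep an eye on that occupancy, something like this works (the pool name is a placeholder):

# Show total size, allocated space, free space, and percent used
zpool list -o name,size,allocated,free,capacity tank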

I still have lots of reading to do, but I sure have underestimated this bad boy. The more I learn about FreeNAS, the more I realize how many datacenters have really been provisioned incorrectly on so many levels!
 

Larry C

Cadet
Joined
Feb 3, 2017
Messages
6
Remember, with any RAIDZ, write speeds are going to be about the same as (if not worse than) a single drive. Even with mirrors and sequential sync writes (sync=always), I was only getting 50MB/s with no SLOG. RAIDZ + sync write speeds will be awful. A SLOG isn't magic, but it helps you get a more consistent write speed out of your platters. For what it's worth, I've only been FreeNASing for about a week... but with your platters you will get roughly 90MB/s sequential write. In RAIDZ it will be 80-90MB/s no matter how many drives; stripe/mirror with 4 drives, about 180MB/s. Start doing random writes with sync on RAIDZ and that number goes wayyyyy down, but with a good SLOG and striped mirrors on 4 drives you will probably get close to that 180MB/s sequential-write max even on random writes... otherwise actual throughput could be more like 10MB/s on RAIDZ with sync=always (and no SLOG).

I'm sure I'll get chewed up a bit... anyone with more experience feel free to jump in... But TL;DR: a SLOG on RAIDZ will bring sequential sync writes up to MAYBE 90MB/s with your setup (assuming sync=always)... if you don't want to burn space on mirrors, a SLOG probably isn't for you.

Actually mine was way worse than that, though: using NFS to a Linux host I was seeing 70MB/s for my Plex pool, a RAIDZ of 5x 4TB drives.

I made an identical pool using an additional 5x 4TB drives and connected it to ESXi; I was seeing 1MB/s write speeds. Disabling sync, I would see 2MB/s.

I followed the suggestion and made the 4 mirrors and combined them into a RAIDZ, and I was seeing about 80MB/s. I disabled sync and now I am getting 120ish MB/s, though I didn't stress test it much. My SLOG drive comes tomorrow and I will resume testing after it is put in.


Sent from my iPad using Tapatalk
 

GangstaRIB

Cadet
Joined
Jan 29, 2017
Messages
6
Actually mine was way worse than that, though: using NFS to a Linux host I was seeing 70MB/s for my Plex pool, a RAIDZ of 5x 4TB drives.

I made an identical pool using an additional 5x 4TB drives and connected it to ESXi; I was seeing 1MB/s write speeds. Disabling sync, I would see 2MB/s.

I followed the suggestion and made the 4 mirrors and combined them into a RAIDZ, and I was seeing about 80MB/s. I disabled sync and now I am getting 120ish MB/s, though I didn't stress test it much. My SLOG drive comes tomorrow and I will resume testing after it is put in.


Sent from my iPad using Tapatalk
Maybe I misunderstand 'RaidZ mirror', but if I understand you correctly, you are not helping yourself. With 5x 4TB drives, to get the speed you will just have to hang on to one as a spare (not a bad idea anyway) and do a 2x2 mirror (rows x columns) when setting up the pool. FreeNAS will stripe between the mirrors; there is no 'stripe/mirror' setting in the drop-down. You will have 4x 4TB but 8TB (probably ~7.5TB) of volume space, and you will want to keep your pool at 50% of that, so ~3.8TB. Then add a SLOG to help speed up the sync writes... Yes, I just turned 16TB (assuming RAIDZ) of space into less than 4TB, and you need to fork out another 300 bucks for a SLOG... lol

I made all of the same mistakes myself, thinking RAIDZ was some kind of savior. It's awesome for cheap space, and even reliable space with Z2, for file storage... not so much for block storage, though. Now I've learned why I always hated SANs and preferred DAS: people haven't really been doing things properly for real-time applications. They use enterprise 15k SAS drives, etc., but that only hides the problem, whereas with our consumer hard drives it sticks out like a sore thumb (RAID 5, 6, or Z in our case). Remember, our $100 platters are only going to crank out about 90MB/s sequential write speeds on a good day and 180MB/s read. There are some that do better, but that seems to be the going rate at a consumer price.
 

Larry C

Cadet
Joined
Feb 3, 2017
Messages
6
Maybe I misunderstand 'RaidZ mirror', but if I understand you correctly, you are not helping yourself. With 5x 4TB drives, to get the speed you will just have to hang on to one as a spare (not a bad idea anyway) and do a 2x2 mirror (rows x columns) when setting up the pool. FreeNAS will stripe between the mirrors; there is no 'stripe/mirror' setting in the drop-down. You will have 4x 4TB but 8TB (probably ~7.5TB) of volume space, and you will want to keep your pool at 50% of that, so ~3.8TB. Then add a SLOG to help speed up the sync writes... Yes, I just turned 16TB (assuming RAIDZ) of space into less than 4TB, and you need to fork out another 300 bucks for a SLOG... lol

I made all of the same mistakes myself, thinking RAIDZ was some kind of savior. It's awesome for cheap space, and even reliable space with Z2, for file storage... not so much for block storage, though. Now I've learned why I always hated SANs and preferred DAS: people haven't really been doing things properly for real-time applications. They use enterprise 15k SAS drives, etc., but that only hides the problem, whereas with our consumer hard drives it sticks out like a sore thumb (RAID 5, 6, or Z in our case). Remember, our $100 platters are only going to crank out about 90MB/s sequential write speeds on a good day and 180MB/s read. There are some that do better, but that seems to be the going rate at a consumer price.


My bad, I misspoke: I took 4x 4TB and 4x 2TB drives, made 4 mirrors, and then turned them into a pool that is 10TB in size.

My SLOG drive was only $170 for a 400GB Intel DC S3700.

I'm not looking for amazing speeds, just something manageable for running a few VMs. If it gets to be too slow, I may just sell all the drives and replace them with SSDs.
 