NFS vs Wirespeed Performance

Status
Not open for further replies.

deasmi

Dabbler
Joined
Mar 21, 2013
Messages
14
I have recently replaced my ageing three-box NAS and VMware cluster with a single all-in-one solution.

Tyan S5512 Motherboard
Intel Xeon 1240v2
32GB RAM
IBM M1015 SAS/SATA Card in IT mode
4x500GB WD RE2 drives
2x3TB WD Red drives

I am running FreeNAS in a VM with 6GB of RAM, with the M1015 passed through via VT-d, i.e. native access. The network interfaces are vmxnet3, although I can replace them with a VT-d Intel card.

I have the following ZFS pools:

Code:
freenas# zpool status
  pool: vol1
 state: ONLINE
  scan: none requested
config:

	NAME                                            STATE     READ WRITE CKSUM
	vol1                                            ONLINE       0     0     0
	  mirror-0                                      ONLINE       0     0     0
	    gptid/f3c13654-844d-11e2-b543-005056b32aa7  ONLINE       0     0     0
	    gptid/f4087e3d-844d-11e2-b543-005056b32aa7  ONLINE       0     0     0
	  mirror-1                                      ONLINE       0     0     0
	    gptid/0e55d215-844e-11e2-b543-005056b32aa7  ONLINE       0     0     0
	    gptid/0e8dfc82-844e-11e2-b543-005056b32aa7  ONLINE       0     0     0
	logs
	  da1                                           ONLINE       0     0     0
	cache
	  da2                                           ONLINE       0     0     0

errors: No known data errors

  pool: vol2
 state: ONLINE
  scan: none requested
config:

	NAME                                            STATE     READ WRITE CKSUM
	vol2                                            ONLINE       0     0     0
	  mirror-0                                      ONLINE       0     0     0
	    gptid/2063f45f-85c6-11e2-939e-005056b32aa7  ONLINE       0     0     0
	    gptid/20c42e8a-85c6-11e2-939e-005056b32aa7  ONLINE       0     0     0

errors: No known data errors


vol1's L2ARC is a 12GB SSD-backed VMware volume
vol1's ZIL is an 8GB SSD-backed VMware volume


Testing vol1 with a 10GB dd write I am seeing approximately 160MB/s write performance, which seems reasonable to me, though I'm not sure what these RE drives should manage in this arrangement.
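For reference, the local test was essentially the following (a sketch only; the mountpoint, file name and block size are assumptions, and compression should be off on the target dataset or writing zeros will inflate the figure):

Code:
freenas# dd if=/dev/zero of=/mnt/vol1/testfile bs=1m count=10240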

Running tests from a Mac Pro connected over gigabit Ethernet I get 980Mbps throughput with iperf.
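That was roughly iperf in server mode on the FreeNAS VM and the Mac as the client (the hostname below is just a placeholder):

Code:
freenas# iperf -s
macpro$ iperf -c freenas -t 30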

Running the same dd write over NFS I am getting 65MB/s write performance, which seems a little low, but to be honest I'm not sure how much overhead NFS adds; that equates to roughly 520Mbps raw.
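The NFS test from the Mac was along these lines (again a sketch; the export path and mount point are assumptions):

Code:
macpro$ sudo mkdir -p /private/nfstest
macpro$ sudo mount -t nfs freenas:/mnt/vol1 /private/nfstest
macpro$ dd if=/dev/zero of=/private/nfstest/testfile bs=1m count=10240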

Before I start down the rabbit hole of tuning, I just wondered whether I'm likely to see any large (20%+) improvements to this figure, or whether that's what I should be expecting with NFS.

Thanks in advance.

Edit: Spelling
 

deasmi

Dabbler
Joined
Mar 21, 2013
Messages
14
To rule out this being RAM related (I know 8GB is the recommended minimum), I've rerun the tests with 16GB of RAM, with the same result.

CIFS access is giving me approximately 50MB/s.

This seems low when the native disks are capable of 160MB/s with sequential writes.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
I don't see any obvious mistakes. NFS with VMware is always a little dicey because VMware peppers all its writes with sync. Try setting up a FreeBSD or Linux guest (or testing from the Mac Pro) to access the NFS and see if that goes faster, especially without massive sync writes.

Another good way to see whether it is a pool issue would be to take an SSD volume, export that, and try your CIFS and NFS against it. It just helps to start excluding possibilities.
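Something like this, just as a sketch (da3 is a placeholder for a spare SSD; you'd still set up the actual NFS/CIFS shares on it through the GUI):

Code:
freenas# zpool create ssdtest da3                                # scratch pool on a spare SSD
freenas# dd if=/dev/zero of=/mnt/ssdtest/local bs=1m count=4096  # local baseline on the SSD

Then run the same dd against that pool over NFS and CIFS and compare.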
 

deasmi

Dabbler
Joined
Mar 21, 2013
Messages
14
Having played around a bit more, one thing I tried was disabling the ZFS sync property for the dataset.
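That is, roughly (vol1/nfs is a placeholder for whatever the dataset is actually called):

Code:
freenas# zfs set sync=disabled vol1/nfs
freenas# zfs get sync vol1/nfs
freenas# zfs set sync=standard vol1/nfs    # to put it back afterwards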

Wham, 108MB/s stable as you like from the Mac Pro.

This isn't quite what I was expecting with an 8GB dedicated SSD log device.

So I think I need to go and do some performance checking on the log device. I just checked without the log device and it's slower, about 40MB/s, so it is helping, just not as much as I thought it would.
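I'll probably start with something along these lines (da1 being the log device from the zpool status above; diskinfo only gives a rough read-side transfer rate, so watching gstat during an NFS write is the more useful view):

Code:
freenas# diskinfo -v -t /dev/da1
freenas# gstat -f da1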

Anyway, at least I know where I'm looking now.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
The ZIL and L2ARC really don't help much unless you have a zpool that is extremely busy all of the time (think HDD LED solid all of the time). Also, there seem to be some performance penalties when using ESXi that range from minor to extreme depending on several factors.
 