deasmi
Dabbler
Joined: Mar 21, 2013
Messages: 14
I have recently replaced my ageing three box NAS and VMware cluster with a single All in One solution.
Tyan S5512 Motherboard
Intel Xeon 1240v2
32GB RAM
IBM M1015 SAS/SATA Card in IT mode
4x500GB WD RE2 drives
2x3TB WD Red drives
I am running FreeNAS in a VM with 6GB of RAM, with the M1015 passed through via VT-d, i.e. native access. The network interfaces are vmxnet3, although I could replace them with a VT-d passthrough Intel card.
I have the following ZFS pools.
Code:
freenas# zpool status
  pool: vol1
 state: ONLINE
  scan: none requested
config:

	NAME                                            STATE     READ WRITE CKSUM
	vol1                                            ONLINE       0     0     0
	  mirror-0                                      ONLINE       0     0     0
	    gptid/f3c13654-844d-11e2-b543-005056b32aa7  ONLINE       0     0     0
	    gptid/f4087e3d-844d-11e2-b543-005056b32aa7  ONLINE       0     0     0
	  mirror-1                                      ONLINE       0     0     0
	    gptid/0e55d215-844e-11e2-b543-005056b32aa7  ONLINE       0     0     0
	    gptid/0e8dfc82-844e-11e2-b543-005056b32aa7  ONLINE       0     0     0
	logs
	  da1                                           ONLINE       0     0     0
	cache
	  da2                                           ONLINE       0     0     0

errors: No known data errors

  pool: vol2
 state: ONLINE
  scan: none requested
config:

	NAME                                            STATE     READ WRITE CKSUM
	vol2                                            ONLINE       0     0     0
	  mirror-0                                      ONLINE       0     0     0
	    gptid/2063f45f-85c6-11e2-939e-005056b32aa7  ONLINE       0     0     0
	    gptid/20c42e8a-85c6-11e2-939e-005056b32aa7  ONLINE       0     0     0

errors: No known data errors
The vol1 L2ARC is a 12GB SSD-backed VMware volume.
The vol1 ZIL is an 8GB SSD-backed VMware volume.
Testing vol1 with a 10GB dd write I am seeing approx. 160MB/s write performance, which seems reasonable to me; I'm not sure what to expect from the RE2 drives in this arrangement.
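For reference, a minimal sketch of that kind of dd test (the path, block size, and count here are illustrative; `conv=fdatasync` assumes GNU dd and forces a flush before dd reports, so caching doesn't inflate the figure — on FreeBSD, following the write with `sync` is a rough equivalent):

```shell
# Illustrative sequential-write test. Scale count up (e.g. count=10240 for ~10GB)
# and point the output at the pool under test (e.g. a file under /mnt/vol1).
# conv=fdatasync flushes data before dd prints its MB/s summary.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
rm -f /tmp/ddtest
```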
Running tests from a Mac Pro connected over gigabit Ethernet I get 980 Mbps throughput with iperf.
Running the same dd write over NFS I am getting 65MB/s write performance, which seems a little low, but to be honest I'm not sure how much overhead NFS adds; that equates to 520 Mbps raw.
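The "raw" figure is just the bytes-to-bits conversion (1 MB/s = 8 Mbps, ignoring protocol framing overhead):

```shell
# MB/s to Mbps: multiply by 8 (decimal units; framing overhead ignored)
echo "$((65 * 8)) Mbps"    # NFS write rate on the wire
echo "$((160 * 8)) Mbps"   # local pool write rate, for comparison
```

So the NFS path is delivering a little over half of the 980 Mbps the link can carry.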
Before I start down the rabbit hole of tuning, I just wondered whether I'm likely to see any large (20%+) improvement on this figure, or whether that's what I should expect from NFS.
Thanks in advance.
Edit: Spelling