Pascal Robert
Dabbler
Hi,
I'm trying to use FreeNAS as an NFS server for our Citrix XenServer 7.1 cluster. Sadly, performance is not optimal, and I'm trying to find out why.
Hardware for FreeNAS
- SuperMicro 6027R-E1CR12N
- Disks:
- 8 Seagate Enterprise HDDs (4 TB each)
- 4 Intel S3500 480 GB SSDs, two for ZIL (SLOG) and two for cache (L2ARC)
- 2 Intel 80 GB disks, mirrored, for the FreeNAS install
- Intel X550-T2 as the 10 Gbps network card
- NetGear 10 Gbps switch between the FreeNAS box and the 3 XenServer hosts
Setup:
Code:
[root@storix1] ~# zpool status
  pool: Stockage
 state: ONLINE
  scan: scrub repaired 0 in 11h20m with 0 errors on Sun Nov 26 11:21:00 2017
config:

        NAME                                            STATE     READ WRITE CKSUM
        Stockage                                        ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/b593a60e-1563-11e7-952e-a0369fbd5878  ONLINE       0     0     0
            gptid/c5b6a903-1563-11e7-952e-a0369fbd5878  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/d203bc54-1563-11e7-952e-a0369fbd5878  ONLINE       0     0     0
            gptid/de3f47dd-1563-11e7-952e-a0369fbd5878  ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            gptid/eec49d66-1563-11e7-952e-a0369fbd5878  ONLINE       0     0     0
            gptid/f9dd0243-1563-11e7-952e-a0369fbd5878  ONLINE       0     0     0
          mirror-3                                      ONLINE       0     0     0
            gptid/0a3f2895-1564-11e7-952e-a0369fbd5878  ONLINE       0     0     0
            gptid/1ae06ada-1564-11e7-952e-a0369fbd5878  ONLINE       0     0     0
        logs
          mirror-4                                      ONLINE       0     0     0
            gptid/1cbe95b1-1564-11e7-952e-a0369fbd5878  ONLINE       0     0     0
            gptid/1d0dae82-1564-11e7-952e-a0369fbd5878  ONLINE       0     0     0
        cache
          gptid/26381152-1564-11e7-952e-a0369fbd5878    ONLINE       0     0     0
          gptid/30e76545-1564-11e7-952e-a0369fbd5878    ONLINE       0     0     0
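For completeness, the dataset-level settings that most affect NFS write speed (sync, compression, recordsize) can be dumped like this; Stockage/vms below is only a placeholder for whatever dataset is actually exported:
Code:
# show the ZFS properties that matter for sync-heavy NFS writes (placeholder dataset name)
zfs get sync,compression,recordsize,atime Stockage/vms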
I ran some tests with bonnie++ on an Ubuntu box connected directly to the FreeNAS box, mounting the NFS share on the Ubuntu box, but over a 1 Gbps link instead of 10 Gbps. I also ran bonnie++ on a XenServer host, connected to the switch at 10 Gbps.
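As a rough sketch of the Ubuntu-side setup, the NFS mount looks along these lines; the address, export path and options are illustrative placeholders, not necessarily the exact ones used:
Code:
# mount the FreeNAS export on the directory bonnie++ writes to (illustrative address and path)
mount -t nfs -o vers=3 192.168.1.10:/mnt/Stockage/tests /mnt/tests
# show the options and rsize/wsize the client actually negotiated
nfsstat -m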
Result:
Ubuntu box (NFS, 1 Gbps direct to the NAS):
Code:
# bonnie++ -d /mnt/tests/ -s 128G -n 0 -m TEST -f -b -u nobody
Using uid:65534, gid:65534.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
TEST           128G           112771   8 12947    2           114593   8  62.2  17
Latency                         874ms       251s               282ms    4389ms
XenServer 7.1 (NFS, 10 Gbps, mtu 9000, NetGear 10 Gbps switch in between):
Code:
# bonnie++ -s 180G -d /run/sr-mount/2dd11327-af36-1d5c-40e6-6019c203c6ce -n 0 -m virtuix2 -f -b -u nobody
Using uid:99, gid:99.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
virtuix2       180G           188109  43 94173   33           517105  60 204.0  24
Latency                         255ms      2482ms             77923us     942ms
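Since mtu 9000 is in play on this path, one sanity check worth noting (not part of the results above) is a don't-fragment ping from a XenServer host to the NAS; the address below is a placeholder:
Code:
# 8972 bytes payload = 9000 MTU - 20 (IP header) - 8 (ICMP header); -M do forbids fragmentation
ping -M do -s 8972 -c 3 192.168.1.10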
bonnie++ on the FreeNAS box, in a jail:
Code:
root@Bonnie:/ # bonnie++ -s 200G -d /var/ -n 0 -m freenas -f -b -u root
Using uid:0, gid:0.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
freenas        200G           556268  85 414935  92          1227872  95 332.5  12
Latency                        7226ms       178ms             53926us     275ms
When running the tests directly on the FreeNAS box, the ZIL devices peak at 227.9 MB/s. Over NFS, they reach 191.9 MB/s. So why do the bonnie++ results show such a big difference when the ZIL device usage is not that different?
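For anyone wondering where those ZIL numbers come from, the log-device throughput can be watched live on the FreeNAS box during a run with something like:
Code:
# per-vdev I/O statistics refreshed every second; the "logs" mirror shows the SLOG write bandwidth
zpool iostat -v Stockage 1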