Pure NVMe pool tuning

Status
Not open for further replies.

vrod

Dabbler
Joined
Mar 14, 2016
Messages
39
Hi all,

I realize this is an unusual question, but I am currently building a pure NVMe-based ZFS pool with FreeNAS. My specs are these:

- Xeon E5-2660 v2
- 128GB DDR3-1600
- Redundant 620W PSU, w/ UPS in the datacenter
- Dual-port QLogic QLE8152 10GbE SFP+ NIC
- 4x Intel 750 NVMe SSDs
- Other: 4x 6TB WD Red and 2x 2TB HGST Ultrastar in separate pools

At the moment I am still waiting to be able to connect the last two 750 SSDs, since they are the U.2 models rather than add-in cards. For now I have gone with a stripe (RAID 0) across the first two. Before anyone comments on the RAID 0: it only holds my own VMware stuff, with hourly replication to a second ZFS pool. No business data here.
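For reference, the layout is roughly the shape sketched below; the device and dataset names (nvd0, nvd1, NVMe/vmware, backup/vmware) are placeholders for illustration, not my actual configuration:

Code:
# two-device stripe across the first two 750s (FreeBSD exposes them as nvd*)
zpool create NVMe nvd0 nvd1

# hourly replication to the second pool: snapshot, then incremental send
zfs snapshot NVMe/vmware@hourly-12
zfs send -i NVMe/vmware@hourly-11 NVMe/vmware@hourly-12 | zfs recv -F backup/vmware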

So the question in all this is: how can I best optimize NVMe usage with ZFS? I am serving the pool over iSCSI with MPIO (2x 10GbE), but a VM running the Anvil benchmark only reaches 25-30% of the performance I get when I attach one of these SSDs directly to my computer. I do know the 750s aren't optimized for server workloads and that ZFS adds its own overhead, but is there something I can do to make it better?
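In case it helps frame the question, these are the knobs I assume matter most here. This is only a sketch: NVMe/vmware and the naa device ID are placeholders, and I am assuming a zvol-backed extent:

Code:
# FreeNAS side: properties of the zvol behind the iSCSI extent
zfs get volblocksize,sync,compression,logbias NVMe/vmware

# ESXi side: round-robin across both 10GbE paths, switching paths every IO
# instead of the default 1000 IOPS per path (device ID is an example)
esxcli storage nmp device set --device naa.6589cfc000000abc --psp VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set --device naa.6589cfc000000abc --type iops --iops 1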

The VM I am testing with uses the recommended PVSCSI controller, and with it I was able to get 80-90% of the performance of a Samsung 950 SSD running on a local VMFS6 datastore. With ATTO Disk Benchmark I get about 1GB/s write, but only up to 500MB/s read. These SSDs are better at reading than writing, so could this be because of the ARC?
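If it helps, here is roughly how I plan to check whether the 500MB/s reads are a pool limit or an iSCSI/network limit. The file path is just an example, and the test file needs to be bigger than RAM or the read will simply be served from the ARC:

Code:
# local sequential read on the server, bypassing iSCSI and the network
dd if=/mnt/NVMe/testfile of=/dev/null bs=1M

# ARC hit/miss counters while the VM benchmark runs
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses

# per-device throughput, refreshed every second
zpool iostat -v NVMe 1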

Any help would be greatly appreciated. :)

EDIT: I thought it might help to paste the zpool list output for the pool.

Code:
NAME    SIZE  ALLOC   FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
NVMe   1.45T   157G  1.30T         -    7%  10%  1.00x  ONLINE  /mnt
 
Last edited:

c32767a

Patron
Joined
Dec 13, 2012
Messages
371

I don't see the version of FreeNAS you're running. I can say we've seen some performance issues show up with 10G NICs plus all-NVMe flash pools on 9.10; in my lab I can't get past about 3Gb/s read or write.
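For what it's worth, a quick way to separate a NIC/driver ceiling from a pool ceiling is to measure the raw TCP path on its own. A rough sketch, assuming iperf is available on both ends (the address is an example):

Code:
# on the FreeNAS box
iperf -s

# on the ESXi host or a test VM, several parallel streams for 30 seconds
iperf -c 192.0.2.10 -P 4 -t 30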

They seem to clear up in the FreeBSD 11-based FreeNAS releases (e.g. Corral, 11). I haven't put a lot of effort into running this down, since the issue doesn't affect our production boxes (yet) and I have been waiting for the new release to ship and stabilize, as it appears to fix the problem.
 