Hi all,
I realize this is an unusual question, but I am currently building a pure NVMe-based ZFS pool with FreeNAS. My specs are these:
- Xeon E5-2660v2
- 128GB DDR3 1600MHz
- Redundant 620W PSU w/ UPS in datacenter
- Dual-port QLE8152 10GbE SFP+ NIC
- 4x Intel 750 NVMe SSDs
- Other: 4x 6TB WD Red and 2x 2TB HGST Ultrastar in separate pools
At the moment I am still waiting to be able to connect the last two 750 SSDs, as they are the U.2 models rather than AICs. For now I have gone with a RAID 0 (striped pool) across the first two. Before anyone comments on the RAID 0: it's solely for my own VMware stuff, with hourly replications to a second ZFS pool. No business data here.
So the question in all this is: how can I best optimize NVMe usage with ZFS? I am serving the pool over iSCSI with MPIO (2x 10GbE), but from a VM I only achieve 25-30% of the performance in Anvil's Storage Utilities compared to attaching one of the drives directly to my computer. I do know the 750s aren't optimized for server workloads, and ZFS has its overhead, but is there something I can do to make it better?
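To rule out the network side before blaming ZFS, one thing I've considered is measuring raw throughput on each 10GbE path with iperf3 (the IP below is just a placeholder for the NIC's first port on my setup):

```shell
# On the FreeNAS box: start an iperf3 server (runs until killed)
iperf3 -s -p 5201

# From a test client: benchmark each MPIO path separately,
# then with parallel streams to see if a single TCP stream is the limit.
# 10.0.0.10 is a placeholder for the first port's IP.
iperf3 -c 10.0.0.10 -p 5201 -t 30        # single stream
iperf3 -c 10.0.0.10 -p 5201 -t 30 -P 4   # four parallel streams
```

If each link shows close to line rate (~9.4Gbit/s) on its own, then the bottleneck should be somewhere in the iSCSI/ZFS stack rather than the NICs.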
The VM I am testing on uses the recommended PVSCSI controller, and with a Samsung 950 SSD on a local VMFS6 datastore I was able to get 80-90% of native performance. Using ATTO Disk Benchmark I get about 1GB/s write, but only up to 500MB/s read. These SSDs are better at reading than writing, so could this be because of the ARC?
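One test I've been thinking about for the ARC question (the dataset name `NVMe/vmware` is just a placeholder for my zvol): temporarily restrict the ARC to metadata only on that dataset and rerun the benchmark. If reads don't change much, ARC caching probably isn't what's shaping the numbers.

```shell
# Check the current ARC caching policy on the zvol (placeholder name)
zfs get primarycache NVMe/vmware

# Cache only metadata in ARC so benchmark reads hit the SSDs themselves
zfs set primarycache=metadata NVMe/vmware

# ...rerun ATTO from the VM, then restore the default
zfs set primarycache=all NVMe/vmware
```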
Any help would greatly be appreciated. :)
EDIT: thought it might help if I paste the zpool list output of the pool.
Code:
NAME   SIZE  ALLOC   FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
NVMe  1.45T   157G  1.30T         -    7%  10%  1.00x  ONLINE  /mnt