Question: Is there a bug with NFS shares on an SSD volume in FreeNAS 11.0-U2?
Specs:
E3-1225 v3
12 GB RAM
3x 500 GB SSDs in a stripe, 2-port Intel 1 Gb NIC.
Testing:
- Installed a clean copy of FreeNAS-11.0-U2 and created a volume from the three SSDs in a stripe. No SLOG or L2ARC configured.
- Created a dataset "test" (/mnt/SSD1/test)
- Created an NFS share set to mount as root
- Connected directly to an ESXi host (managed by vCenter) and added the NFS share (v3).
- Moved a virtual machine onto the new share; write speeds varied from 100-300 Mbit/s.
- Moved the virtual machine back off the new share; reads averaged around 900 Mbit/s.
- Suspected the usual NFS sync-writes issue, so I disabled sync writes; it made no difference.
- Deleted the NFS share and created an iSCSI target instead: 900 Mbit/s in both directions.
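For reference, disabling sync writes on a ZFS dataset is a per-dataset property; a sketch of the commands, assuming the pool/dataset names above and a shell on the FreeNAS box:

```shell
# Turn off synchronous writes on the test dataset (SSD1/test as above).
# With sync=disabled, ZFS acknowledges writes before they reach stable
# storage, so this is safe only for testing.
zfs set sync=disabled SSD1/test

# Confirm the property took effect
zfs get sync SSD1/test
```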
Installed CentOS 7 with ZFS, imported the SSD1 pool, created an NFS share, and mounted it on the same ESXi host: 900 Mbit/s in both directions.
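The CentOS export and ESXi mount would look roughly like this; the subnet, hostname, and datastore name below are placeholders, not details from the original post:

```shell
# On the CentOS box: export the imported dataset via /etc/exports
# (subnet is a placeholder; assumes the pool mounted at /SSD1)
echo '/SSD1/test 192.168.1.0/24(rw,no_root_squash)' >> /etc/exports
exportfs -ra

# On the ESXi host: mount it as an NFSv3 datastore
# (hostname and volume name are placeholders)
esxcli storage nfs add --host=centos-nfs --share=/SSD1/test --volume-name=ssd1-test
```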
Now to throw a monkey wrench into this: I have a second system running FreeNAS-11.0-U2 with 5x 4 TB drives in RAIDZ1 plus an SSD SLOG device. Using an NFS share on the same ESXi host, it hits 900 Mbit/s in both directions.
Additional information:
dd if=/dev/zero of=/volume/dataset/tmp.dat bs=2048k count=50k (used for testing; compression turned off)
SSD1 dd test: 1.2 GB/s write, 1.6 GB/s read
RAIDZ1 dd test: ~400 MB/s write and read
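The read figures presumably come from streaming the test file back; a scaled-down, runnable sketch of both directions (16 MB in /tmp rather than the ~100 GB test file on the pool):

```shell
# Write test: sequential zeros, as in the dd command above but much smaller
dd if=/dev/zero of=/tmp/tmp.dat bs=1M count=16 2>/dev/null

# Read test: stream the file back to /dev/null
dd if=/tmp/tmp.dat of=/dev/null bs=1M 2>/dev/null

# Sanity check: the file should be exactly 16 MiB (16777216 bytes)
wc -c < /tmp/tmp.dat
```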