I'm trying to understand the performance of my server, and I'm totally baffled by my results.
I built this server years ago for the experience, and recently thought about upgrading due to poor performance, so I wanted to see if there was anything I could do to stretch its life.
The onboard SATA ports on my motherboard weren’t working, so I had been using a PCI SATA card. A BIOS upgrade fixed that, and moving the drives onto the onboard ports drastically improved performance. Since then I’ve been running tests on every combination I can think of.
My setup is 3× 1TB 7200RPM Samsung drives in RAIDZ. After upgrading my desktop I was able to pull an SSD, so I made a 2GB partition on it and added it as the SLOG. I plan to use the server for ESXi, so SYNC writes will be common; the SLOG is probably not strictly necessary for my setup, but I already have the drive. The system, as old as it is, is a dual-core AMD64 with 8GB of RAM.
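For reference, attaching the partition as a log vdev boils down to something like this from the shell (a sketch only; 'tank' and 'ada3' are placeholder pool/device names, and the GUI can do the equivalent):

gpart create -s gpt ada3                        # put a GPT scheme on the SSD
gpart add -t freebsd-zfs -s 2G -l slog ada3     # carve out the 2GB partition, labeled slog
zpool add tank log gpt/slog                     # attach it to the pool as the SLOG
zpool status tank                               # the logs section should now list gpt/slog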
SYNC
-DISABLED
-ALWAYS
PROTOCOLS
-LOCAL
-NFS
-CIFS
-iSCSI
The file-sharing protocols (NFS and CIFS) perform more or less the same, so I’ll lump them together. I’m only quoting representative ‘dd’ numbers here, since they line up with all the iozone runs I’ve done, and I can confirm them by watching ‘zpool iostat -v’ while the tests run.
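The local runs were along these lines (a sketch; 'tank/test' is a placeholder dataset and the write sizes were a few GB):

zfs set sync=always tank/test        # the SYNC case; sync=disabled for the ASYNC case
dd if=/dev/zero of=/mnt/tank/test/ddfile bs=1M count=4096
# note: with compression enabled on the dataset, /dev/zero will overstate throughput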
Locally
-ASYNC: 180 MB/s
-SYNC: 58 MB/s
File Sharing
-ASYNC: 34 MB/s
-SYNC: 14 MB/s
iSCSI
-ASYNC: 39 MB/s
-SYNC: 10 MB/s
If I run ‘iperf’, I basically hit the practical maximum for gigabit: 938 Mbps.
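That number comes from a plain TCP test, roughly like this (the address below is just an example, standing in for the server’s IP):

iperf -s                        # on the FreeNAS box
iperf -c 192.168.1.50 -t 30     # on the Mac / VM client, pointed at the server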
I know this is not a fast pool, but it’s significantly faster than it was on the PCI card. Based on the local results, I feel like it should be capable of much better numbers over the network. These speeds are tolerable for the foreseeable future, until a proper upgrade is possible.
I’ve run the tests (those that I could) from both a Mac and a VM guest. The guest OS itself is installed to local storage; for the iSCSI tests I created two drives, ‘async’ and ‘sync’, and ran dd directly against them as sdb and sdc. I’ve made sure to use Intel NICs on both machines, but that made no noticeable difference.
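Inside the guest the iSCSI runs look roughly like this (a sketch; the device names depend on how the disks enumerate):

dd if=/dev/zero of=/dev/sdb bs=1M count=4096   # disk backed by the 'async' drive
dd if=/dev/zero of=/dev/sdc bs=1M count=4096   # disk backed by the 'sync' drive
# writing to the raw device bypasses the guest filesystem, though not the guest page cache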
My question, though, is: why are the SYNC writes over the network so terrible? From everything I’ve read, nothing suggests SYNC writes should carry that much extra overhead. Again, I’m testing with a SLOG, and watching ‘zpool iostat -v’ I can see the ‘log’ device being written at only ~10 MB/s. The local tests show the system is clearly capable of more, so even given the memory constraints I think it should do better.
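(That figure is from leaving this running in a second session during the tests; ‘tank’ is again a placeholder pool name:)

zpool iostat -v tank 1    # per-vdev stats every second; the log line shows the SLOG write rate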
So, please explain to me: a) what am I doing wrong (if anything)? and b) what am I missing about FreeNAS and/or ZFS that would result in such bad networked SYNC writes?