I have a spare workstation built with 8x 15k SAS disks, and no matter what I do I can't get more than 60MB/s of throughput. I get the same performance regardless of how many disks are in the pool. What is weird is that I have 2 FreeNAS servers, and the other one, built with decent hardware, has the same issue.
Specs (Test Workstation)
FreeNAS 9.10.2 U6
Server: HP xw9400 Workstation
CPU: 2x dual core AMD Opteron 2220 2.8GHz
RAM: 24GB DDR2 ECC
HDD: 8x Seagate 150GB 15k SAS
Controller: Integrated SAS Controller
Specs (Lab Storage Server)
FreeNAS 11 U3
Server: IBM X3500
CPU: 2x quad core Intel E5440 2.8GHz
RAM: 64GB DDR2 ECC
HDD: 8x WD Caviar Green 7200rpm 2TB
Controller: M1015 (IT Mode)
- I have built the zpool both as 4x mirrored vdevs and as RAIDZ. In each case I started with 2 disks, tested performance, then added another 2 disks in a new vdev and tested again. At no point did performance increase or exceed 60MB/s.
- I have tested with "dd if=/dev/zero of=testfile bs=128k count=50000". I have varied the block size and get some outstanding numbers, but nothing is reflected in real-world use, so this test doesn't seem indicative. It creates a ~6GB file and reports back 965997729 bytes/sec.
- If I execute "dd if=/dev/zero of=/dev/null bs=128k count=50000" it reports back a result of (3030716917 bytes/sec).
- After using the "dd" command to create the 6GB testfile, I run "rsync --info=progress2 testfile testfile_new", which just copies the file into the same location. The rsync progress tops out at 60MB/s.
- While performing both the "dd" and "rsync" tests, I monitored with "zpool iostat 1". This also reported a maximum of 60MB/s.
- I have checked the CPU consumption and it stays consistently at 90% idle.
- I also monitored the disk busy status with "iostat -xw 1" and the %b figure was well below 40% at all times.
- I created an iSCSI extent (using a zvol) and connected the storage to my ESXi host. I cloned a VM to the new storage, and the write operation maxed out at 60MB/s according to iostat, the ESXi monitoring GUI, and esxtop.
- I launched the VM on the new storage (Windows Server) and copied a 3GB file to another location on the same HDD; Windows reported the operation's throughput at 40-50MB/s.
- Lastly, I attached 3x 1TB 7200rpm disks to my Lab Storage Server and ran the same tests. I was shocked that it returned exactly the same results: nothing above 60MB/s.
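One thing I realise about the dd numbers above: /dev/zero is trivially compressible, so with lz4 enabled on the dataset, dd can report near-RAM speeds that say nothing about the disks. Here's a sketch of the test redone with incompressible data (the DEST path is illustrative; on the server it would point at the actual pool mount, e.g. /mnt/tank):

```shell
# DEST is illustrative -- on the FreeNAS box point it at the pool mount,
# e.g. DEST=/mnt/tank; /tmp is just a placeholder so the sketch runs anywhere.
DEST=${DEST:-/tmp}

# 64 MiB of incompressible data, so lz4 can't inflate the throughput figure
# the way it can with /dev/zero.
dd if=/dev/urandom of=/tmp/random.bin bs=128k count=512

# Time the write of that incompressible data onto the pool.
dd if=/tmp/random.bin of="$DEST/random_copy.bin" bs=128k
```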
This looks to me like a bottleneck on the FreeNAS server itself. The dd command seems to show the potential of the disk pool, but even copying files locally on the FreeNAS server I can't get anything over 60MB/s. Are there any other tests I can do to narrow down the issue? The disks don't appear to be busy, the CPU isn't consumed, and I'm not worried about the network or external access while testing locally on the server. The only thing left is RAM, which I can't imagine would limit me to 60MB/s.
Any help?
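Edit: one more test I'm planning, to take ZFS out of the picture entirely: a sequential read straight from a raw pool member. The device name below is illustrative (on the server I'd substitute a real disk such as /dev/da0 from `camcontrol devlist`); I'm defaulting to /dev/zero only so the sketch runs as-is:

```shell
# DEV is illustrative: on the server, substitute an actual pool member such as
# /dev/da0 (listed by `camcontrol devlist`). /dev/zero is only a runnable stand-in.
DEV=${DEV:-/dev/zero}

# Sequential raw read of 512 MiB, bypassing ZFS entirely; a slow result here
# would point at the controller or disks rather than the filesystem layer.
dd if="$DEV" of=/dev/null bs=128k count=4096
```

If the raw reads come in well above 60MB/s per disk, that would push suspicion back onto ZFS or its configuration.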