Write speed performance issues

Status
Not open for further replies.

PlowHouse

Dabbler
Joined
Dec 10, 2012
Messages
23
I'm running into a performance issue with my newly created FreeNAS 8.3 system. I have ten 750 GB SATA hard drives at 7,200 RPM and 4GB RAM in this one system. I used ZFS as my file system and created a RAIDZ2 pool, so I have seven data drives with two parity drives, leaving my tenth drive marked as a spare. When I copy files from a Windows machine to my configured NAS I am only getting speeds of 2.5-5 MB/s. I have prefetching disabled and I have tried to tune the ARC as best I could based on settings people have posted for systems similar to mine, but I seem to be stuck at those terrible speeds.
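For reference, here is roughly how I have been checking those settings from the shell (these are the standard FreeBSD ZFS sysctls; I'm just reading them, not claiming my values are right):

    # 1 means prefetch is disabled, 0 means enabled
    sysctl vfs.zfs.prefetch_disable
    # Configured ARC ceiling, in bytes
    sysctl vfs.zfs.arc_max
    # Current ARC size, in bytes
    sysctl kstat.zfs.misc.arcstats.size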

The hardware I'm using is about five years old: a standard motherboard with a 64-bit Intel processor. So it's not the greatest of systems, but in my opinion the hardware should at least be able to sustain writes at 30 MB/s. Maybe ZFS was the wrong choice for this kind of setup, but I'd like to hear back from some people first before I blow away the array and try again with different settings, such as using UFS instead.

Any ideas to increase this performance would be greatly appreciated.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
ZFS thrives on memory, and your 4GB is tight. Also, RAIDZ2 is inherently a bit slow. Your choice to use a non-optimal number of disks, especially when you have an optimal number available, is not helping. Use eight data drives with two parity drives; a power-of-two number of data disks lets ZFS blocks divide evenly across the stripe. It may help to force a 4096 byte sector size (this doesn't have as much to do with the hardware as it's made to sound). Enable autotune to help pick some better settings.
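If you end up rebuilding from the CLI rather than the GUI, the usual way to force 4K sectors is the gnop trick. A rough sketch, with the device names and the pool name "tank" as placeholders, adjust for your system:

    # Overlay one disk with a fake 4096-byte sector size;
    # ZFS picks the vdev's ashift from the largest sector size it sees
    gnop create -S 4096 /dev/ada0
    # Build the 10-wide RAIDZ2 against the .nop device
    zpool create tank raidz2 ada0.nop ada1 ada2 ada3 ada4 ada5 ada6 ada7 ada8 ada9
    # Drop the overlay; the pool keeps ashift=12 permanently
    zpool export tank
    gnop destroy ada0.nop
    zpool import tank
    # Verify: should report ashift: 12
    zdb -C tank | grep ashift

I believe the FreeNAS 8.3 volume manager exposes the same thing as a force-4096-byte-sectors option when creating the volume, so the CLI route is only needed if you build the pool by hand.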

So anyways, back in 2005, we built some really nice 1U storage servers, Opteron 240EE's on a great server board, 8GB RAM, able to shovel data around at multigigabit speeds. When FreeNAS 8 was being developed, I decided to start pursuing that as a possible future storage platform. I expected a performance hit, but it was substantially worse than expected. It got better with faster disks, but not a lot better. Then I eventually threw faster hardware at one of the boxes and all hell broke loose. This is outlined in bug 1531, and you would be well advised to look at comment 14 there for some things to try.

It might be worthwhile to get an idea of what your system is capable of. Start off by doing a UFS mirror of two disks. Benchmark it. Take that apart. Do a ZFS mirror of two disks. Benchmark that. The difference is the ZFS penalty right there, and it can be pretty large in my experience. Now build the array the way you want it and benchmark THAT. The difference is the RAIDZ2 penalty.
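For the benchmark itself, a simple sequential dd test is enough to show the relative differences between those three configurations. A sketch, with the mount point as a placeholder; note that if compression is on, writing zeroes will produce meaningless numbers:

    # Write ~8GB so your 4GB of RAM can't absorb it all in cache
    dd if=/dev/zero of=/mnt/tank/testfile bs=1m count=8000
    # Read it back (reboot or otherwise flush the cache first for an honest number)
    dd if=/mnt/tank/testfile of=/dev/null bs=1m
    rm /mnt/tank/testfile

Divide bytes by seconds from dd's summary line and compare across the three setups.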
 

bollar

Patron
Joined
Oct 28, 2012
Messages
411
I would try jgreco's benchmark suggestion as well. I would also verify that ZFS compression & deduplication are turned off.
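Checking those from the shell is quick; a sketch assuming the pool is named "tank":

    # Both should report "off" for a fair write benchmark
    zfs get compression,dedup tank
    # Turn them off if needed (only affects data written afterwards)
    zfs set compression=off tank
    zfs set dedup=off tank

Dedup in particular is brutal on a 4GB system, since the dedup table wants to live in RAM.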

Something you didn't mention is read speeds from the NAS. I found that writes were very processor-intensive, but reads not so much. It was easy for me to set up a system with good read speeds, even with marginal hardware.

Your system does sound like it's closer to the performance margin, but in my tests, I found that block size and an optimal number of drives had only a nominal impact on performance. I would be curious to know how those settings affect your system.
 

PlowHouse

Dabbler
Joined
Dec 10, 2012
Messages
23
Thank you for the quick replies. After looking over the system last night and trying some other things, such as enabling autotune and making sure dedup is off, I decided to look inside at the hardware to make sure none of my drives had a jumper set for 1.5 Gbps instead of the full 3.0 Gbps. One thing I forgot to mention in this post is that I am using a four-port SATA card, as the motherboard would only let me attach six drives. A few Google searches on the card revealed that it is a 1.5 Gbps SATA "RAID" card.
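For anyone checking the same thing, the negotiated link speed per disk shows up in the boot messages and in camcontrol output (ada0 is just an example device name):

    # Look for "150.000MB/s transfers" (1.5 Gbps) vs "300.000MB/s transfers" (3.0 Gbps)
    dmesg | grep "MB/s transfers"
    # Per-drive capabilities, including the SATA revision the disk itself supports
    camcontrol identify ada0 | grep -i sata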

I'm going to back up my data tonight and then within the next few days I'm going to rebuild the array without that card and see how my test results turn out using UFS and ZFS.

I will reply to this thread with my results.
 

Stephens

Patron
Joined
Jun 19, 2012
Messages
496
That's one way. Or you could just get a (supported) 3 Gbps or 6 Gbps SATA card if you have an open PCIe slot. It's always a good idea to have a fresh backup, but you would probably be able to just take out the old card, plug in the new one, attach the drives, and be off to the races. It sounds like you were using JBOD mode on your RAID card and letting FreeNAS handle the RAID/drives. Easily replacing cards is one of the many benefits of doing that.
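After the swap, verifying everything came back takes two commands (assuming the pool is named "tank"):

    # All ten disks should show up on the new controller
    camcontrol devlist
    # The pool should import cleanly with no missing or degraded devices
    zpool status tank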
 

PlowHouse

Dabbler
Joined
Dec 10, 2012
Messages
23
@Stephens,
I believe that is what I'm going to do. Might as well get as much use as I can out of the extra drives, so I purchased the same card at 3.0 Gbps instead of the 1.5 Gbps one I had lying around. The card is only $35 on Newegg, so if I still run into issues it won't be a total loss.
 