I've searched. I've tried. I need help. CIFS share is SLOW and transfers sometimes stall

Status
Not open for further replies.

hidperf

Dabbler
Joined
Jun 12, 2012
Messages
34
I put together a FreeNAS 8.2 box to use as a backup for my Windows machines, an SSH server, and a storage device for my movies, photos and music. I've been messing with it for a few weeks now trying to get the transfer speeds up from the Windows box to the NAS, but I can't seem to make any headway. At first I was getting a burst of 30MB/s followed by 5MB/s, which seems to be a common thing from my Google searches. I found a few posts and tried a few things, and they only made things worse. Now I get a 94MB/s burst initially, then it dies and won't complete the transfer. So I'm reverting back to the pre-tweaked settings in hopes that you can help me solve this.

Here's my setup:

FreeNAS box
Gigabyte 870A-USB3 motherboard
Dell PERC 6/i RAID controller with 512MB cache, BBU, latest Dell firmware installed, a 20CFM fan mounted to the heatsink to cool it, installed in a PCIe x16 slot
16GB of dual-channel DDR3 memory
Intel PRO/1000 GT NIC in a PCI slot
6 WD Red 1TB NAS drives

The drives are set up in RAID 6 on the PERC card and then I created a ZFS volume on that.

The Windows box
Dell XPS 420
8GB of RAM
750GB drive
500GB drive
1.5TB drive
Built-in Intel 82566DC-2 Gigabit NIC
Win 7 Ultimate 64bit

The network between the two machines consists of Cat5e patch cables going into a Cisco SG100-08 switch. Eventually everything will go through my home lab, which consists of managed switches, routers and a PowerEdge server. But I wanted to get this thing working right before I put it on that network.
 

Stephens

Patron
Joined
Jun 19, 2012
Messages
496
- Use iperf on FreeNAS (it's included) and jperf on your Windows machine to get the networking going right without having to worry about anything else. You should be able to get gigabit speeds. If not, you have work to do. If so, we can go from there. (See the sketch at the end of this post.)

- Look on the forums for "dd" commands to test the speed of your zpool. It should be fine, but when things don't work, there's no sense in assuming anything. Speaking of which, why did you decide to let a card do the RAID support instead of FreeNAS (as the documentation recommends)?

- What's the output of "zpool status" from the shell?
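As a rough sketch of what that iperf test can look like from the command line (the IP address below is just a placeholder for your FreeNAS box; jperf exposes the same options through its GUI):
Code:
# On the FreeNAS box, start iperf in server mode:
iperf -s

# On the Windows box (or from jperf), run the client against the NAS
# for 60 seconds, reporting every 5 seconds:
iperf -c 192.168.1.100 -t 60 -i 5
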
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'd recommend you set up the hard drives as individual disks and let ZFS handle the "RAID-ing" with a RAIDZ2. That is, unless you have a VERY compelling reason to stay with hardware RAID. Section 1.1.6 of the FreeNAS manual specifically discusses hardware versus software RAID, and things can backfire drastically if you use hardware RAID. A lot of people try to "repurpose" FreeNAS to work for them and ignore the manual's recommendations. It's a lot easier and more pain-free to use FreeNAS the way it was designed.

On a related note, ZFS has its own caching system, and there can be performance conflicts when both are used at the same time. I found that disabling the read cache on the RAID controller increased my performance by 20%. Your issue isn't likely to be related to your RAID controller cache, though.
 

hidperf

Dabbler
Joined
Jun 12, 2012
Messages
34
- Use iperf on FreeNAS (it's included) and jperf on your Windows machine to get the networking going right without having to worry about anything else. You should be able to get gigabit speeds. If not, you have work to do. If so, we can go from there.

- Look on the forums for "dd" commands to test the speed of your zpool. It should be fine, but when things don't work, there's no sense in assuming anything. Speaking of which, why did you decide to let a card do the RAID support instead of FreeNAS (as the documentation recommends)?

- What's the output of "zpool status" from the shell?

When I left the house this morning I set up iperf and jperf to run for about 10 minutes, so I should have a good report of that when I get home tonight.

I originally built this machine to run ESXi so I could get some hands-on time with virtual machines, and the onboard RAID controller wasn't compatible with ESXi, so I picked up the PERC card. As soon as I got the box built and ready, I came across a free Dell PowerEdge 2950, so I decided I'd just use this box as a dedicated FreeNAS box and the actual server to run my ESXi on. Since it was already assembled and the drives were already set up on the card, I just booted to a thumb drive and loaded FreeNAS.

I have nothing on it though so if you think it would be better removing the RAID card and using the on-board SATA controller, I'm more than willing to reconfigure it.

I'll report back with the zpool status when I get home later.

thanks for the help.
 

hidperf

Dabbler
Joined
Jun 12, 2012
Messages
34
I'd recommend you set up the hard drives as individual disks and let ZFS handle the "RAID-ing" with a RAIDZ2. That is, unless you have a VERY compelling reason to stay with hardware RAID. Section 1.1.6 of the FreeNAS manual specifically discusses hardware versus software RAID, and things can backfire drastically if you use hardware RAID. A lot of people try to "repurpose" FreeNAS to work for them and ignore the manual's recommendations. It's a lot easier and more pain-free to use FreeNAS the way it was designed.

On a related note, ZFS has its own caching system, and there can be performance conflicts when both are used at the same time. I found that disabling the read cache on the RAID controller increased my performance by 20%. Your issue isn't likely to be related to your RAID controller cache, though.

I was thinking the same thing about the RAID controller. And I'm not dead set on using it; it was just already configured with the card and drives, so I loaded FreeNAS on it as is. (See previous reply.)

thanks for the help
 

matram

Dabbler
Joined
Aug 22, 2012
Messages
18
My benchmark results with CIFS

Hi,

My own benchmarks indicated: 27 MB/s for CIFS large file read.

That was with a DELL T710 with dual Xeon processors, 32 GB RAM, PERC H700 with 1 GB NVcache and a 4*2TB RAID5 array using WD RE4 disks.
Server was running ESXi 5.0. Raw performance to the disk array was about 175 MB/s.

I did try with a PERC H200 controller. The H700 had 40% better raw performance than the H200 but there was no effect at all on CIFS performance.

I also tried Nexenta NAS software, which uses OpenSolaris and ZFS. Performance was about 30% better than with FreeNAS. From this perspective that's not a lot, so I assume the performance problem is related to the CIFS implementation and there is no "simple fix".

Should you find something, a post with your results would be much appreciated :)

/Mats
 

hidperf

Dabbler
Joined
Jun 12, 2012
Messages
34
Hi,

My own benchmarks indicated: 27 MB/s for CIFS large file read.

That was with a DELL T710 with dual Xeon processors, 32 GB RAM, PERC H700 with 1 GB NVcache and a 4*2TB RAID5 array using WD RE4 disks.
Server was running ESXi 5.0. Raw performance to the disk array was about 175 MB/s.

I did try with a PERC H200 controller. The H700 had 40% better raw performance than the H200 but there was no effect at all on CIFS performance.

I also tried Nexenta NAS software, which uses OpenSolaris and ZFS. Performance was about 30% better than with FreeNAS. From this perspective that's not a lot, so I assume the performance problem is related to the CIFS implementation and there is no "simple fix".

Should you find something, a post with your results would be much appreciated :)

/Mats

I would almost be happy with a steady 30MB/s, but the way it sits right now I'm getting 5MB/s and it's "wavy". It's not a steady stream of data transfer; it's more like a large oscillation from 5MB/s down to 0. And sometimes it will just stall completely and fail.

But I'll be sure to post any results I get.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You may have a failing disk. Have you tried running SMART tests on your machine?
 

hidperf

Dabbler
Joined
Jun 12, 2012
Messages
34
You may have a failing disk. Have you tried running SMART tests on your machine?

Not yet but all the disks are brand new WD Reds, they better not be failing or I'm gonna rage :mad:
 

Digidoc

Dabbler
Joined
Oct 30, 2011
Messages
41
Not yet but all the disks are brand new WD Reds, they better not be failing or I'm gonna rage :mad:

Just because they're new doesn't mean they can't be failing. When I built my FreeNAS system last year I used seven 2TB WD Green drives. One failed not even four hours into being in my system. That's why I actually ended up with seven drives; I bought six at first. I sent the defective drive back to WD and bought another one in the interim. When the drive came back I decided to just redo my array with seven drives.

Oh yeah... BTW, under CIFS, uncheck the "DOS attributes" box. I found that my system got a really nice shot in the arm speed-wise when I disabled that. I'll have to look to see what other settings I changed, but between my desktop and my server I regularly saturate my gigabit link. Of course, that only happens when I copy data from the 2x128GB SSDs in RAID-0 on my desktop. When I try to copy files from the hard drive to the server it doesn't even come close...
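For what it's worth, I believe that checkbox corresponds roughly to Samba's "store dos attributes" behaviour. A hand-edited smb.conf equivalent might look something like the snippet below (these are real Samba parameters, but whether the FreeNAS GUI toggles exactly these is my assumption, so treat it as a sketch only):
Code:
[global]
    # Don't store/emulate legacy DOS attributes; skipping this
    # per-file metadata work can reduce CIFS overhead
    store dos attributes = no
    dos filemode = no
    dos filetimes = no
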
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
New drives can absolutely be bad. It happens more often than anyone wants. Google "bathtub curve" or "Weibull distribution" to read up on infant mortality. The beginning of a device's life is a period when it is more likely to fail.

I still think you should read the FreeNAS manual on running SMART tests and test each of your drives. You could also use the WD tools available on their website to test the drives. A failing disk will cause intermittent performance issues, often leading to copies from/to the server timing out (which is exactly what you are seeing in your original post).
 

noee

Dabbler
Joined
May 21, 2012
Messages
13
I had a similar problem setting up a FreeNAS test box with brand new Samsung F3 1TBs and CIFS. Turned out to be the SATA cable...

Point is, check disk, cable, port.
 

hidperf

Dabbler
Joined
Jun 12, 2012
Messages
34
- Use iperf on FreeNAS (it's included) and jperf on your Windows machine to get the networking going right without having to worry about anything else. You should be able to get gigabit speeds. If not, you have work to do. If so, we can go from there.

Looks like I have some network troubleshooting to do first. I did a 5-minute test when I got home tonight and the speeds started around 45MB/s and hovered around 48MB/s.

- Look on the forums for "dd" commands to test the speed of your zpool. It should be fine, but when things don't work, there's no sense in assuming anything. Speaking of which, why did you decide to let a card do the RAID support instead of FreeNAS (as the documentation recommends)?

I found a couple of posts with "dd" commands. I'm assuming this is what you're looking for?
Code:
dd if=/dev/zero of=/dev/null bs=1024k count=20k
20480+0 records in
20480+0 records out
21474836480 bytes transferred in 5.859396 secs (3665025487 bytes/sec)


And this
Code:
dd if=/dev/random of=/dev/null bs=1024k count=20k
20480+0 records in
20480+0 records out
21474836480 bytes transferred in 273.522188 secs (78512228 bytes/sec)



- What's the output of "zpool status" from the shell?

zpool status gets me this
Code:
 pool: FreeNAS
 state: ONLINE
 scrub: none requested
config:

        NAME                                          STATE     READ WRITE CKSUM
        FreeNAS                                       ONLINE       0     0     0
          gptid/c79f99e5-11c5-11e2-8461-000e04b77199  ONLINE       0     0     0

errors: No known data errors


BTW, when I copy a 7.76GB ISO from the FreeNAS box to the Windows box I'm getting around 70MB/s, if that helps at all.
7gig from.jpg
 

hidperf

Dabbler
Joined
Jun 12, 2012
Messages
34
You may have a failing disk. Have you tried running SMART tests on your machine?

I've searched high and low and I can't figure out how to do a SMART test. I did run across some commands at some point tonight, and one of the results stated the drives had no SMART errors, but I have no idea how to run a SMART test from the command line.
 

Stephens

Patron
Joined
Jun 19, 2012
Messages
496
Looks like I have some network troubleshooting to do first. I did a 5-minute test when I got home tonight and the speeds started around 45MB/s and hovered around 48MB/s.

Mine wasn't line speed either, so I had to take a look at it (I was new to all this). When I looked at the shell, I noticed it used a TCP window size of 64k. So I went back into jperf and changed "TCP Buffer Length" (in Transport Layer Options) to 64 Kbytes. Bang. 113 MBytes/sec, which is pretty close to max speed (about 900Mbits instead of 1,000).
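If you'd rather do that from the command line than the jperf GUI, the equivalent client option is -w (TCP window/buffer size); a minimal sketch, with a placeholder IP for your NAS:
Code:
# 60-second run against the NAS with a 64 KB TCP window
iperf -c 192.168.1.100 -w 64k -t 60 -i 5
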

I found a couple of posts with "dd" commands. I'm assuming this is what you're looking for?

Well, it's what I wanted you to do to help yourself diagnose your issues. If the HDDs aren't handling gigabit+ speeds, you're not going to see gigabit+ speeds when doing transfers.

zpool status gets me this

OK, so you just have a "pool" of 1 drive. Nothing to see there.

I've searched high and low and I can't figure out how to do a SMART test. I did run across some commands at some point tonight, and one of the results stated the drives had no SMART errors, but I have no idea how to run a SMART test from the command line.

Do a search on "smartctl". That's the command you want but I don't remember the options offhand. It's been covered here on the forum. You might want to run it from an ssh session so you can copy/paste. You want to do a long test.
 

hidperf

Dabbler
Joined
Jun 12, 2012
Messages
34
Mine wasn't line speed either, so I had to take a look at it (I was new to all this). When I looked at the shell, I noticed it used a TCP window size of 64k. So I went back into jperf and changed "TCP Buffer Length" (in Transport Layer Options) to 64 Kbytes. Bang. 113 MBytes/sec, which is pretty close to max speed (about 900Mbits instead of 1,000).

OK. I changed the TCP buffer length and got a pretty steady 90MB/s. Good call. So still a little under line speed but much better than the first test.
64k buffer.jpg

I also bypassed the switch earlier just to make sure it wasn't causing any problems and got terrible and erratic speeds. I had it back in the mix when I did the test above though.
crossover.jpg


Well, it's what I wanted you to do to help yourself diagnose your issues. If the HDDs aren't handling gigabit+ speeds, you're not going to see gigabit+ speeds when doing transfers.

Sorry, what I meant was: are those the kind of tests you were wanting to see? If I read it correctly, the first test was moving data at around 29Gb/s and the second was only 0.628Gb/s. The posts I was finding gave conflicting information on the tests, though. Some said the zero test wasn't accurate; others said the random test wasn't. With SATA III drives, 29Gb/s seems a little far-fetched, but then 0.628Gb/s seems awful.
I'm new to all this, so I'm trying to weed out the good info from the bad.

OK, so you just have a "pool" of 1 drive. Nothing to see there.

The hardware RAID card has been mentioned a few times so far. Should I just remove it, use the onboard SATA connectors and reconfigure FreeNAS? The volume I have on here is still empty, so I'm not losing anything by doing that except my time.

Do a search on "smartctl". That's the command you want but I don't remember the options offhand. It's been covered here on the forum. You might want to run it from an ssh session so you can copy/paste. You want to do a long test.

I did find something with the test but couldn't get it to work. I'll go back and work on it some more.

Thanks again for your help.
 

Stephens

Patron
Joined
Jun 19, 2012
Messages
496
Sorry, that's my bad. The RAID card will indeed present the pool as one drive to FreeNAS. I'm reading/responding to a lot of messages (on various sites) and I didn't go back to read all your details. One of the reasons you may not be able to get smartctl to work could be your controller. This is another reason why it's better to hook the drives directly to the motherboard and let FreeNAS / ZFS manage them. Even the documentation suggests skipping the RAID cards and going direct. Everything I've seen so far tells me that's good advice. It doesn't mean RAID cards can't work, and it doesn't mean there aren't times to use them. But it does mean the default should be not to. Dump it.

We'll have a better idea of what kinds of I/O issues you have going on once we can see what's happening with your individual drives (after you dump the RAID card).
 

matram

Dabbler
Joined
Aug 22, 2012
Messages
18
Dell PERC SMART

On a Dell server, if the SMART status is degraded, the LED on the drive caddy will show yellow.

You can also boot into the PERC firmware and check physical drive status.

/Mats
 

hidperf

Dabbler
Joined
Jun 12, 2012
Messages
34
Sorry, that's my bad. The RAID card will indeed present the pool as one drive to FreeNAS. I'm reading/responding to a lot of messages (on various sites) and I didn't go back to read all your details. One of the reasons you may not be able to get smartctl to work could be your controller. This is another reason why it's better to hook the drives directly to the motherboard and let FreeNAS / ZFS manage them. Even the documentation suggests skipping the RAID cards and going direct. Everything I've seen so far tells me that's good advice. It doesn't mean RAID cards can't work, and it doesn't mean there aren't times to use them. But it does mean the default should be not to. Dump it.

We'll have a better idea of what kinds of I/O issues you have going on once we can see what's happening with your individual drives (after you dump the RAID card).

So here's what I've got so far. I removed the RAID card and hooked everything up to the onboard SATA controller, with the drives enabled as plain SATA (no RAID) in the motherboard BIOS. I removed the old volumes and configured a new one with the 6 WD drives in RAIDZ2.
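(For reference, what the GUI builds is roughly equivalent to a manual pool create like the sketch below. The device names are placeholders, and the FreeNAS volume manager actually partitions the disks and references them by gptid, so this is only to illustrate the layout, not something to run by hand.)
Code:
# Six whole disks in a single raidz2 vdev (device names are examples only)
zpool create freenas raidz2 da0 da1 da2 da3 da4 da5
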

Ran Iperf and Jperf with the 64k window and got the same 90MB/s speed, mostly stable.
no raid 64k buffer.jpg

Ran the same dd commands and got pretty much the same results

Zero file
Code:
dd if=/dev/zero of=/dev/null bs=1024k count=20k
20480+0 records in
20480+0 records out
21474836480 bytes transferred in 5.340310 secs (4021271621 bytes/sec)


Random file
Code:
dd if=/dev/random of=/dev/null bs=1024k count=20k
20480+0 records in
20480+0 records out
21474836480 bytes transferred in 273.803735 secs (78431496 bytes/sec)


zpool status gives me this now

Code:
 pool: freenas
 state: ONLINE
 scrub: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        freenas                                         ONLINE       0     0     0
          raidz2                                        ONLINE       0     0     0
            gptid/10efd3f1-1a1c-11e2-86a1-000e04b77199  ONLINE       0     0     0
            gptid/114dadc7-1a1c-11e2-86a1-000e04b77199  ONLINE       0     0     0
            gptid/11aa50f8-1a1c-11e2-86a1-000e04b77199  ONLINE       0     0     0
            gptid/1206470c-1a1c-11e2-86a1-000e04b77199  ONLINE       0     0     0
            gptid/1264c2f2-1a1c-11e2-86a1-000e04b77199  ONLINE       0     0     0
            gptid/12c11d81-1a1c-11e2-86a1-000e04b77199  ONLINE       0     0     0

errors: No known data errors


I'm running the long SMART test on all the drives right now. I'll report back with my findings.
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
I found a couple of posts with "dd" commands. I'm assuming this is what you're looking for?
No, and turn off compression (if enabled) during the test. The dd commands you should be using are the ones in the [thread=981]performance sticky[/thread].
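To spell out why (this is only a rough sketch; the sticky has the exact commands): piping /dev/zero into /dev/null never touches the pool at all, so the idea is to write a real file onto the pool and read it back, with compression off so ZFS can't just collapse the zeros. The dataset path below is a placeholder for wherever your pool is mounted:
Code:
# Write test: create a 20GB file on the pool
dd if=/dev/zero of=/mnt/freenas/ddtest bs=1024k count=20k

# Read test: read the same file back
dd if=/mnt/freenas/ddtest of=/dev/null bs=1024k

# Clean up
rm /mnt/freenas/ddtest
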

I've searched high and low and I can't figure out how to do a SMART test.
How about the manpage? Don't forget to specify the proper device.

The hardware RAID card has been mentioned a few times so far. Should I just remove it, use the onboard SATA connectors and reconfigure FreeNAS? The volume I have on here is still empty, so I'm not losing anything by doing that except my time.
Reconfigure it at least. ZFS can make better decisions when it knows how many disks it's dealing with. If the RAID card has a write cache with a BBU and you want to take advantage of it, that's fine. If it doesn't, then there's no reason I can think of to use it.

On a Dell server, if the SMART status is degraded, the LED on the drive caddy will show yellow.
A rather poor substitute for actually looking at the SMART info.
 