My FreeNAS Benchmark - 109.2MB/s W - 109.4MB/s R

Status
Not open for further replies.

tmacka88

Patron
Joined
Jul 5, 2011
Messages
268
Hey guys,

I just did a quick benchmark on my FreeNAS system and thought I would share it with everyone. I used my iMac, which has a SanDisk Extreme II 480GB SSD (so the client side would outperform the NAS).

My current configuration is as follows:

FreeNAS-9.1.1-RELEASE-x64 (a752d35)
Supermicro X10SLM-F Motherboard
Intel(R) Xeon(R) CPU E3-1220 v3 @ 3.10GHz
32GB ECC RAM
6 x WD Green 3TB in RAIDZ2

According to "Disk Speed Test", I am getting:
109.2 MB/s Write
109.4 MB/s Read
This is over an AFP share.
[Attachment: FreeNAS Speed Test.png]

Not sure how accurate this program is, but it seems quite accurate after testing a few other drives, such as a USB 2.0 external HDD.

With these speeds, it is possible I am maxing out my LAN connection to the FreeNAS box (gigabit network = 125 MB/s theoretical; I will confirm later whether my iMac run uses Cat5e or Cat6, but iMac to router is Cat6), so I am wondering whether these speeds will increase or stay where they are if I set up link aggregation (LACP).
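For a quick sanity check on that theoretical figure (raw line rate only, no protocol overhead counted yet):

    # Raw gigabit Ethernet line rate, before any protocol overhead:
    bits_per_second = 1_000_000_000     # 1 Gbit/s
    print(bits_per_second / 8 / 1e6)    # 125.0 (MB/s, decimal megabytes)
    print(1024 / 8)                     # 128.0 - the figure you get if 1 Gbit is treated as 1024 Mbit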

I am quite happy with these speeds, as most people would be, but I kind of expected that from this machine; after all, it cost quite a bit. Totally worth it, though :).
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
Green disks? OMG.

Don't be so lazy; use the forum search function (iperf, speed, test).
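iperf, which FreeNAS already includes, measures the wire with the disks out of the picture. Purely to illustrate the idea, here is a crude Python stand-in (the port and IP below are examples, nothing FreeNAS-specific); the real tool is still the right choice:

    import socket, sys, time

    def serve(port=5001):
        # Receiving side: count bytes arriving on one connection.
        srv = socket.socket()
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen(1)
        conn, _ = srv.accept()
        total, start = 0, time.time()
        while True:
            chunk = conn.recv(65536)
            if not chunk:
                break
            total += len(chunk)
        print("%.1f MB/s" % (total / (time.time() - start) / 1e6))

    def blast(host, port=5001, seconds=10):
        # Sending side: push zeros as fast as TCP will take them.
        # No disks involved on either end, so this isolates the network.
        conn = socket.create_connection((host, port))
        buf = b"\0" * 65536
        deadline = time.time() + seconds
        while time.time() < deadline:
            conn.sendall(buf)
        conn.close()

    if __name__ == "__main__":
        # e.g. `python net_test.py serve` on the NAS, then
        # `python net_test.py blast 192.168.1.2` on the client (IP is an example)
        blast(sys.argv[2]) if sys.argv[1] == "blast" else serve()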
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
With these speeds, it is possible I am maxing out my LAN connection to the FreeNAS box (gigabit network = 125 MB/s theoretical; I will confirm later whether my iMac run uses Cat5e or Cat6, but iMac to router is Cat6), so I am wondering whether these speeds will increase or stay where they are if I set up link aggregation (LACP).

Link aggregation will not do anything for you, unless you have several clients using the server at once. Also, GbE requires Cat 5e (for relatively short runs, not the whole 100m that does require Cat 6).

If those numbers are accurate, you're maxing out your connection (keep in mind there are several layers of overhead).
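Roughly, for standard 1500-byte frames, the ceiling looks like this (a back-of-the-envelope sketch, not an exact accounting):

    # Per-frame cost of TCP over gigabit Ethernet, standard 1500-byte MTU:
    payload  = 1500 - 20 - 20     # bytes left after IP and TCP headers
    overhead = 14 + 4 + 8 + 12    # Ethernet header + FCS + preamble + inter-frame gap
    wire     = 1500 + overhead    # 1538 bytes actually occupying the link per frame
    rate     = 125_000_000        # 1 Gbit/s expressed as bytes per second
    print("%.1f MB/s" % (rate * payload / wire / 1e6))   # ~118.7 MB/s ceiling

That ceiling is why ~109 MB/s over AFP already counts as close to maxed out.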
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
Also, GbE requires Cat 5e (for relatively short runs, not the whole 100m that does require Cat 6).
Cat5e will certainly go to 100m for GigE. Cat6 isn't required until you go to 10GBASE-T, and even then Cat6 has a length limit of 37-55m. Cat6a will go 100m for 10GBASE-T.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Cat5e will certainly go to 100m for GigE. Cat6 isn't required until you go to 10GBASE-T, and even then Cat6 has a length limit of 37-55m. Cat6a will go 100m for 10GBASE-T.

On paper, but I've heard that many Cat. 5e cables have trouble close to the 100m mark. Never actually needed such a long run, fortunately...
 

tmacka88

Patron
Joined
Jul 5, 2011
Messages
268
Green disks? OMG.

Don't be so lazy; use the forum search function (iperf, speed, test).


I don't understand, why do I need to search the forum?

I am really happy with my speeds, which are not limited by the base-model WD HDDs. Yes, they are not the Red NAS HDDs, but I couldn't see the benefit for the extra money. The Reds can probably stand higher temps and so on, but my setup is RAIDZ2 with notifications, so I can replace a drive ASAP if it fails.

So far I have used WD Greens for 3 or so years in FreeNAS and only had one die, which was straight after purchase (must have been a dud). Yeah, they might not last as long, potentially, but they are cooled very well, and my motherboard has a buzzer that sounds above a set temp (50 degrees C or so, I think). I will also be setting up a relay to turn on a larger fan on my server rack when it goes above a certain temp, for summer.

Please tell me the benefit of spending an extra $60/HDD ($360 total) at the time; I honestly don't see an upside for my setup. Please feel free to correct my assumption, though. As for speed with a Red, maybe I could get better speeds, but only with LACP, as I am already maxing out my LAN with the Greens. To me that's bloody good. I will be giving it a shot soon, hopefully, as more speed is always better, especially at no extra cost.
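On the fan-relay idea: the sensing half could be as simple as polling smartctl, along these lines (a rough sketch; the device name and the 50 C threshold are just examples):

    import subprocess, time

    THRESHOLD = 50  # deg C, matching the motherboard buzzer setting mentioned above

    def drive_temp(dev):
        # Attribute 194 (Temperature_Celsius); the raw value is the
        # tenth column of `smartctl -A` output. /dev/ada0 is an example name.
        out = subprocess.check_output(["smartctl", "-A", dev], text=True)
        for line in out.splitlines():
            if "Temperature_Celsius" in line:
                return int(line.split()[9])

    while True:
        temp = drive_temp("/dev/ada0")
        if temp is not None and temp >= THRESHOLD:
            print("drive at %d C - this is where the relay would switch on" % temp)
        time.sleep(60)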

Link aggregation will not do anything for you, unless you have several clients using the server at once.

How won't link aggregation help increase speed (assuming the drives aren't maxing out and the LAN is the bottleneck)? Are you not confusing this with load balancing?

Also, theoretically, even though you are using a gigabit switch, you generally don't quite get those speeds at the switch due to losses in the cable etc. (even with runs of less than 100m). Using a Cat6 cable should potentially get you closer to gigabit speeds at the switch. I am sure the differences would be very small, but it is still worth a thought. I will do some tests when I get a chance and see if there is an actual difference; it would be interesting to know.

Cheers
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
To answer the OP question in the first posting, I do not believe LACP will gain you any benefit unless you do a full implementation of it (special hardware required) and you are correct that your NIC is the limitation.
 

tmacka88

Patron
Joined
Jul 5, 2011
Messages
268
To answer the OP question in the first posting, I do not believe LACP will gain you any benefit unless you do a full implementation of it (special hardware required) and you are correct that your NIC is the limitation.

Yes, I am in the process of getting a Cisco SRW2024 switch that supports LACP. I don't need the extra speed or anything, but I do need a larger switch anyway, and I can get one cheap that supports it, so I thought I would give it a shot. I won't be disappointed if I don't gain any more speed; I'm more curious than anything.

Cheers
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Yes, I am in the process of getting a Cisco SRW2024 switch that supports LACP. I don't need the extra speed or anything, but I do need a larger switch anyway, and I can get one cheap that supports it, so I thought I would give it a shot. I won't be disappointed if I don't gain any more speed; I'm more curious than anything.

Cheers
Post your results, but be specific about what you do. Many people would like to know how something like that turns out.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
How won't link aggregation help increase speed (assuming the drives aren't maxing out and the LAN is the bottleneck)? Are you not confusing this with load balancing?

Also, theoretically, even though you are using a gigabit switch, you generally don't quite get those speeds at the switch due to losses in the cable etc. (even with runs of less than 100m). Using a Cat6 cable should potentially get you closer to gigabit speeds at the switch. I am sure the differences would be very small, but it is still worth a thought. I will do some tests when I get a chance and see if there is an actual difference; it would be interesting to know.

Cheers

Don't expect any significant (as in beyond random fluctuations) changes moving from Cat. 5e to Cat. 6.

If you use LACP, clients get distributed across the available interfaces. You can't trivially send data from two interfaces to two interfaces on the same client. You also need compatible networking hardware between the server and clients. I believe the latest SMB version included in Windows Server does have some black magic that allows such a setup, but to my knowledge it's the only such solution at the moment.
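To see why a single client gains nothing, consider a toy model of the hash policy. Real lagg implementations hash MAC/IP/port fields, but the consequence is the same: one source/destination pair is pinned to one member link.

    import hashlib

    def pick_link(src_ip, dst_ip, n_links=2):
        # Toy LACP-style hash: the flow's endpoints decide the link,
        # so every frame between this client and the server takes the
        # same physical port no matter how many links are aggregated.
        digest = hashlib.md5(("%s-%s" % (src_ip, dst_ip)).encode()).digest()
        return digest[0] % n_links

    print(pick_link("192.168.1.10", "192.168.1.2"))  # client -> NAS: always the same link
    print(pick_link("192.168.1.11", "192.168.1.2"))  # a second client may land on the other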
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
On paper, but I've heard that many Cat. 5e cables have trouble close to the 100m mark. Never actually needed such a long run, fortunately...
In reality they go past 100m by a small margin. I have many runs around 100m at different locations for work. Never been an issue unless the vendor mucked up the ends.

Many Cisco switches (not the home or small-business models) offer TDR testing, which will tell you the length of the cable and where any issues in it are. This is how I verify our runs before signing off with the cable vendor. I don't do any of the runs myself, just the network equipment.
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
How won't link aggregation help increase speed (assuming the drives aren't maxing out and the LAN is the bottleneck)? Are you not confusing this with load balancing?

Link aggregation (with the proper 802.3ad LACP) speeds things up when you have many connections hitting the same server, but does nothing for a single connection, which will max out at the top speed of one of the member links.

The exception is SMB3 (which is Windows 8 and later only). It splits traffic over multiple connections, so a single transfer can take advantage of multiple links. Pretty cool stuff, actually.

I don't understand, why do I need to search the forum?

I am really happy with my speeds, which are not limited by the base-model WD HDDs. Yes, they are not the Red NAS HDDs, but I couldn't see the benefit for the extra money. The Reds can probably stand higher temps and so on, but my setup is RAIDZ2 with notifications, so I can replace a drive ASAP if it fails.

So far I have used WD Greens for 3 or so years in FreeNAS and only had one die, which was straight after purchase (must have been a dud). Yeah, they might not last as long, potentially, but they are cooled very well, and my motherboard has a buzzer that sounds above a set temp (50 degrees C or so, I think). I will also be setting up a relay to turn on a larger fan on my server rack when it goes above a certain temp, for summer.

Please tell me the benefit of spending an extra $60/HDD ($360 total) at the time; I honestly don't see an upside for my setup. Please feel free to correct my assumption, though. As for speed with a Red, maybe I could get better speeds, but only with LACP, as I am already maxing out my LAN with the Greens. To me that's bloody good. I will be giving it a shot soon, hopefully, as more speed is always better, especially at no extra cost.
He is probably referring to the IntelliPark issue. I thought the same way you did at first, but 2-3 years of regular use seems to be all it takes to get the head-park count up to 300,000, which is the stated top limit for the Green drives.
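You can see where any given drive stands with smartctl; something like this rough sketch pulls the relevant attribute (the device name is an example):

    import subprocess

    def load_cycles(dev):
        # Attribute 193 (Load_Cycle_Count); the raw value is the last
        # column of `smartctl -A` output.
        out = subprocess.check_output(["smartctl", "-A", dev], text=True)
        for line in out.splitlines():
            if "Load_Cycle_Count" in line:
                return int(line.split()[-1])

    print(load_cycles("/dev/ada0"))  # compare against the 300,000 rating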
 

tmacka88

Patron
Joined
Jul 5, 2011
Messages
268
He is probably referring to the IntelliPark issue. I thought the same way you did at first, but 2-3 years of regular use seems to be all it takes to get the head-park count up to 300,000, which is the stated top limit for the Green drives.
Ahh, OK. Yeah, I totally missed that. Looking at a few things online, WD has a utility to disable this IntelliPark feature. I will check to make sure it works on my HDDs; if so, I will just disable it.
 

tmacka88

Patron
Joined
Jul 5, 2011
Messages
268
Doing a bit of Googling: WD released a firmware utility that lets you disable IntelliPark, which should therefore make the drives behave similarly to a WD Red.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Doing a bit of Googling: WD released a firmware utility that lets you disable IntelliPark, which should therefore make the drives behave similarly to a WD Red.

I wrote a very detailed how-to on the whole problem as well as how to solve it... It's in the guides section of the forum if anyone is interested.
 

tmacka88

Patron
Joined
Jul 5, 2011
Messages
268
Thanks, will definitely take a look.
 