Pheran's 32TB FreeNAS build with photos

Sir.Robin

Guru
Joined
Apr 14, 2012
Messages
554
Could not resist... here's mine!! Even got that yellow color :D

[Photo attachment: DSC_2801.JPG]


For those interested, I have 4 PCH ports connected to an ICY DOCK for my SSDs, where all my VMs reside.
The extra dual-port NIC is connected to a pfSense VM :)
 

Revolution

Dabbler
Joined
Sep 8, 2015
Messages
39
Is the Fractal Design Dynamic GP-14 silent? Because my Define R5 is really silent at the moment and I need another fan in the front.
 

Sir.Robin

Guru
Joined
Apr 14, 2012
Messages
554
Regarding the fan headers... I have 5 fans, so I use all the FAN headers, with FANA for the CPU. Might not be correct, but who cares? It works. Been running happily since... eeeh... when I bought it 2 years ago. :)
 

Pheran

Patron
Joined
Jul 14, 2015
Messages
280
SO. MUCH. WIN.

I wasn't expecting to do this when I built this server, but this FreeNAS box just went from great to #$@!* amazing for an additional investment of $79.

I noted in post #3 that benchmarking this box is fairly pointless because the gigabit connection is an obvious bottleneck - you can transfer files at 110 MB/s (effectively filling the gig link) and the box is just yawning. Using link aggregation (multiple gigabit links) won't make anything faster for a single client because each session will only use one link. The only real option to increase single client speed is 10-gigabit ethernet. I assumed that this would be cost-prohibitive, and for many use cases it still is. But here's the thing - 10G switches are cost prohibitive, and many clients aren't equipped to use them anyway. Ten gig NICs, on the other hand, can be gotten cheaply - far more cheaply than I expected. In my home, if I'm doing any heavy-duty work or transferring a lot of data (Blu-ray rips for example), I'm using my primary PC, which is sitting right next to the FreeNAS server. In fact you can see a picture of it in post #3.

I realized that if I could have a 10 Gbps path to FreeNAS for just this one PC, it would be enormously useful. For that, you don't need a switch. You only need two 10G NICs and a way to connect them. After looking around eBay, I discovered that Chelsio S310E-CR NICs are dirt cheap - seriously, $25 or so! These are supported by FreeNAS and have an SFP+ slot on the card. Ten gig connections often use SFP+ transceivers with optics and fiber, but there's a cheaper way for short runs. You can buy a copper twinax cable with SFP+ modules on each end that plugs straight into these cards. I got a Tripp Lite N280-01M-BK cable off eBay for $20. By the time I added shipping, my total cost for 2 cards and the twinax cable was $79.

The NICs arrived today so I installed them into the FreeNAS server and my Windows 7 box. The install was practically effortless. FreeNAS immediately recognized the card and gave me a new cxgb0 interface. I downloaded the Windows drivers from Chelsio's support site and installed them with no trouble. You do have to manually configure both NICs when doing this. These NICs are not part of your regular network - you are creating a very small second network with only 2 machines on it. I arbitrarily chose to use a 192.168.250.0 subnet for this network, with FreeNAS on 192.168.250.1 and my Windows PC on 192.168.250.2. I then disconnected my existing CIFS drives in Windows (which were using my main 192.168.1 network). I mapped new drives via \\192.168.250.1 so that they would actually use the 10G NIC. Time for testing!
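
If you want to sanity-check that traffic is really riding the new link (and get a speed figure independent of the Windows copy dialog), a quick point-to-point test along these lines works. This is only a sketch, assuming Python 3 is available on both ends: the 192.168.250.x addresses are the ones above, while the port, chunk size, and function names are placeholders I made up. Run receive() on the FreeNAS side and send() from the desktop.

Code:
import socket
import time

PORT = 5001          # arbitrary test port
CHUNK = 1 << 20      # 1 MiB per send/recv call
TOTAL = 2 << 30      # push 2 GiB of zeros through the link

def receive(bind_addr="192.168.250.1"):
    # Run this on the FreeNAS box; it accepts one connection and times the transfer.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((bind_addr, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            got, start = 0, time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                got += len(data)
            secs = time.time() - start
            print(f"received {got / 1e6:.0f} MB in {secs:.1f} s "
                  f"= {got * 8 / secs / 1e9:.2f} Gbps")

def send(server_addr="192.168.250.1"):
    # Run this on the client; it streams TOTAL bytes to the receiver.
    buf = bytes(CHUNK)
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((server_addr, PORT))
        sent = 0
        while sent < TOTAL:
            cli.sendall(buf)
            sent += CHUNK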

I'll cut to the chase - I can now copy files between my PC and FreeNAS at up to 500 MB/s (4 Gbps). This is a huge improvement over what I had; I couldn't be happier. Considering that the SATA3 link to the SSD in my desktop maxes out at 6 Gbps, I'm not at all surprised that the transfer is only 4 Gbps. Don't expect a full 10 Gbps if you do this.

Two caveats if you are thinking about replicating this. First, the S310E-CR is basically an obsolete card. It works fine in FreeNAS 9.3 and Windows 7 x64, but I cannot guarantee that it works with Windows 8 or 10. Second, don't think you are going to get 500 MB/s if your client is using a spinning drive. The only reason this works is that I have a fast SSD on the client side, and 8 spinning drives feeding me data (at ~70 MB/s each) on the server side.
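
For anyone wondering where the ceiling comes from, the back-of-the-envelope numbers line up. This is just my rough math, assuming SATA3's usual 8b/10b encoding; the only measured figure is the 500 MB/s above:

Code:
observed_mb_s = 500                 # copy speed seen above
print(observed_mb_s * 8 / 1000)     # -> 4.0 Gbps on the wire

# SATA3 is 6 Gbps raw, but 8b/10b encoding leaves roughly 600 MB/s of payload,
# so ~500 MB/s real-world is already close to the client SSD's ceiling.
print(6e9 * 8 / 10 / 8 / 1e6)       # -> 600.0 MB/s

# Server side: 8 spinning drives at ~70 MB/s each is in the same ballpark.
print(8 * 70)                       # -> 560 MB/s aggregate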

I'll try to update this post over the weekend with a couple of photos.

EDIT: If you read farther in the thread you'll find that the Chelsio card didn't work out all that well in my desktop because it screwed up recovery from sleep mode. It does still work fine in the FreeNAS server though.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
10G switches are cost prohibitive, and many clients aren't equipped to use them anyway. Ten gig NICs, on the other hand, can be gotten cheaply

Ok, now you gave me an idea: make your own switch with an unused server and some NICs :)
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
Ok, now you gave me an idea: make your own switch with an unused server and some NICs :)
Probably wouldn't be able to do line speed, though.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Probably wouldn't be able to do line speed, though.

I can trivially build a line rate 10G switch out of a server and some good cards. The trick is that it can't switch small packet traffic at line rate, just larger packet traffic.

I've been a fan of software-based networking (now en vogue as "Software Defined Networking" etc.) for several decades. The software is nowhere near as capable as hardware; my hardware routers can handle just shy of a billion packets per second, but software will be less than 1% of that. But most of the time that could be fine for a small network!
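
To put rough numbers on the small-packet point: packets per second scales inversely with frame size, so "line rate" with 64-byte frames is a far harder target than with full-size frames. A quick back-of-the-envelope sketch, assuming nothing beyond standard Ethernet framing overhead:

Code:
LINK_BPS = 10e9   # 10 GbE
OVERHEAD = 20     # bytes per frame: 8-byte preamble + 12-byte inter-frame gap

def pps(frame_bytes):
    # Packets per second at line rate for a given on-the-wire frame size.
    return LINK_BPS / ((frame_bytes + OVERHEAD) * 8)

print(f"64-byte frames:   {pps(64):,.0f} pps")    # ~14.9 million/s
print(f"1518-byte frames: {pps(1518):,.0f} pps")  # ~0.8 million/s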
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Yep, of course I wasn't talking about making a 48-port switch for a company, but more of a small switch for a home network.
 

xnaron

Explorer
Joined
Dec 11, 2014
Messages
98
SO. MUCH. WIN.

I wasn't expecting to do this when I built this server, but this FreeNAS box just went from great to #$@!* amazing for an additional investment of $79.

Thanks for posting this. I am going to order the cards today. I found some cheap twinax copper cables on eBay. The model you listed can't be found for less than about $50, but there are many other makes that are much cheaper. I am assuming these SFP+ twinax copper cables are standardized?

I'm just trying to confirm whether this will work on Ubuntu 14.04.3 LTS (3.13.0-40-generic). It looks like it should, since it's supposed to use the cxgb3 module, which is installed.
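
A quick way to check on the Ubuntu box whether the cxgb3 driver is present and loaded. This is just a sketch I'd run with Python 3, using the standard /proc/modules listing and modinfo; nothing Chelsio-specific:

Code:
import subprocess

def cxgb3_loaded():
    # True if the cxgb3 module is currently loaded.
    with open("/proc/modules") as f:
        return any(line.split()[0] == "cxgb3" for line in f)

def cxgb3_available():
    # True if modinfo can find the cxgb3 module on disk.
    return subprocess.call(["modinfo", "cxgb3"],
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL) == 0

print("loaded:", cxgb3_loaded(), "available:", cxgb3_available())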
 
Joined
Sep 18, 2015
Messages
6
Thanks for this post Pheran! I'm looking at building an almost identical box and this is great stuff! Thanks!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Thanks for posting this. I am going to order the cards today. I found some cheap twinax copper cables on eBay. The model you listed can't be found for less than about $50, but there are many other makes that are much cheaper. I am assuming these SFP+ twinax copper cables are standardized?

SFP+ twinax is fairly likely to work, but not absolutely guaranteed. The only thing that I would expect to work as consistently as possible is to actually get the proper SFP+'s and to use fiber. 10GigE has a very good interoperability track record within the realm of hardware we've been suggesting to users.
 

xnaron

Explorer
Joined
Dec 11, 2014
Messages
98
SFP+ twinax is fairly likely to work, but not absolutely guaranteed. The only thing that I would expect to work as consistently as possible is to actually get the proper SFP+'s and to use fiber. 10GigE has a very good interoperability track record within the realm of hardware we've been suggesting to users.

I bought this one (http://www.ebay.com/itm/181883137133), so hopefully it will work. Here is the cable/adapter compatibility guide for various Chelsio cards, including the S310E-CR: http://www.chelsio.com/wp-content/uploads/2011/07/Cable-configs-July-2012.pdf
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
I'd give the Cisco ones a better-than-average chance of working because so much networking gear gets hooked up to Cisco. People hate it when the common stuff doesn't work.
 

lmannyr

Contributor
Joined
Oct 11, 2015
Messages
198
Pheran,

So buying one extra fan for the front of the R5 would be sufficient? How are those 2 old drives that peaked at 50C? Are they not in the cooling path of the two front fans?

I'm ordering parts so just checking if I should get 1 or 2 extra fans.

Thanks
 

Pheran

Patron
Joined
Jul 14, 2015
Messages
280
Pheran,

So buying one extra fan for the front of the R5 would be sufficient? How are those 2 old drives that peaked at 50C? Are they not in the cooling path of the two front fans?

The text right above that output states that those 2 drives only saw 50C because they were used in another (not as well ventilated) server before. As you can see, their temps are nowhere near those values now.
 