BUILD Hardware selection - 1st FreeNAS build

CheckYourSix

Dabbler
Joined
Jun 14, 2015
Messages
19
Have there been any updates? I'm really interested in this build. Also, if you don't mind, what do you use the ESXi VMs for?
 

DataKeeper

Patron
Joined
Feb 19, 2015
Messages
223
Have there been any updates? I'm really interested in this build. Also, if you don't mind, what do you use the ESXi VMs for?

Unfortunately not much. The system has been running stable and cool with only a single shutdown, just this evening, due to a 45-minute power loss during a storm. I've been busy with spring house/property duties, a few repairs to the daily cars, pulling out our classic cars, family time and a couple of trips, so I haven't had much time to play around with the system.

I did resolve the BIOS issue which had the SATA boot options disappearing. Turned out one of the SATA DOMs I had was faulty. Replaced it and all has been fine.

It has been moving and sharing some movie files without issues, and the 10GbE network is awesome. The Intel X540-T1 cards and Netgear switch move data amazingly fast and I'd recommend them to anyone who has the cash; the cost is the only downside, as they're expensive no doubt. I had one evening where I had 4 transcoding streams running while moving about 8TB of data and the system didn't blink. The server itself, housed in the basement, remains cool and blazing fast. CPU temps have yet to hit 45°C and all 19 hard drives stay between 27-29°C idle, with a max of 34°C when streaming. Power consumption averages around 233W with the highest peak at 288W.
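
For anyone who wants to spot-check their own drive temps from the FreeNAS shell, a rough loop like the one below works; it assumes smartctl reports a Temperature_Celsius attribute for your drives (the Reds do), and kern.disks simply lists whatever disks the box sees.

for disk in $(sysctl -n kern.disks); do
    # pull the raw value of the Temperature_Celsius SMART attribute, if present
    temp=$(smartctl -A /dev/$disk | awk '/Temperature_Celsius/ {print $10}')
    echo "$disk: ${temp:-n/a}C"
done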

There is no doubt this system will take anything I toss at it with no issues. It's an outstanding system for our needs. It's certainly more than we need, but overbuilt is my preferred way with systems.

I picked up a spare M1015 for $85 to have on hand, plus another 7 drives to fill it up, with 2 spares to sit on a shelf alongside the M1015. I haven't been able to install the new drives yet.

As for the ESXi system.. not a heck of a lot yet; again, it's a time issue. I mostly got it running to play around with, as I've never touched it before, so what little time I get I spend learning the ins and outs of ESXi. Honestly, I'd prefer to have separate hardware and systems without a doubt, but virtualization cuts the costs immensely! Currently I have a Windows 10 VM running for the wife, which she uses for something every day though I don't know what. I have a Debian test VM running a full desktop install and another very small Debian VM running as a network print server for our numerous printers.

I've also built a new pfSense system as our network gateway. Hardware includes a Supermicro A1SRI-2758F (Atom C2758) board, 2 x 8GB Kingston KVR16LSE11/8 RAM and an Intel SSD S3500 120GB drive in a Supermicro CSE-510T-200B chassis for a full install. The S3500 drive died on me within a few days and it took a couple of weeks to get a replacement, but the system is dead silent, runs pretty much cold and takes no power to speak of. Great build if anyone is looking for a pfSense firewall, gateway, router, etc.

Can't really go wrong with the FreeNAS hardware setup I used; the install is straightforward, so just set up the system for your requirements. For savings, skip the 10GbE networking, drop down a step in the CPU department, halve the RAM to start, and begin with 6 or 8 drives. That's a huge initial price drop and would still give an awesome setup for any home user, with tons of expansion far into the future.
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
It has been moving and sharing some movie files without issues, and the 10GbE network is awesome. The Intel X540-T1 cards and Netgear switch move data amazingly fast and I'd recommend them to anyone who has the cash; the cost is the only downside, as they're expensive no doubt. I had one evening where I had 4 transcoding streams running while moving about 8TB of data and the system didn't blink. The server itself, housed in the basement, remains cool and blazing fast. CPU temps have yet to hit 45°C and all 19 hard drives stay between 27-29°C idle, with a max of 34°C when streaming. Power consumption averages around 233W with the highest peak at 288W.
Which switch do you have?

Can't really go wrong with the FreeNAS hardware setup I used; the install is straightforward, so just set up the system for your requirements. For savings, skip the 10GbE networking, drop down a step in the CPU department, halve the RAM to start, and begin with 6 or 8 drives. That's a huge initial price drop and would still give an awesome setup for any home user, with tons of expansion far into the future.
Good recommendation, half the RAM and lower CPU are indeed enough for most home users! ;)
 

DataKeeper

Patron
Joined
Feb 19, 2015
Messages
223
I have the Netgear ProSafe XS708E 10GbE switch and use 4 of the Intel X540-T1 network cards in my desktop PC, FreeNAS server, ESXi server and an email/web server for now.

Yeah.. most home users don't need anything like this or many of the other builds here. Really, my system is way overboard for most, and that includes myself. :D Most folks don't really need to step up to the DDR4 systems, in truth. While some parts are only a bit more expensive, the RAM is still a major expense with quad channel.
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
You could also start with the suggested reduced system in a smaller chassis and without the expander(s), and move up later...
 

pclausen

Patron
Joined
Apr 19, 2015
Messages
267
Hey DataKeeper,

So I got one of those Supermicro SATA DOMs (the 32GB one, just a single one) and popped it into one of the orange SATA connectors.

I enabled the SATA controller (not the sSATA one), and I can see it connected on port 4, as seen here:

satadombios.PNG


I also see it as a boot override option here:

satadombios2.PNG


But alas, like you, it does not appear under boot priorities. I think it is unlikely that I have a faulty SATA DOM as well, but I suppose it's possible.

I did go ahead and boot back into FreeNAS and mirrored my 16GB USB onto the 32GB SATA DOM.

I wonder: if I reboot and pull the USB, can I simply choose the boot override option and boot from the DOM with no ill effects?
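
In case it helps anyone, the rough shell-level equivalent of that mirroring step is sketched below. The GUI attach is the supported route and handles the partitioning and bootcode for you; the device names here (da0 for the USB stick, ada4 for the DOM) and the da0p2 partition are just guesses for illustration, so check camcontrol devlist and gpart show first.

gpart create -s gpt ada4                               # fresh GPT on the DOM
gpart add -t freebsd-boot -s 512k ada4                 # small boot partition
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada4
gpart add -t freebsd-zfs -a 4k ada4                    # rest of the DOM for ZFS
zpool attach freenas-boot da0p2 ada4p2                 # add it as a mirror of the USB member
zpool status freenas-boot                              # wait for the resilver to finish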
 

DataKeeper

Patron
Joined
Feb 19, 2015
Messages
223
I'd say remove the USB, and also remove and reseat the SATA DOM, then reboot and see if it works.

That's exactly the issue I was seeing, in that the SATA DOM was not being seen as a boot option. Not good to see it happening with someone else.. :(
 

pclausen

Patron
Joined
Apr 19, 2015
Messages
267
I did as you suggested (except reseating the SATA DOM, since that would have meant pulling everything back out of the rack), and it still does not show up as a boot option. However, I was able to choose it from the Boot Override list as I exited the BIOS. Boot speed is about the same as booting from USB, but at least the SATA DOM should be a bit more reliable than USB from what I gather.

Of course I'm now getting the boot volume in a degraded state alert since I pulled the USB and I'm running off the SATA DOM mirror. Next time I take the whole system down and pull out the chassis, maybe I'll try moving the DOM to the SATA5 slot to see if that helps.
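
If I decide not to put the USB stick back in, I believe detaching the missing member should clear the alert. A quick sketch from the shell, assuming the default freenas-boot pool name and that the old USB member was da0p2 (the System > Boot status page should be able to do the same thing):

zpool status freenas-boot          # note the name/GUID of the UNAVAIL USB member
zpool detach freenas-boot da0p2    # or use the GUID shown above if the device node is gone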
 
Joined
Oct 2, 2014
Messages
925
I did as you suggested (except reseating the SATA DOM, since that would have meant pulling everything back out of the rack), and it still does not show up as a boot option. However, I was able to choose it from the Boot Override list as I exited the BIOS. Boot speed is about the same as booting from USB, but at least the SATA DOM should be a bit more reliable than USB from what I gather.

Of course I'm now getting the boot volume in a degraded state alert since I pulled the USB and I'm running off the SATA DOM mirror. Next time I take the whole system down and pull out the chassis, maybe I'll try moving the DOM to the SATA5 slot to see if that helps.
You don't have rack rails for any of those servers? I just picked up a rack specifically so I don't have to rack and unrack them to get to the bottom or middle.
 

pclausen

Patron
Joined
Apr 19, 2015
Messages
267
I have one set of rack rails, but they don't fit the middle server as the hole spacing appears to have changed somewhere along the way with the 846 chassis. I have it on my todo list to look at when the change occurred and pick up another couple of sets :)

I'm also looking at a bracket for the LCD monitor to tighten that up a little better as well.
 

DataKeeper

Patron
Joined
Feb 19, 2015
Messages
223
Sigh.. I still haven't purchased a rack and everything is sitting neatly out on a table, my workbench and table saw for now. I'll need to find something soon. Thankfully, all my rack chassis included rails except for the small one I bought for the pfSense server, which I don't think accepts rails at all. It's so small and light it's not a big deal anyway.

I've run the updates and have rebooted a number of times now without any further DOM issues, thankfully. You will never see any difference in boot speed! It's the speed of updates where you'll see a difference and, IMO, that's well worth the price of the DOMs when factored in with the higher reliability. I'm still planning to order 1 more to have on the shelf as a spare, next to the spare hard drives and M1015 card. *I* know if/when something fails, every place in the world will be sold out of what I need and it'll be 3-4 weeks before they get any stock back in. :confused:
 
Joined
Oct 2, 2014
Messages
925
I have one set of rack rails, but they don't fit the middle server as the hole spacing appears to have changed somewhere along the way with the 846 chassis. I have it on my todo list to look at when the change occurred and pick up another couple of sets :)

I'm also looking at a bracket for the LCD monitor to tighten that up a little better as well.

I picked up a 25U mobile rack and just started racking all the servers last night. I gotta tidy up the cable management and all that jazz before it looks good. I originally looked at this 42U rack but then realized it may need to move around the basement at some point, depending on what's happening and whether people need to get in to work on the water heater, AC and such. So I went with a 25U mobile-ish rack; it says it supports up to 1200-odd pounds, so I have my 4U chassis, 2 rack mount APCs, an iSCSI SAN plus room for a second, and my 2 ESXi hosts. I will probably look into a rack mount KVM next.


Sigh.. I still haven't purchased a rack and everything is sitting neatly out on a table, my workbench and table saw for now. I'll need to find something soon. Thankfully, all my rack chassis included rails except for the small one I bought for the pfSense server, which I don't think accepts rails at all. It's so small and light it's not a big deal anyway.

You did get lucky. I had to buy rails for the rest of my servers, about $150 worth of rack rails... what a pain in the balls that was. The above racks are pretty nice; the mobile one feels very sturdy and I trust it to support all my stuff. I might end up investing in a second 24-bay 4U chassis.
 

DataKeeper

Patron
Joined
Feb 19, 2015
Messages
223
There is a localish shop here that gets in government surplus fully enclosed racks quite often, I'm told. Lots of full-height racks in stock, but I'd prefer a 36U at most. I was told most are in great shape and they usually run about $350 picked up. They simply haven't had any in the past few months.

Well.. I installed Plex this morning while everyone was asleep. I then made bacon, eggs and coffee for myself and everyone else, as well as serving them in bed... on Father's Day! :D I enjoyed it greatly since they had planned on making me breakfast, of course. ;)

Anyways.. loving this system! I had a 2TB transfer going and 3 streams transcoding to an iPhone, an iPad and my smart TV, then added a music directory with roughly 120k tracks for it to index and download data for, etc.

System load hasn't gone above 15% yet :D
 

DataKeeper

Patron
Joined
Feb 19, 2015
Messages
223
Figured I'd post this here in the build thread as well..
18 WD Reds. The pool is 3 vdevs of 6 drives each in RAIDZ2, giving 63.1TB, of which 40.7TB is usable. I will be adding another 6 drives soon, however it will be a separate pool.

[root@FileServ] /mnt/garage# dd if=/dev/zero of=/mnt/garage/temp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 105.864436 secs (1014261130 bytes/sec)

[root@FileServ] /mnt/garage# dd if=/mnt/garage/temp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 93.633825 secs (1146745659 bytes/sec)
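
For reference, that layout is roughly what you'd get from the command below, though I actually built the pool through the FreeNAS volume manager rather than by hand; da0-da17 are placeholder device names, and the GUI uses GPT partitions/gptids rather than raw disks.

zpool create garage \
    raidz2 da0  da1  da2  da3  da4  da5  \
    raidz2 da6  da7  da8  da9  da10 da11 \
    raidz2 da12 da13 da14 da15 da16 da17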
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
I have the Netgear ProSafe XS708E 10GbE switch and use 4 of the Intel X540-T1 network cards in my desktop PC, FreeNAS server, ESXi server and an email/web server for now.
I'm intrigued... how is the pfSense box (with its quad-NIC) connected to all this?
 

DataKeeper

Patron
Joined
Feb 19, 2015
Messages
223
I'm intrigued... how is the pfSense box (with its quad-NIC) connected to all this?

I'm only using 2 of the 4 NICs in the pfSense box at this time, but I'm planning to use a 3rd port down the road. Time is short, my entire home network is a mess right now, and it's likely to remain so for several more months as I slowly continue to rewire/renovate the house and reconfigure the network.

Currently I'm using 2 networks to make this work. The first is set up on the crappy FIOS wifi/router; we'll call it 192.168.1.0/24. The FIOS router's IP is 192.168.1.1. I simply connect the assigned WAN port of the pfSense box into the FIOS router's built-in switch, assign the pfSense WAN port an IP address of 192.168.1.2, set .1.1 as the pfSense gateway, and it now has access. Wifi is disabled on the FIOS router, everything is passed through to the pfSense box aside from the TV set-top boxes, and that's all there is for that network. For the second network we'll use 192.168.5.0/24. The pfSense LAN1 port is assigned, say, 192.168.5.1 and I simply leave its gateway empty. LAN1 plugs into the Netgear XS708E 10GbE switch. The pfSense box manages all the IPs for my home network and hands out 192.168.5.1 as the gateway.
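
To put rough numbers on it, the addressing boils down to something like the sketch below. pfSense is configured through its web UI rather than the shell, and igb0/igb1 are just placeholder interface names for the quad-port NIC, so treat this purely as an illustration.

# WAN side: plugged into the FIOS router's built-in switch
ifconfig igb0 inet 192.168.1.2 netmask 255.255.255.0
route add default 192.168.1.1        # the FIOS router remains the upstream gateway

# LAN1 side: plugged into the Netgear XS708E 10GbE switch
ifconfig igb1 inet 192.168.5.1 netmask 255.255.255.0
# pfSense's DHCP server then hands out 192.168.5.x addresses with 192.168.5.1 as the gateway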

Yes, it's mucked up! It does, however, work and takes all of 5 minutes to set up. I don't believe I did anything special aside from the basic setup for it to work.

When I get an hour or two I'll be removing the FIOS router and coax line from the network altogether. I'll run a new Cat6a line from the outside FIOS demarc into the server room and plug it directly into the pfSense box's WAN port. I really just hate calling FIOS support (or any "support" for that matter) to make the demarc configuration change. Because the network does actually work, I have other things I need to complete before doing so.


The pfSense build includes the following hardware, running roughly $700. I purchased everything through Amazon Prime when a deal hit. It is pretty much the same rack system pfSense sells in their store for twice that amount. It's almost dead silent, puts out no heat and is powerful enough to handle anything I'll ever need or wish to pass through it.

Mainboard: Supermicro A1SRI-2758F (Atom C2758)
Memory: 2 x 8GB Kingston KVR16LSE11/8 (only 1 is needed)
Chassis: Supermicro CSE-510T-200B
Drive: Intel SSD S3500 120GB
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
thanks @DataKeeper, I was wondering about the "port waste" on the switch :)
and yes, $700 is a bit overkill for most home users... ;)

any reason you went for the Intel SSD, and not a Samsung 850 EVO or something?
 

DataKeeper

Patron
Joined
Feb 19, 2015
Messages
223
I don't see it as "port waste" but as future "port expansion". ;) I will be using Port 3 later this year or early next, so it really just leaves me with a single unused port.

I just started using SSD drives this year and most of my knowledge is based on what I quickly read back at the beginning of the year. Mainly I selected the Intel S3500, while older technology, for its proven reliability and the fact that it has on-board power-loss protection capacitors, which the 850 Pro consumer drives do not. Yes, everything is plugged into a UPS, but that added protection is nice. Still, seeing how the 850s have gained in the market this past year, if I were to build another one now... I'd likely go with a Samsung. :rolleyes: I've since purchased 4 850 Pros (128GB, 2 x 256GB and a 1TB) for other systems and have had no issues with them.

If I think the read/write speeds of the Intel are too slow, it's a simple and quick swap, though I doubt I'll need to.
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
I don't see it as "port waste" but as future "port expansion". ;) I will be using Port 3 later this year or early next, so it really just leaves me with a single unused port.
I was referring to the 10GbE ports in your switch
 