Building a 20 disk FreeNas fileserver - Build log

Status
Not open for further replies.

Anset

Cadet
Joined
Nov 5, 2016
Messages
8
Hi,

I have been a satisfied user of FreeNAS for years now, and this forum has been a superb fount of information.

I recently completed a new fileserver and am now writing a small blog series about it.
Since others might find the information useful, I wanted to share it here:

Building a 20 disk FreeNas fileserver - Part 1

- If this is seen as unacceptable self-promotion, I do apologize. Feel free to remove this post.
- If it is preferred not to link to content but to copy it here, let me know and I'll see about doing that instead.
- Otherwise, I'll add an update when the next parts of the build log become available.

Here is the first part of the blog post:

Years ago, I built a fileserver to satisfy my file sharing, music playing, Kodi viewing and backup pleasure. It has definitely served me well over the years, but it really began to show its age. The biggest annoyance was the 10 Mbps network interface, which was adequate for most uses, but, as you might expect, copying large amounts of data just took ages.

So I embarked on the ever joyful upgrade experience track and today, I am happy to announce that “FileServer Mark-II” is fully up and running.

I started choosing and ordering parts in April 2016. Four months later, in August 2016, the system was partially completed and taken “into production”. At that time it used only one HBA with 10 new 4TB HDDs.

Yesterday, another three months later, I (finally) added the ten 3TB HDDs from the old fileserver. I just wanted to be absolutely certain that all the data had been copied successfully to the new fileserver before erasing the original file system.

I will detail the components I used and the reasons why I selected them. I will also include the best price I could find for each component (as of 5 November 2016).

In this first part, I go into detail on the chassis, disk cages and power supply.

The system details:


Case: Antec twelve hundred V3
HDD cages: 4x Icy Dock FatCage MB155SP-B
Power Supply: Seasonic Platinum 660W
Motherboard: Supermicro X10SLH-F
CPU: 4 Core Intel(R) Xeon(R) E3-1241 v3 @ 3.50GHz
Memory: 4x Kingston 8GB PC3-12800 DDR3-1600MHz ECC Unbuffered CL11 1.35V (P/N 9965525-139.A00LF)
HBA: 2x IBM ServeRAID M1015 (IT mode)
Boot device: Supermicro SuperDOM 32GB (SSD-DM032-PHI)
Storage: 10 x 3TB WD Red (WD30EFRX) + 10 x 4TB WD Red (WD40EFRX)
Software: FreeNAS 9.10


[Images: chassis-corner.png, chassis-open.png]




Of course, any feedback is appreciated and any questions I can answer, I will.

Anset
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
- If this is seen as unacceptable self-promotion, I do apologize. Feel free to remove this post
- If it is preferred to not link to content, but copy the content here, let me know and I see about doing that instead
Well, at least a more detailed abstract would be nice, but I don't really see a problem.

I do have to ask you to upload images directly to the forum, instead of linking them. We're trying to avoid dead image link plague, which mangles thousands of forums around the internet.
 

Anset

Cadet
Joined
Nov 5, 2016
Messages
8
Thanks for the feedback.
I actually linked the image since I did not want to abuse the forums storage. :)
I'll see if I can upload the image instead of linking it.

I will also add a bit more details to the post.

Done and done :D
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I actually linked the image since I did not want to abuse the forums storage. :)
Don't worry about that. I expect that kind of problem on forums for amateur NAS solutions, like unRAID. :p *ducks for cover*

I'll see if I can upload the image instead of linking it.
Just copy-paste the image. That should do the trick.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
By the way, what's your impression of that chassis? I get the feeling it's stuck in 2007 or so, but maybe they quietly changed things to be more in line with the current market (CPU access hole, cable openings, etc.).
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
How's the fan noise on those Icy Dock cages and what are your hard drive temps? I have my system installed in an Antec 1200 with the Antec cages and am curious if drive temps are higher with 5 of them stuffed in one of those cages.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
How's the fan noise on those Icy Dock cages and what are your hard drive temps?
I can speak for the three-in-two version:
Fan noise is bearable, though it dwarfs every other fan around me (backup server is on my desk at the moment, workstation underneath the desk), especially the Intel stock cooler and Noctua fans. Fans are easily replaceable, so you can get PWM fans and control them somehow.

Temperatures are good: high 20s during intensive activity, and the worst I've seen is 37 degrees Celsius. A surprising amount of heat can be felt through the chassis under load, which can only help.

They're not perfect, though:
  • Fan replacement implies significant disassembly, in most chassis. In my case, the RAM and CPU cooler get in the way.
  • Newer units come with a new style of carrier, which is a real tray (allowing for easy use of 2.5" devices), unlike the old style, which was just a U-shaped bracket. This makes for a tighter fit...
  • ...and I think the SATA connectors are slightly misaligned when the new style of carrier is used. This makes drive insertion a rather terrifying experience sometimes.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Btw, there are three SATA power connectors because a single SATA power connector is only rated for about two HDDs' worth of current.

Thus you need two connectors for 4 HDDs and three for 5.
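That rule can be sketched as back-of-the-envelope arithmetic (the 1.5 A-per-pin limit is the usual SATA power connector figure; the 2 A spin-up draw per 3.5" drive is my own assumed number, not from this thread):

```python
import math

# SATA power connector: 3 pins per voltage rail, each rated ~1.5 A,
# so roughly 4.5 A available on the 12 V rail per connector.
AMPS_PER_PIN = 1.5
PINS_PER_RAIL = 3
CONNECTOR_LIMIT = AMPS_PER_PIN * PINS_PER_RAIL  # 4.5 A

SPINUP_AMPS = 2.0  # assumed 12 V spin-up draw of a typical 3.5" HDD

def connectors_needed(drives: int) -> int:
    """Connectors needed so no single one exceeds its 12 V rating."""
    return math.ceil(drives * SPINUP_AMPS / CONNECTOR_LIMIT)

print(connectors_needed(4))  # 2
print(connectors_needed(5))  # 3
```

Which comes out to two connectors for four drives and three for five, matching the cage's wiring.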

I have one of the icy dock 5 in 3 cages, the trayless version with temp controller fan in my backup system.

It's not too loud on medium or low, seems to hang out on medium most of the time. It will kick up during scrubs etc.

Noisier than my primary system.
 

Anset

Cadet
Joined
Nov 5, 2016
Messages
8
By the way, what's your impression of that chassis? I get the feeling it's stuck in 2007 or so, but maybe they quietly changed things to be more in line with the current market (CPU access hole, cable openings, etc.).

I have always liked Antec cases; my old fileserver was in a 100% silent, passively cooled Antec P100, so consider my opinion biased. ;)

tl;dr: if you already have the chassis, it should certainly still serve you well, but if you are looking to buy, there are plenty of better cases to be found.


The 1200 is old, that is certain. It's actually pretty much impossible to buy today, so yes, it is definitely dated compared to the newer Antec cases. However, I have always considered Antec cases to be ahead of the game in many respects. Combine those two and I think the case still holds up. It looks OK (tastes differ, of course), is built very solidly, and the fans are reasonably quiet (though the plexi side window is an obviously bad acoustic choice). Cable management works pretty well; at least I think my fileserver is pretty clean on the inside. Dust filters are the only thing I would say is really missing, since the inside gets dusty really fast.

The main issue I have with the 1200 is that it is neither a gaming chassis nor a server chassis. Antec seems to have tried to take the successful P100 and "game it up" with a side window and fancy colored fans. This made the chassis a lot noisier (both audibly and visually), yet it lacks anything beyond the basic cable management that, for example, an SLI, watercooled PC would need. A removable mobo tray is also not available, but personally, I don't really value that.

The only reason I got this chassis for my build was the twelve 5.25" bays that open to the front. I have not looked recently, but I don't think there is anything on the market that offers that in this single-tower form factor. The only alternative I found was the Lian Li PC-D8000, but that one is much bigger and much more expensive, and even though it has plenty of drive slots, they are behind the front plate. (Why they did that, I'll never understand.) You can get the optional hot-plug drive cages etc., but money-wise it just becomes a nightmare. The PC-D8000 is, however, very modern, featuring all the bells and whistles you could want.

Edit: Forgot to mention that the chassis has indeed been revised a couple of times. The one I have is the V3. In my opinion, V2 solved some issues with the original chassis, but V3 looks to be mostly a cost-cutting effort from Antec.
 

Anset

Cadet
Joined
Nov 5, 2016
Messages
8
How's the fan noise on those Icy Dock cages and what are your hard drive temps? I have my system installed in an Antec 1200 with the Antec cages and am curious if drive temps are higher with 5 of them stuffed in one of those cages.

Noise-wise, they are definitely not silent, but my desktop with the double GFX cards is much noisier. You can definitely replace the fans with quieter ones; just make sure the quiet fans still push enough air: the 5 drives sit very close together, so there is not that much airflow possible.

Personally, I can live with the noise, but if a fan were to fail some time in the future, I'd put a quieter one in. :)

Cooling is quite OK, I think, considering the 5-disk sandwich:

Code:
ada4 35C  ada3 35C  ada2 30C
ada1 37C  ada0 37C  da15 36C
da14 34C  da13 36C  da12 35C
da11 36C  da10 36C   da9 36C
da8 36C   da7 37C   da6 36C
da5 37C   da4 36C   da3 36C
da2 37C   da1 37C   da0 36C


Sure, it's a bit higher than I would like, but again, considering the tight packing of the disks, they will always run hotter. The WD Red drives are rated to operate between 0 and 65 degrees Celsius, so anything below 40, I'll take. :) (But this is also why I value the WD warranty!)
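As a quick sanity-check sketch (temperatures copied from the table above, with device names as FreeNAS reports them; the 40-degree threshold is just my own comfort limit):

```python
# Reported drive temperatures (degrees C), copied from the output above
temps = {
    "ada4": 35, "ada3": 35, "ada2": 30, "ada1": 37, "ada0": 37,
    "da15": 36, "da14": 34, "da13": 36, "da12": 35, "da11": 36,
    "da10": 36, "da9": 36,  "da8": 36,  "da7": 37,  "da6": 36,
    "da5": 37,  "da4": 36,  "da3": 36,  "da2": 37,  "da1": 37,
    "da0": 36,
}

# Flag anything at or above my comfort threshold of 40 degrees C
hot = {dev: t for dev, t in temps.items() if t >= 40}

print(max(temps.values()))  # 37
print(hot)                  # {}
```

So every drive sits comfortably inside WD's 0-65 degree operating range, with plenty of margin below my 40-degree line.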

In reply to the points Ericloewe wrote: I mention the new trays in the blog post, and yes, they are definitely a step up from the old shoehorn design. Concerning the SATA misalignment, I cannot say I had any issues... The only bad connection I got was on a tray where I "thought" I had fastened the drive with four screws, but apparently only a single one was in there... :oops:

Edit: forgot to mention that my cages have the fan speed set to the middle "medium" setting.
 

Anset

Cadet
Joined
Nov 5, 2016
Messages
8
Btw, there are three SATA power connectors because a single SATA power connector is only rated for about two HDDs' worth of current.

Thus you need two connectors for 4 HDDs and three for 5.

OK, but the power lead coming from my power supply has three connectors on it, and I am using those three connectors from that single lead to power the three power ports on the Icy Dock. So I would expect the full load to go over that one lead anyway? I don't get it... :(
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
The issue is how thick the pins are on a SATA power connector.
 

Anset

Cadet
Joined
Nov 5, 2016
Messages
8
The issue is how thick the pins are on a SATA power connector.

Ah, thanks! I figured that since it's all low voltage anyway, it should not be an issue, but yup, that does make sense.
 

Anset

Cadet
Joined
Nov 5, 2016
Messages
8
The second part of the build log is now up.

In this part, I detail the reasons for choosing the internal parts I chose. For veteran members of this forum there will not be many surprises, but I am trying to give a full picture.
I am using two HBA cards, which is not something I found a lot of info on myself when I was building this system, so hopefully someone finds the information useful.

In the next (and probably last) installment, I'll go over the software setup, showing the (simple) ZFS setup and the other services I am running.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Regarding your discussion on e3 vs e5, I think you've missed the point there.

As far as I know, both CPUs support AES-NI, so there is no difference in encryption performance.

E5s are called for when you need more than 8 logical cores, more than 64GB of RAM capacity (unfortunately, your build is limited to 32GB; these days you'd go with Skylake/X11 to get a 64GB limit), or more PCIe lanes.

E5s come in single-, dual- and quad-processor variants. A single processor would normally be the recommendation. E7s get you 4- or 8-processor builds and are necessary when you need stupid amounts of logical cores, RAM or PCIe lanes in one system. Although an E5 processor actually has more PCIe lanes than an E7!
 

Anset

Cadet
Joined
Nov 5, 2016
Messages
8
Thanks for the clarification on the encryption point. I wasn't interested in encrypting the file system, so I did not research it. A poor move on my part, so I will definitely put an update in the blog.

Apart from that, my choice of the E3 was entirely based on cost: both purchase and running cost. E5s are more expensive and use more power. I admit I did not think about the RAM limitation; however, 64GB of RAM would just be too expensive for a hobby server like this. (It would not pass the W.A.F. :) )

I know that FreeNAS advises 1GB of RAM per TB of storage, and I am below that. However, in the old server I had 30TB of storage with only 8GB of RAM and never had an issue. Time will tell, I guess. :)

Now that I write that, is it 1GB of RAM per TB of unformatted disk? In that case I would need 70GB of RAM... :o
Or is it per TB in the pool? Then I would need 63GB...
Or is it per TB available in the file system? In that case, considering I am running raidz2 and ZFS taking its share, I'd only need about 47GB, and then I am at least in the neighborhood with my 32GB! :D
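The arithmetic behind those three readings, as a sketch (drive sizes from the build list above; the last figure ignores ZFS metadata overhead, which is why it lands a bit above my 47GB estimate):

```python
TB = 10**12   # "marketing" terabyte, as printed on the drive label
TIB = 2**40   # binary tebibyte, which is what the OS reports

drive_sizes_tb = [3] * 10 + [4] * 10   # 10x 3TB + 10x 4TB WD Red

raw_tb = sum(drive_sizes_tb)        # per TB of unformatted disk
pool_tib = raw_tb * TB / TIB        # the same raw capacity in TiB
# Two 10-disk raidz2 vdevs: each loses two disks' worth to parity
usable_tib = (8 * 3 + 8 * 4) * TB / TIB

print(raw_tb)            # 70
print(int(pool_tib))     # 63
print(int(usable_tib))   # 50
```

So 70, 63, or roughly 50 before ZFS takes its cut, depending on which reading of the guideline you pick.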


I would qualify the single-core recommendation: it would probably be sufficient if you only export the file system using one protocol, but with a system like this, I do not think I am alone in wanting to do "other" things... I definitely wanted more than one core because I am running a couple of jails, including a VirtualBox jail. I am sharing over both CIFS and NFS and using the box as an Apple Time Machine target. I am also running a Bareos backup service on it, and I want to start using it as a small AD as well. And... and... and... :) With all that, I do believe the four cores are beneficial.

But I do agree that my build is not a typical system, so I am not disagreeing with the single-core recommendation; I would just always put it in context.


But many thanks for the feedback! I will put an update in the blog post later today!
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
No, I was recommending a single processor, not a single core.

RAM guideline is intentionally vague.

E5 processors support up to 1.5TB of RAM. Perhaps more.

Skylake E3 = 64GB
Haswell E3 = 32GB

I think E3 was the right decision :)
 