BUILD First FreeNAS Build - Critiques and Suggestions Welcome

Status
Not open for further replies.
Joined
Feb 5, 2014
Messages
6
This server will be hosting all of my family's data and media: photos, home movies, financial documents, music, recorded TV shows, and ripped DVDs/BluRays. I'm very close to pulling the trigger on this build, but before I do I would love to have y'all pick it apart.

Mobo: Supermicro X10SLL-F-O
CPU: Xeon E3-1220v3
RAM: 2x Samsung M391B1G73BH0-CK0 DDR3-1600 8GB ECC - 16GB total
HDD: 4x WD Red 3TB
Case: Fractal Design Arc Midi Tower ATX Mid Tower
PSU: SeaSonic SSR-360GP 360W 80+ Gold Certified ATX Power Supply
UPS: CyberPower CP1000PFCLCD 1000VA/600W Pure Sine Wave UPS

After much thought, I decided to go with 6TB of capacity (4x WD Red 3TB in RAIDZ2) for this NAS. At first I was planning 10+TB, but after considering the rate at which we generate data (<1TB/year), it seemed more reasonable to go this route now. My thinking is that I could always add another 4 drives later if we ever need the capacity, and we likely won't for another 6 years.

My biggest hangups right now are with the CPU and UPS. I'm inclined to go with the Xeon "just to be sure," but I can't help but think an i3-4130 would work just as well for me - and save quite a bit of money to boot. I'm planning on using Plex and/or Subsonic to serve up my media, so transcoding performance will be important. I don't anticipate a need to transcode more than 3 HD video streams at once. Can the i3 handle this?

For the UPS, I need it to be able to tell the NAS to shutdown cleanly (This model is listed on the Network UPS Tools compatibility list), and have just enough power to perform that. I don't really need any extended uptime beyond that. Based on what I've seen on these forums, I believe I need a pure sine wave UPS, since my PSU is Active PFC. Will this model work for my needs?
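From what I've read, the FreeNAS UPS service is just NUT under the hood, so my plan once the hardware arrives is to sanity-check the shutdown plumbing from the shell. A rough sketch of what I have in mind, assuming I name the UPS "ups" in the GUI and it uses the usbhid-ups driver (the name is just whatever I pick, nothing special):

# confirm the driver is actually talking to the CyberPower
upsc ups@localhost

# the two values that matter for a clean shutdown
upsc ups@localhost ups.status      # OL = on line power, OB = on battery
upsc ups@localhost battery.charge

# force a shutdown event to test the whole sequence
upsmon -c fsd

That last command really does power the box down, so I'd only run it once everything else checks out.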

Thanks for taking the time to look at this. I would appreciate any feedback! :)
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
One thing to keep in mind is that you don't want to fill your server past 80%. That leaves you with 4.92TB, and once you take a little away for the 1000-to-1024 conversion you're looking at about 4.47TiB usable. Also, an optimal config for RAIDZ2 is 6 disks.
 

Durandal

Explorer
Joined
Nov 18, 2013
Messages
54
If you choose the X10SLM-F-O instead of the X10SLL-F-O, you get four SATA 6Gbit/s ports instead of two.
 
Joined
Feb 5, 2014
Messages
6
One thing to keep in mind is that you don't want to fill your server past 80%. That leaves you with 4.92TB, and once you take a little away for the 1000-to-1024 conversion you're looking at about 4.47TiB usable. Also, an optimal config for RAIDZ2 is 6 disks.

That's something I hadn't considered, thanks. So 6x 3TB drives would yield ~9.375TB, which is still more than I think we really need, but is also not an outrageous amount of storage.

Correct me if I'm wrong, but isn't 4 disks in raidz2 also a "magic" number?

If you choose the X10SLM-F-O instead of the X10SLL-F-O, you get four SATA 6Gbit/s ports instead of two.
Thanks. I looked at that board, but decided it's unlikely I'd ever saturate SATAII anyway. How do you like the i3 in your NAS?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
i3s can do 1 (maybe 2) transcoded videos. If you think you'll be doing 2 or more regularly, you should get the Xeon. Better to be safe than sorry.
 

Durandal

Explorer
Joined
Nov 18, 2013
Messages
54
That's something I hadn't considered, thanks. So 6x 3TB drives would yield ~9.375TB, which is still more than I think we really need, but is also not an outrageous amount of storage.

Correct me if I'm wrong, but isn't 4 disks in raidz2 also a "magic" number?


Thanks. I looked at that board, but decided it's unlikely I'd ever saturate SATAII anyway. How do you like the i3 in your NAS?

It works very well so far. I'm testing how many clients can stream at the same time right now, and so far there's no problem at all with 2 direct streams (no transcoding) on the LAN and 1 transcode from outside over the WAN. I can post more data when I've tried some more.
 

baummer

Dabbler
Joined
Feb 15, 2014
Messages
13
One thing to keep in mind is that you don't want to fill your server past 80%. That leaves you with 4.92TB, and once you take a little away for the 1000-to-1024 conversion you're looking at about 4.47TiB usable. Also, an optimal config for RAIDZ2 is 6 disks.

Why is this?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Past 80%, ZFS changes its allocation strategy from optimizing for performance to optimizing for space. That means fragmentation will skyrocket, and since no defrag tool for ZFS is ever expected, you'll end up with a pool whose data is forever fragmented to hell.
 

baummer

Dabbler
Joined
Feb 15, 2014
Messages
13
Past 80%, ZFS changes its allocation strategy from optimizing for performance to optimizing for space. That means fragmentation will skyrocket, and since no defrag tool for ZFS is ever expected, you'll end up with a pool whose data is forever fragmented to hell.

I wasn't aware of this. Are there tools to limit the amount of space used so as not to go over that 80%?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Not really. You could create a dataset and force it to never exceed 80%, but then you create new problems.

Best action is to properly admin your server.
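
If you really want a guard rail, the closest thing is a quota sized to roughly 80% of your usable space on the dataset that holds your data. A rough sketch, assuming a pool named tank with your data in tank/data and about 5.4TiB usable (your names and numbers will differ):

# hard-cap the dataset at roughly 80% of usable space
zfs set quota=4.3T tank/data

# verify
zfs get quota,used,available tank/data

Like I said though, that just trades one problem for another. Keeping an eye on your pool yourself is still the right answer.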
 

TXAG26

Patron
Joined
Sep 20, 2013
Messages
310
Go with the X10SL7-F (~$240 online). For about $50 more than the regular X10 boards, you get an on-board LSI 2308 controller with 8 SAS/SATA 6Gb/s ports, plus the standard 6 SATA ports (2x 6Gb/s, 4x 3Gb/s) that hang off the chipset. It also has two Intel i210 gigabit ports and a dedicated IPMI port. For comparison, a PCIe LSI 2308 add-in card alone runs about $200-$250. Any of the Xeon E3-12x0 v3 processors will work great. I went with an E3-1240 v3 since I wanted HT, plus 32GB of 1.35V ECC DDR3 UDIMMs.

RE: RAIDZ2 disk numbers, check out this RAID calculator. Going from 4 to 6 disks drastically increases the percentage of usable storage space: a 4-disk RAIDZ2 gives up half its raw capacity to parity, while a 6-disk array only gives up a third (and I think it also helps that the data blocks stripe across the disks in nice round numbers).

http://www.servethehome.com/raid-calculator/
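
If you want to see the math behind that without the calculator, the parity fraction is the big piece (this ignores ZFS metadata and padding overhead, so real-world numbers land a bit lower):

# RAIDZ2 always spends 2 disks' worth of space on parity, so the usable fraction is (N-2)/N
for n in 4 6 10; do echo "$n disks: $(( (n - 2) * 100 / n ))% of raw capacity usable"; done
# 4 disks: 50% of raw capacity usable
# 6 disks: 66% of raw capacity usable
# 10 disks: 80% of raw capacity usable

So 4x 3TB gets you ~6TB before the 80% rule, while 6x 3TB gets you ~12TB for only two more drives.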
 

RyanG

Cadet
Joined
Feb 25, 2014
Messages
5
One thing to keep in mind is that you don't want to fill your server past 80%. That leaves you with 4.92TB, and once you take a little away for the 1000-to-1024 conversion you're looking at about 4.47TiB usable. Also, an optimal config for RAIDZ2 is 6 disks.


Hmm, why couldn't you do something like this? http://everycity.co.uk/alasdair/2010/07/zfs-runs-really-slowly-when-free-disk-usage-goes-above-80/

More explanation: "The performance degradation occurs when your zpool is either very full or very fragmented. The reason for this is the mechanism of free block discovery employed with ZFS. As opposed to other file systems like NTFS or ext3, there is no block bitmap showing which blocks are occupied and which are free. Instead, ZFS divides your zpool into (usually 200) larger areas called "metaslabs" and stores AVL trees of free block information (a space map) in each metaslab. The balanced AVL tree allows for an efficient search for a block fitting the size of the request.
While this mechanism has been chosen for reasons of scale, unfortunately it also turned out to be a major pain when a high level of fragmentation and/or space utilization occurs. As soon as all metaslabs carry a significant amount of data, you get a large number of small areas of free blocks as opposed to a small number of large areas when the pool is empty. If ZFS then needs to allocate 2 MB of space, it starts reading and evaluating all metaslabs' space maps to either find a suitable block or a way to break up the 2 MB into smaller blocks. This of course takes some time. What is worse is the fact that it will cost a whole lot of I/O operations as ZFS would indeed read all space maps off the physical disks. For any of your writes.
The drop in performance might be significant. If you fancy pretty pictures, take a look at the blog post over at Delphix which has some numbers taken off an (oversimplified but yet valid) zfs pool. I am shamelessly stealing one of the graphs - look at the green, red and yellow lines in this graph which are representing pools at 93%, 75% and 50% capacity drawn against write throughput in KB/s while becoming fragmented over time:
A quick & dirty fix to this has traditionally been the metaslab debugging mode (just issue echo metaslab_debug/W1 | mdb -kw at run-time for instantly changing the setting). In this case, all space maps would be kept in the OS RAM, removing the requirement for excessive and expensive I/O on each write operation. Ultimately, this also means you need more memory, especially for large pools, so it is kind of a RAM for storage horse-trade. Your 10 TB pool probably will cost you 2-4 GB of memory, but you will be able to drive it to 95% of utilization without much hassle. (http://serverfault.com/questions/51...to-keep-free-space-in-a-pool-or-a-file-system)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Someone is a pimp daddy and did some Googling! Good work!

However, I could be mistaken, but I believe that metaslab debugging mode doesn't work quite the same on FreeBSD as it does on Solaris (which appears to be what that thread was discussing). Unfortunately, there's no decoder ring to figure out how crap is the same and/or different between Solaris and FreeBSD.
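
If anyone wants to poke at it anyway, the place to look on the FreeBSD side would be sysctls rather than mdb. Treat the below as an experiment, not a fix - I'm not promising the tunable even exists on your FreeNAS build, or that it behaves like the Solaris knob:

# see which metaslab tunables your build actually exposes
sysctl -a | grep metaslab

# IF a debug/load knob shows up (e.g. vfs.zfs.metaslab.debug_load), setting it
# would keep space maps in RAM at the cost of extra memory per pool
sysctl vfs.zfs.metaslab.debug_load=1

Even if that works, it only hides the cost of reading space maps off disk; it doesn't stop a nearly full pool from fragmenting in the first place.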
 

RyanG

Cadet
Joined
Feb 25, 2014
Messages
5
Lol, relax there dood. I was asking WHY, as in, I didn't know and was wondering if someone more knowledgeable could step in and explain whether it's possible.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Lol, relax there dood. I was asking WHY, as in, I didn't know and was wondering if someone more knowledgeable could step in and explain whether it's possible.

Relax? I thought that was an amazing post! Especially for a post count of 3!

If what I think I know is right (it's 4am here), storing metaslab data in RAM would be almost impossible without large amounts of RAM (think >32GB) for a decently sized pool. There are limits on how much RAM can be used by the various portions of the ARC (for example, file data can use up to 70%, metadata 35%, etc.). Those values are tunables and may not be exact, as I haven't checked them for recent versions of FreeNAS. I'm not sure what category the metaslabs fall into, or whether debug mode overrides those limits (assuming debug mode even applies to FreeBSD). There are lots of limits set to prevent starving other aspects of the ARC. If you are a BA you can override them, but it's basically at your own risk.

Anyway, I'm not sure what this is supposed to gain, since the whole point is to minimize fragmentation and the performance penalty that comes with it. Even if you store the metaslabs in RAM, a nearly full pool will still end up fragmented, and since there is no defrag tool for ZFS, that fragmentation is permanent.

The bottom line: keep your pool <80% full. There's no getting around it, only getting through it with sufficient free space.
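
If you want the server to nag you before you get there, a dumb daily cron job works. A rough sketch, assuming a pool named tank and that email for root is already set up (adjust the names to your system):

#!/bin/sh
# warn when the pool crosses 80% full
CAP=$(zpool list -H -o capacity tank | tr -d '%')
if [ "$CAP" -ge 80 ]; then
    echo "Pool tank is at ${CAP}% full" | mail -s "FreeNAS: pool nearly full" root
fi

That way you hear about it while you still have room to react.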
 
Joined
Feb 5, 2014
Messages
6
Go with the X10SL7-F (~$240 online). For about $50 more than the regular X10 boards, you get an on-board LSI 2308 controller with 8 SAS/SATA 6Gb/s ports, plus the standard 6 SATA ports (2x 6Gb/s, 4x 3Gb/s) that hang off the chipset. It also has two Intel i210 gigabit ports and a dedicated IPMI port. For comparison, a PCIe LSI 2308 add-in card alone runs about $200-$250. Any of the Xeon E3-12x0 v3 processors will work great. I went with an E3-1240 v3 since I wanted HT, plus 32GB of 1.35V ECC DDR3 UDIMMs.
That's pretty slick. Too bad I already bought the SLL-F-O, though! Thankfully, it'll be a long time before I need to push my storage capabilities beyond what I'm building.

RE: RAIDZ2 disk numbers, check out this RAID calculator. Going from 4 to 6 disks drastically increases the percentage of usable storage space: a 4-disk RAIDZ2 gives up half its raw capacity to parity, while a 6-disk array only gives up a third (and I think it also helps that the data blocks stripe across the disks in nice round numbers).

http://www.servethehome.com/raid-calculator/

This is pretty nifty, too. Thanks!
 