BUILD Critique my proposed 32TB RAW storage build

Joined
Oct 29, 2015
Messages
3
Hi All,

I'm looking to upgrade my current FreeNAS server, which runs on an HP ProLiant MicroServer Gen8. I'm running 4x 2TB WD Reds in a RAIDZ1 configuration, which is fast becoming too little storage for my needs, and I'm conscious that I only have one drive of parity.

My plan for early in the new year is to build a new server with more storage and two drives of parity, then use the old server as a backup to provide extra data resilience.

My current parts list is as follows:
Motherboard
ASUS P9D-M - Micro-ATX board, Dual Intel NICs and Management NIC
Open to suggestions; I know the forum recommends Supermicro boards. Does anyone have experience with this board?

CPU
Intel Core i3 4170 3.7GHz
I have been looking at Xeon processors, but I think they would be overkill for my needs, considering I don't need to run any jails; I have a separate Proxmox server which can be used if necessary.

CPU Cooler
Cooler Master Hyper 212 EVO

RAM
Crucial 16GB kit (8GBx2) DDR3 PC3-12800 Unbuffered ECC 1.35V 1024Meg x 72
Should I go straight for 32GB, or start off with 16GB and upgrade at a later date, considering my storage capacity?

Case
Fractal Design Define R5
Can anyone suggest a better case with hot-swap support for 8 drives (or upgradable to hot-swap for 8 drives) that is not a rackmount case?

Storage
8x WD Red 4TB

HBA Card (already have one lying around)
LSI 9211-8i
Will flash to IT mode
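For reference, the usual IT-mode reflash from a DOS or UEFI shell goes roughly like this. It's only a sketch: the 2118it.bin firmware and mptsas2.rom BIOS filenames come from LSI's 9211-8i firmware package (use the matching P-release), and the SAS address placeholder must be replaced with the address on the card's sticker.

Code:
sas2flash -listall                     # note the controller's current SAS address
sas2flash -o -e 6                      # erase the existing IR firmware and BIOS
sas2flash -o -f 2118it.bin             # flash the IT-mode firmware
sas2flash -o -b mptsas2.rom            # optional boot BIOS; skip if you never boot from the HBA
sas2flash -o -sasadd 500605bXXXXXXXXX  # restore the SAS address noted earlier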

Boot Media (purchased)
2x SanDisk Cruzer Force 16GB
I plan on configuring these as a mirror at the install stage.
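Once installed, the mirror can be sanity-checked from the shell, assuming the default FreeNAS 9.3 boot pool name:

Code:
zpool status freenas-boot    # both sticks should appear under a single mirror vdev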

Power Supply
Corsair RM450
Open to alternatives.

Overall, I don't want to spend more than about £700 (approx. $1,100), excluding hard drives.

Open to all suggestions on alternative hardware, and to any other advice.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Motherboard
Normally we avoid "workstation" boards as they contain extra components like audio chips that do nothing but annoy FreeBSD. This one appears to lack them, so that's a point in its corner. Supermicro is still the standard, though, since it gets you out-of-band management (IPMI).

CPU
Perfectly fine since you're not planning to run jails.

Cooler
Stock is usually sufficient, but lower temps and less noise are always beneficial. Make sure there's enough ambient airflow for the VRMs and motherboard components, though, since this is a tower-style cooler.

RAM
Are you planning to serve NFS/iSCSI to your Proxmox box? If so, buy 32GB out of the gate; otherwise start with 16GB and upgrade if need be. There are also other considerations, such as an SLOG, if you plan to host VMs here.

Case
Looks fine IMO. If you're after >8 drives you're probably into either "expensive Lian-Li" or "just get a rackmount" territory though.

Storage
Going with the "more drives" angle, 8 drives is less than optimal for RAIDZ2 as you lose some space to 4K alignment overhead. Is 10 drives possible for 8 data + 2 parity?
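
To show where that loss comes from, here's a back-of-the-envelope sketch of the allocation math, assuming 4K sectors (ashift=12), 128KiB records, and RAIDZ's padding of each allocation to a multiple of parity+1. Treat the numbers as illustrative:

Code:
recs=32                                  # 128KiB record / 4KiB sectors
for data in 6 8; do                      # 8-wide vs 10-wide RAIDZ2 (data disks only)
    rows=$(( (recs + data - 1) / data )) # rows needed to hold the data sectors
    total=$(( recs + rows * 2 ))         # plus 2 parity sectors per row
    padded=$(( (total + 2) / 3 * 3 ))    # pad to a multiple of (parity + 1) = 3
    echo "$((data + 2))-wide RAIDZ2: $padded sectors written per $recs sectors of data"
done

That works out to 45 sectors per 32 at 8-wide (~71% space efficiency) versus 42 per 32 at 10-wide (~76%).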

HBA
The current gold standard. Love it.

Boot Media
SanDisk is the favorite USB stick at the moment. The only upgrade beyond that would be SATA DOMs or small SSDs.

Power Supply
Starting to be on the low end considering potential spin-up current. Seasonic is the favorite brand currently but Corsair should also be OK. I'd upsize to ~550W personally, definitely if you go with a 10-drive setup and maybe even bigger.
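
As a rough sanity check on the spin-up side (the ~1.8A @ 12V figure is a typical WD Red spec-sheet number; verify against your exact model):

Code:
# ten drives spinning up simultaneously, before any CPU or board load
awk 'BEGIN { printf "%.0f W peak on the 12V rail at spin-up\n", 10 * 1.8 * 12 }'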

Cheers and welcome!
 
Joined
Oct 29, 2015
Messages
3
@HoneyBadger, thanks for your comments; they are much appreciated.

Motherboard
I'm going to have a look at the various recommended Supermicro motherboards.

RAM
NFS/iSCSI hadn't really crossed my mind, as they're not something I use currently, but they're something to consider for the future, I suppose.

Case
I had a look at Lian-Li, but they seem expensive for what they are. Hot-swap would be nice, but considering this is a home build, taking the server offline to replace a drive is not a big deal, I suppose.

Storage
If 8 drives is less than optimal, I could potentially look at 10 disks, or maybe sacrifice some space and go for 4 data + 2 parity?

Power Supply
I will look at some higher-wattage power supplies from Seasonic.

Thanks for your help.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
It's not optimal, but it's not bad either; you'll just have a bigger ZFS overhead, that's all.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
It's not optimal, but it's not bad either; you'll just have a bigger ZFS overhead, that's all.

Yes, this. It'll just cost you a few more lost blocks.

Serious VM workloads generally require a whole different design than bulk storage, but if you're just planning to have a few "toy VMs" in Proxmox for non-critical services or learning, you can easily carve datasets out of your RAIDZ2 pool and use an SLOG device to offset the poor write performance vs. mirror vdevs.

Or just run async if you really aren't concerned with data stability on those VMs.
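
In shell terms it comes down to something like this; pool, dataset, and device names are placeholders:

Code:
zpool add tank log gpt/slog0      # dedicate a fast, power-loss-protected SSD as the SLOG
zfs set sync=disabled tank/vms    # ...or skip the SLOG and run the VM dataset async
zfs get sync tank/vms             # confirm the property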
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yes, this. It'll just cost you a few more lost blocks.

Serious VM workloads generally require a whole different design than bulk storage, but if you're just planning to have a few "toy VMs" in Proxmox for non-critical services or learning, you can easily carve datasets out of your RAIDZ2 pool and use an SLOG device to offset the poor write performance vs. mirror vdevs.

Or just run async if you really aren't concerned with data stability on those VMs.


But that does nothing to fix the read performance, which is still a problem with RAIDZ2. :P
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
But that does nothing to fix the read performance, which is still a problem with RAIDZ2. :p

Random read I/O on parity RAID is still garbage vs mirror vdevs, but the random write is more likely to be an issue. ARC can help offset it, and if the OP is just doing testing/learning I'm sure it will suffice for those needs.

Or he can just add a big 512GB L2ARC, that will solve it, right?* /sarcasm

* But really, don't do this, it's bad for various reasons.
 

PlowHouse

Dabbler
Joined
Dec 10, 2012
Messages
23
Just to comment on the case: you could try finding a cheap case with eight 5.25" bays and then purchase eight 5.25" hot-swap enclosures separately. Each SATA/SAS enclosure will probably run you $15-$30 depending on the kind you spec out, so even if you were given this type of case for free, the cost of the bays starts to get expensive...

You could also look at building your own hot-swap backplane like this guy did:
https://www.youtube.com/watch?v=KICINe72tiM
 
Joined
Oct 29, 2015
Messages
3
As @solarisguy mentioned above, I forgot to mention what the main workload of the server will be.

In answer to that question:

Mainly storage of music, video and various files.

I mentioned Proxmox in my original post; that server has its own RAID array, and therefore the FreeNAS server won't really have anything to do with Proxmox other than maybe storing some ISOs.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Random read I/O on parity RAID is still garbage vs mirror vdevs, but the random write is more likely to be an issue. ARC can help offset it, and if the OP is just doing testing/learning I'm sure it will suffice for those needs.

Don't be so fast to expect it to perform. I tried it with a RAIDZ2; I couldn't run 2 VMs simultaneously without problems... and those were only handling background tasks.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
As @solarisguy mentioned above, I forgot to mention what the main workload of the server will be.

In answer to that question:

Mainly storage of music, video and various files.

I mentioned Proxmox in my original post; that server has its own RAID array, and therefore the FreeNAS server won't really have anything to do with Proxmox other than maybe storing some ISOs.

Beautiful; there's no need to concern yourself with mirror vdevs or random I/O performance to any great extent then, and 16GB of RAM should be good to start with.
 

jamiejunk

Contributor
Joined
Jan 13, 2013
Messages
134
Looks good for what you're doing. If you are hosting VMs, or need any kind of performance, I've learned: as many spindles as you can, mirrors, and more RAM than you can afford.


 

AgileLogic

Dabbler
Joined
Oct 20, 2015
Messages
20
Storage
Going with the "more drives" angle, 8 drives is less than optimal for RAIDZ2 as you lose some space to 4K alignment overhead. Is 10 drives possible for 8 data + 2 parity?

Question for those who know way more than I...

My understanding is that if I'm using compression in ZFS, block-size alignment becomes a non-issue, and thus sizing a RAIDZx volume based on it isn't necessary. A 4K block, once compressed, can turn out to be any size less than 4K, and ZFS ends up writing the blocks unaligned anyway.

Yes, I know in some contexts, compression isn't the best idea, so sizing on alignment (2^n+p) seems important.

True? False? Something in between?

Thanks.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
It's a non-issue for performance but you still lose some space.

LZ4 compression is a good idea in 99% of cases because it bails out early on already-compressed data, so there's very low CPU overhead to using it on a mix of compressed/non-compressed data.
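
Concretely, turning it on and checking what you actually get is just this (pool name is a placeholder; the setting only affects data written after the change):

Code:
zfs set compression=lz4 tank
zfs get compression,compressratio tank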
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
@AgileLogic

What you said is 100% correct. It really comes down to "how do I maximize performance?"

Take these two scenarios:

1TB of data that compresses with lz4 at a 5:1 ratio, versus the same data without lz4; in both cases the zpool doesn't follow the 2^n+p formula.

Reading data off the zpool with lz4 gives clear performance gains as you can read 80% less data from the zpool, and it decompresses to 5x what you read. So if you are running 40Gb LAN and your workload is strictly throughput based, lz4 is the very clear winner. So to saturate 40Gb LAN with lz4 at 5:1 ratio, your zpool needs to read at about 8Gb/sec (minimum).

If you have the same data without compression and you want to saturate 40Gb LAN, now your zpool must do a full 40Gb/sec (minimum).

Now you are, in theory, *always* paying the penalty for not being 'aligned', but the gains of lz4 are orders of magnitude better than the losses from a misaligned zpool.

Clearly you're waaaay better off with the lz4 zpool because you wouldn't want to see the pricetag for a 40Gb/sec throughput system compared to 8Gb/sec.

Now take these two scenarios:

1TB of data that compresses with lz4 at only 1.01:1, versus the same data without lz4; again, the zpool doesn't follow the 2^n+p formula.

Now you're gaining very little with lz4 (it's basically a 1:1 ratio), but because you didn't follow the formula you're going to lose some of your zpool's optimization. So for what is basically no gain, you've also hurt performance because you aren't aligned. In this case it's a loss for lz4 and a loss for alignment, so your system is slower.

So now you're probably asking "where's the happy medium then"?

There is no way to quantify it as it depends on things such as "how much performance loss do I have with a misaligned zpool?" and "how much performance gain am I getting with lz4?" as well as things like your block sizes, etc.

Most of us don't get amazing lz4 compression, and we also don't see a terrible performance penalty with a misaligned zpool, because our bottleneck is probably 1Gb LAN. So even a modest home system can very likely saturate 1Gb with compression off and a misaligned zpool. The questions "to lz4 or not to lz4" and "to align or not to align" are pretty inconsequential for your typical home user and don't add much value (except potential disk-space savings from lz4).

If you really wanted to have a hardcore answer as to how to maximize the performance, you'd have to test all sorts of block sizes, all the different compression algorithms, test various alignment options and figure out what provides the best for your workload...

Or you can do what the rest of us do: choose however many disks you want, use lz4, and be happy that your zpool can saturate 1Gb regardless.
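
If you did want to run that kind of test, a minimal sketch might look like the following. Dataset names and the sample file are placeholders, dd's reported throughput is a crude yardstick, and ARC caching will flatter repeated reads:

Code:
for comp in off lz4 gzip; do
    zfs create -o compression=$comp tank/bench_$comp
    dd if=/path/to/samplefile of=/mnt/tank/bench_$comp/out bs=1M
    zfs destroy tank/bench_$comp
done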
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
It all depends.

I was internally replicating data from pool RAIDZ2_6x4TB into pool RAIDZ2_8x6TB. Copying 11.9TB of data took 9 hours, 25 minutes, and 24 seconds, for an average speed of 369MB/s. There was no compression in either pool.

Code:
[root@freenas /]# date;zfs send RAIDZ2_6x4TB@copy | zfs receive -F -v RAIDZ2_8x6TB;date
Thu Nov  5 02:09:13 2015
receiving full stream of RAIDZ2_6x4TB@copy into RAIDZ2_8x6TB@copy
received 11.9TB stream in 33924 seconds (369MB/s)
Thu Nov  5 11:34:37 2015
[root@freenas /]#


Just for this post, I performed the following 4 tests (on video files from a dashboard camera). All of the tests were executed locally in the shell on an otherwise quiet system.
On RAIDZ2_6x4TB
dd if=21GBfile | cat > /dev/null ; 216MB/s
dd if=25GBfile | cat > /dev/null ; 214MB/s
On RAIDZ2_8x6TB
dd if=21GBfile | cat > /dev/null ; 214MB/s
dd if=25GBfile | cat > /dev/null ; 214MB/s

FreeNAS-9.3-STABLE-201511040813
Intel(R) Xeon(R) CPU E3-1220 v3 @ 3.10GHz
16GB RAM

P.S.
One might question why not uniformly 214MB/s... I think I have some answers, but that would be a different forthcoming post.
 