60TB+ New Build Need Hardware check

Status
Not open for further replies.

iHeartMacs

Explorer
Joined
Oct 5, 2017
Messages
56
Hi, I've been buying FreeNAS boxes like they're candy and finally thought I'd build one of my own. I saw the general hardware recommendations list and put together this build. I realize the list might be outdated and was hoping for some advice.

The purpose of this build is to provide dedicated storage for Time Machine backups and Windows PC backups. I know how to create the shares and use FreeNAS, so no need to comment on that unless it's hardware related and could impact the ability to back up.

Money is not a concern, but I just want this to work solidly. Not fast or overkill. So if I'm going too big or too small, let me know. :)

Case: Supermicro 4U Rackmount Server Chassis - Black CSE-846BE16-R1K28B

Power: Included with the case (redundant power supplies)

Boot Drives: SanDisk Ultra Fit CZ43 x 2 for mirrored boot

MOBO:
Supermicro Motherboard Micro ATX DDR4 LGA 1151 X11SSL-CF-O

HDD: *14 of them
WD Red 6TB NAS Hard Disk Drive - 5400 RPM Class SATA 6 Gb/s 64MB Cache 3.5 Inch - WD60EFRX

SSD: *Read Cache and Write Cache
Any 120GB Intel, SanDisk, or Samsung SSD?

CPU: (does this work? Do I really need a Xeon?)
Intel BX80662G4400 Pentium Processor G4400 3.3 GHz FCLGA1151

UPS: APC Smart-UPS 1500VA UPS Battery Backup with Pure Sine Wave Output Rack-Mount/Tower (SMC1500-2U)

Anything I'm missing, please let me know. I like the size of the 4U, and I hear they are quieter and run cooler. The server room will have AC.
 

iHeartMacs

Explorer
Joined
Oct 5, 2017
Messages
56
Oh, and this.
RAM: 4 of these:
Supermicro Certified 16GB DDR4-2133 LP ECC MEM-DR416L-SL01-ER21
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
SSD: *Read Cache and Write Cache
Any 120GB Intel, SanDisk, or Samsung SSD?
I don't know the intricacies of how Time Machine works, but I don't think having a cache drive will be of any use to you, and it might even make overall performance worse.
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
CPU: (does this work? Do I really need a Xeon?)
Intel BX80662G4400 Pentium Processor G4400 3.3 GHz FCLGA1151
If you are only going to use this for backups, then the Pentium will be plenty. But if you do intend to change your requirements in the future, you can always upgrade the processor at that time (same socket, of course).
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,466
HDD: *14 of them
As @Chris Moore says, it'd probably be better to go in increments of either 6 or 8, so either 12 or 16 disks. Unless you were planning on a 12-disk pool, and planning to buy two spares at the outset as well (which would be great planning, but not something I see here very often).
 

iHeartMacs

Explorer
Joined
Oct 5, 2017
Messages
56
No. You should read up on SLOG and L2ARC before even thinking of adding either.


You might want to consider a single SSD instead.


Is that on the QVL?
1. Oh wow, OK, never heard of those. I just thought the SSDs were needed for speed. I should mention that there will be about 10 computers backing up to this machine. I know I need to limit the space Time Machine can use on the drives for the Macs (sketched below), but I can't really adjust the backup schedule, as Apple makes this difficult to do even with third-party software. So, that being said, there might be 6 Macs trying to back up at the same time. So no read/write SSDs? (said without looking up SLOG and L2ARC)

2. I thought a single SSD would be great, but I wanted to have a mirrored boot drive. So 2 single SSDs then? Also, how does the case connect to the MOBO and support all 14 drives, through just one SATA cable?

3. Jeez, I had no idea about the QVL! Thanks for helping me dodge a bullet on that one.
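
On point 1, a minimal sketch of what limiting Time Machine space could look like using per-Mac ZFS datasets with quotas (the pool and dataset names here are made up; in FreeNAS this would normally be done from the Storage screen in the GUI, with one share pointed at each dataset):

Code:
# Hypothetical pool "tank": one dataset per Mac, each capped with a quota so
# Time Machine stops growing at the limit instead of filling the whole pool.
zfs create tank/timemachine
zfs create -o quota=1T tank/timemachine/mac-office
zfs create -o quota=1T tank/timemachine/mac-laptop
# Confirm the limit took effect:
zfs get quota tank/timemachine/mac-office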

*UPDATE: RAM-
Supermicro Certified MEM-DR416L-SL01-EU21 Samsung 16GB DDR4-2133 ECC Un-Buffer LP Server Memory x 4 sticks
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Skip the SLOG and L2ARC; they are not needed. Once you can explain how they work, you will know whether you need one.

Your motherboard has an LSI 3008 controller, so you can flash that to IT mode and use a reverse breakout cable to connect it to your backplane. You can also use an HBA of some kind if you want a separate device.
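
As a rough sketch (assuming the sas3flash utility is available on the box; exact output will vary), you can confirm from the shell that the onboard SAS3008 is picked up by the mpr driver and check which firmware it is running:

Code:
# Does FreeBSD's mpr driver see the controller, and what firmware does it report?
dmesg | grep -i mpr
# List the adapter and its firmware/BIOS versions (should show the IT image once flashed):
sas3flash -listall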
 

iHeartMacs

Explorer
Joined
Oct 5, 2017
Messages
56
As @Chris Moore says, it'd probably be better to go in increments of either 6 or 8, so either 12 or 16 disks. Unless you were planning on a 12-disk pool, and planning to buy two spares at the outset as well (which would be great planning, but not something I see here very often).

Hi, selecting 14 drives was decided by how many the case can fit and, mostly, the amount of storage I wanted. I figured I would have 12 drives for storage and 2 for parity. I've been reading about not using RAIDZ with ZFS because of the performance hit if a drive fails when data is critical (I should go enterprise, LOL). But for this build, if data is critical, should I be using mirrored vdevs, as this article explains? I'm a noob about ZFS, so thanks for the help.

http://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/

If this article makes sense, how would I use all 14 disks that the case is capable of holding while using mirrored vdevs? Would I not use a big pool, and instead keep the pools to just two disks each, so that would be 7 pools? Total noob here.
 

iHeartMacs

Explorer
Joined
Oct 5, 2017
Messages
56
Skip the SLOG and L2ARC; they are not needed. Once you can explain how they work, you will know whether you need one.

Your motherboard has an LSI 3008 controller, so you can flash that to IT mode and use a reverse breakout cable to connect it to your backplane. You can also use an HBA of some kind if you want a separate device.

1. A SLOG (or ZIL), the only one of the two worth considering in a mainly-backup storage solution, is a write cache device to help performance in asynchronous write scenarios. Question: does having mirrored vdevs meet that scenario, or is that a totally different thing?

2. OK, so a reverse breakout cable makes one SATA turn into 4. I would need 3 of these for a 12-drive array and could just use the MOBO SATA connections for the other 2 drives if I want to use 14 drives.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
No, a reverse breakout cable turns 4 SATA ports into a single SAS connection.

And again, you don't need a SLOG. You will not be doing any synchronous writes with your system. You said asynchronous, which is wrong. If a write is asynchronous you don't need a log device, because you can write and just forget about it. Synchronous writes mean you have to wait for the data to actually be written.
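
A quick way to see this for yourself (the dataset name is just an example): the sync property controls how a dataset treats synchronous write requests, and with the default setting only explicitly synchronous requests ever wait on the ZIL.

Code:
# Check the sync setting on a hypothetical backup dataset:
zfs get sync tank/backups
# sync=standard (the default): async writes are gathered in RAM and flushed in
# transaction groups; only explicit sync requests wait on the ZIL (and a SLOG, if present).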
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
If this article makes sense, how would I use all 14 disks that the case is capable of holding while using mirrored vdevs? Would I not use a big pool, and instead keep the pools to just two disks each, so that would be 7 pools? Total noob here.
No, you don't want 7 pools. You want 1 pool with multiple vdevs.

Mirrored vdevs are great because you have 2 drives in each vdev. Having multiple vdevs will increase your IOPS, but for just the backup solution you need, I doubt that is useful to you. Another advantage is that you only have to buy 2 drives when you want to add a new vdev.
A disadvantage is that you only get 50% of the space, since 1 drive in each vdev is lost to redundancy (essentially parity).

However, if you are going to buy all 14 disks your case can take at the same time, RAIDZ2 will be a better option. You can create 2 vdevs of 6 drives each in a RAIDZ2 configuration, and the remaining 2 drive bays could hold your mirrored SSD boot pool. This will give you more usable storage space compared to mirrors, since you will lose only 2 drives to parity in each vdev.
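
In FreeNAS you would build this through the volume manager in the GUI, but as a command-line sketch of the two layouts being compared (pool name and device names are made up):

Code:
# One pool made of two 6-disk RAIDZ2 vdevs:
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 raidz2 da6 da7 da8 da9 da10 da11
# The same 12 disks as six mirrored pairs instead:
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7 mirror da8 da9 mirror da10 da11
# Growing later: a pool of mirrors grows two disks at a time...
zpool add tank mirror da12 da13
# ...while the RAIDZ2 pool grows a whole six-disk vdev at a time.
zpool add tank raidz2 da12 da13 da14 da15 da16 da17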
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Hi, selecting 14 drives was decided by how many the case can fit and, mostly, the amount of storage I wanted. I figured I would have 12 drives for storage and 2 for parity. I've been reading about not using RAIDZ with ZFS because of the performance hit if a drive fails when data is critical (I should go enterprise, LOL). But for this build, if data is critical, should I be using mirrored vdevs, as this article explains? I'm a noob about ZFS, so thanks for the help.

If this article makes sense, how would I use all 14 disks that the case is capable of holding while using mirrored vdevs? Would I not use a big pool, and instead keep the pools to just two disks each, so that would be 7 pools? Total noob here.
You are using this system as a backup target, which means you want to maximize storage, so RAID-Z2 is your best bang for the buck. If you use mirrors, you waste a high percentage of your storage capacity on the mirror drives when you only need a couple of drives for redundancy. The decision to use mirrors should be based on the work the server is doing, not on the opinion of some random article you read. For that person, doing what they were doing, mirrors made sense. For what you are doing, they don't. If you are worried about responsiveness to multiple clients, increase the RAM. Writes are buffered in RAM first (ZFS's ARC, the Adaptive Replacement Cache, also keeps frequently read data in RAM) and flushed to the disks at the speed of the array. If you set up a RAID-Z2 array properly, you would be able to write to it at the speed of 10Gb Ethernet and never have a slowdown. It is about having the right combination of storage and speed.
If you were to use the 6 TB drives you originally suggested in 2 vdevs (in one pool), that would be 12 drives, and it would provide about 33 TB of practical, usable storage. Take that same number of drives (same size) and put them in mirrors, and you only get about 25 TB.
Now, theoretically, the array of mirrors would be able to transfer data at 20Gb Ethernet speed, but are you using 20Gb Ethernet?
The RAID-Z2 array would only be able to do about half the speed of 10Gb Ethernet, but if your clients are connected at 1Gb each and you have five clients accessing the pool simultaneously, you're still golden. Your pool can write fast enough to keep up with your clients.
Are you going to have a switch with a 10Gb connection to the server? I don't remember seeing that in your parts list.
In either configuration, the server will be much faster than a 1Gb Ethernet connection. The use case for mirrors is not mass storage; it is when the storage needs to be FAST.
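
The rough arithmetic behind those figures, assuming the usual TB-to-TiB conversion and the commonly recommended ~80% fill level (exact numbers depend on metadata and padding):

Code:
# 2 x 6-disk RAIDZ2 vdevs of 6 TB drives:
#   data disks = (6 - 2) x 2 = 8
#   raw data   = 8 x 6 TB    = 48 TB  ~ 43.7 TiB
#   at ~80% fill             ~ 35 TiB  -> roughly the "about 33 TB practical" above
# 6 x 2-disk mirror vdevs of 6 TB drives:
#   data disks = 6
#   raw data   = 6 x 6 TB    = 36 TB  ~ 32.7 TiB
#   at ~80% fill             ~ 26 TiB  -> roughly the ~25 TB figure above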
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,176
Your motherboard has an LSI 3008 controller, so you can flash that to IT mode and use a reverse breakout cable to connect it to your backplane
Nope, the X11s use SFF-8643 instead of individual SATA-style connectors, so he just needs an SFF-8643 to SFF-8087 cable (or several).
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,466
A SLOG (or ZIL), the only one of the two worth considering in a mainly-backup storage solution, is a write cache device to help performance in asynchronous write scenarios.
ZFS doesn't do a "write cache" on a separate device. See here for more information. A SLOG lets you move the ZFS intent log onto a separate device, which speeds up synchronous writes; it has no effect on async writes.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,466
Selecting 14 drives was decided by how many the case can fit
The case fits 24 disks, not 14. But you should not put that many (even 14, much less 24) into a single vdev; vdevs really shouldn't have more than 10 disks each. Twelve disks would be set up as two six-disk vdevs; sixteen as two eight-disk vdevs.
 

iHeartMacs

Explorer
Joined
Oct 5, 2017
Messages
56
The decision to use mirrors should be based on the work the server is doing, not on the opinion of some random article you read.
Can't argue with that!

If you were to use the 6 TB drives you originally suggested in 2 vdevs (in one pool) that would be 12 drives and it would provide about 33 TB of practical, usable storage.

Thank you. That's what I'll do then.

Now, theoretically, the array of mirrors would be able to transfer data at 20Gb Ethernet speed, but are you using 20Gb Ethernet?
The RAID-Z2 array would only be able to do about half the speed of 10Gb Ethernet, but if your clients are connected at 1Gb each and you have five clients accessing the pool simultaneously, you're still golden. Your pool can write fast enough to keep up with your clients.
Are you going to have a switch with a 10Gb connection to the server? I don't remember seeing that in your parts list.

I have about 10 clients wired at 1Gb and 4-5 on wireless. The switch is a Cisco 200 Series Smart Switch that has worked well.

In either configuration, the server will be much faster than a 1Gb Ethernet connection. The use case for mirrors is not mass storage; it is when the storage needs to be FAST.

I'll get a 10Gb switch just to take advantage of that.

Thank you for the helpful insight.
 

iHeartMacs

Explorer
Joined
Oct 5, 2017
Messages
56
The case fits 24 disks, not 14. But you should not put that many (even 14, much less 24) into a single vdev; vdevs really shouldn't have more than 10 disks each. Twelve disks would be set up as two six-disk vdevs; sixteen as two eight-disk vdevs.

I get it now after you posted something else that gave me the "ah-ha" moment I needed. Thank you very much for your helpful posts.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I'll get a 10Gb switch just to take advantage of that.
A switch like this would probably do great for you: https://www.neweggbusiness.com/product/product.aspx?item=9b-33-150-165
You might be able to find one a little less expensive, but this one has four SFP+ ports, and all you need to do is put a Chelsio SFP+ adapter in the NAS. Then all the 1Gb clients could access the NAS at full speed, and you would have three more ports for 10Gb connections if you have certain clients or other servers that need the speed.
 