BUILD Build list for new server - what am I missing?

Joined
Dec 2, 2015
Messages
730
I'm putting together a parts list to build a new server to become my main FreeNAS server. The original one will be repurposed to be an off-site backup server. The new server will have 8 drives at the start (7 drives in RAIDZ2 + a hot spare), but I'll add additional 7-drive vdevs as required in the future. The ultimate configuration would be 3 vdevs of 7 drives each in RAIDZ2, plus one or two hot spare disks (I travel quite a bit, for one or more weeks at a time, and want to be able to replace failed disks remotely if required). The server will be used for document and media storage, a backup server for 3 Macs, a Plex server for one or two users, an ownCloud server, and a low-traffic web server. I don't anticipate the server will see heavy usage.
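
For reference, here's a rough back-of-the-envelope on the space that layout works out to (assuming 4 TB drives throughout; actual usable space will be a bit lower after ZFS overhead and keeping the pool below ~80% full):

```python
# Rough capacity sketch for 7-wide RAIDZ2 vdevs of 4 TB drives.
# Illustrative only - real usable space is lower after ZFS overhead.
DRIVE_TB = 4
WIDTH = 7      # drives per vdev
PARITY = 2     # RAIDZ2
VDEVS = 3      # the ultimate configuration

raw_per_vdev = WIDTH * DRIVE_TB                  # 28 TB
usable_per_vdev = (WIDTH - PARITY) * DRIVE_TB    # 20 TB
print(f"per vdev : {raw_per_vdev} TB raw, ~{usable_per_vdev} TB usable")
print(f"3 vdevs  : {VDEVS * raw_per_vdev} TB raw, ~{VDEVS * usable_per_vdev} TB usable")
```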

My current server is running with a G3258, and the CPU usage is quite low, so I can start off with something of similar capability, as long as I can upgrade in the future when required.

I'd appreciate any advice from anyone with relevant experience or knowledge.

Motherboard: Supermicro X11SSM-F
CPU: G4400 for now, upgrading to E3-1220 v5 or E3-1240 v5 in the future, when required.
RAM: 32 GB (2 x 16 GB) Samsung M391A2K43BB1-CPB
Boot drive: Supermicro 16GB SATA DOM SSD-DM016-PHI
Storage: 8 x 4 TB WD Red (7 drives in RAIDZ2 + one hot spare)
PSU: SeaSonic Platinum SS-860XP2 860W (this should be good for up to 16 drives, but I'd need to upgrade it when I go beyond that)
Chassis: Norco RPC-4224

PCIe Expansion Questions:
  1. The X11SSM-F provides 8 SATA ports. I'll eventually need to handle 16 to 24 drives. Can this board handle up to two M1015 HBA cards in the future? Or, is there some better method to handle future needs for more drives?
  2. I hope to someday move to 10 Gb networking, once switch prices come down some more, and I get a Mac that can support it without requiring expensive Thunderbolt to 10 Gb adapter. Can this board support a 10 Gb card + HBA card(s) to support up to 24 drives?
All advice or comments are appreciated.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
The board has 2 x8 slots and 2 x4.

The LSI 2008 uses an x8 slot, but may work in an x4 at an average disk speed of 250 MB/s (reduced from 500).

This should be fine.
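
Roughly where those numbers come from (assuming a PCIe 2.0 controller like the SAS2008, with ~500 MB/s of usable bandwidth per lane, and all 8 drives streaming at once):

```python
# Per-drive bandwidth estimate for an 8-port HBA on a PCIe 2.0 link
# (e.g. an LSI SAS2008), assuming ~500 MB/s usable per lane and all
# 8 drives streaming at once.
MB_PER_LANE = 500
DRIVES = 8

for lanes in (8, 4):
    total = lanes * MB_PER_LANE
    print(f"x{lanes}: ~{total} MB/s total, ~{total // DRIVES} MB/s per drive")
```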

Alternatively you could add a SAS expander.

Modern 10GbE cards are normally happy with an x4 slot.

You can also use an x4 NVMe SSD in one of the slots.

I'm concerned that, with the SATA DOM, 7 drives and a spare, that's nine SATA devices. You have 8 ports.

Have you seen my 4224 build report?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Boot drive: Supermicro 16GB SATA DOM SSD-DM016-PHI
Storage: 8 x 4 TB WD Red (7 drives in RAIDZ2 + one hot spare)
So what's the plan? Hire a kid to sit inside the server and very, very quickly switch between the DOM and one of the drives and hope that FreeNAS doesn't mind trying to run two devices from one SATA port? :p
The X11SSM-F provides 8 SATA ports. I'll eventually need to handle 16 to 24 drives. Can this board handle up to two M1015 HBA cards in the future?
Easily.
Or, is there some better method to handle future needs for more drives?
An expander is a neater solution than multiple HBAs.
I hope to someday move to 10 Gb networking, once switch prices come down some more,
Shouldn't be too long.
and I get a Mac that can support it without requiring expensive Thunderbolt to 10 Gb adapter
That's going to be a while.
Can this board support a 10 Gb card + HBA card(s) to support up to 24 drives?
Yes, easily.
 
Joined
Dec 2, 2015
Messages
730
So what's the plan? Hire a kid to sit inside the server and very, very quickly switch between the DOM and one of the drives and hope that FreeNAS doesn't mind trying to run two devices from one SATA port? :p
Hmm - the manual says the board supports 8 SATA ports and 2 SATA DOM ports, and I interpreted that as meaning the SATA DOM ports were in addition to the other eight. Now that I study it more closely, I note that the SATA DOM ports are ports 0 & 1, and that they are two of the eight ports. Drat.

Thanks for picking that up.

I guess that puts me back to using mirrored USB flash drives to boot from. How is that working on the X11 boards? Or I could choose another board, or use an HBA card right from the start. Comments appreciated.
 
Joined
Dec 2, 2015
Messages
730
The board has 2 x8 slots and 2 x4.

The LSI 2008 uses an x8 slot, but may work in an x4 at an average disk speed of 250 MB/s (reduced from 500).

This should be fine.

Alternatively you could add a SAS expander.

Modern 10GbE cards are normally happy with an x4 slot.

You can also use an x4 NVMe SSD in one of the slots.

I'm concerned that, with the SATA DOM, 7 drives and a spare, that's nine SATA devices. You have 8 ports.

Have you seen my 4224 build report?
Thanks for the comments.

Yeah, I did see your build report, and it was the main thing that pushed me over the edge to consider the Norco chassis, rather than waiting for a used Supermicro SC846 in a suitable configuration to become available. The only SC846 stuff I can find at a price I like has a SAS1 backplane.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
The default fans on a Norco are non-PWM, and they are not high-quality silent fans.

So, if noise is a concern, then you need to budget to replace them.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I guess that puts me back to using mirrored USB flash drives to boot from. How is that working on the X11 boards? Or I could choose another board, or use an HBA card right from the start. Comments appreciated.

It works fine with 9.10+

I'd go with the USBs at the moment. You can then get your 2 reverse breakout cables and test all 24 bays in your chassis without getting an HBA.
 
Joined
Dec 2, 2015
Messages
730
The default fans on a Norco are non-PWM, and they are not high-quality silent fans.

So, if noise is a concern, then you need to budget to replace them.
I was aware of that, and do plan to buy a 120mm fan bulkhead and three 120mm PWM fans. I'll try your fan control script.

In the future, when I have the case fully populated with drives, I may need to switch to four 80mm PWM fans. It will be living in our basement, which is quite cool, so that should help with HD temps.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I was aware of that, and do plan to buy a 120mm fan bulkhead and three 120mm PWM fans. I'll try your fan control script.

In the future, when I have the case fully populated with drives, I may need to switch to four 80mm PWM fans. It will be living in our basement, which is quite cool, so that should help with HD temps.

I think the case comes with the 120mm bulkhead these days. At least mine did.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
The latter option is probably the best. Look at the X11SSL-CF.
 
Joined
Dec 2, 2015
Messages
730
The latter option is probably the best. Look at the X11SSL-CF.
I did consider the X11SSL-CF, but it is double the price of the X11SSM-F. The X11SSL-CF does add an LSI 3008 chip, but even with those extra SAS ports it still doesn't support enough drives to cover my plans once I add the second vdev. Thus the extra cost didn't seem to make sense for me.

My original server does have extra SATA ports, so perhaps I'll switch to a boot SSD on it, and repurpose its mirrored USB boot drives for the new server. The original server will be moved off-site, so anything I can do to improve its reliability is even more important than on the new main server.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
I'd approach this by thinking about the size of drives in relation to RAM capacity, and checking whether the platform is capable of delivering to expectations.

When adding the second and third vdev, you'd want to have enough RAM to comfortably support the storage and other functions.
In other words, the X11 series maxes out at 64GB, which one might argue puts some restraints on the size of applicable drives.
I.e., an 8-drive-wide RAIDZ2 starting off at 4TB drives puts you at 32TB raw.
Guesstimate the next vdev to consist of 6TB drives and the final third vdev of 8TB drives, and you'd end up at a total of 32+48+64 = 144TB raw.
Would you be comfortable running about 2.25TB of storage per GB of RAM? That's roughly 2.25:1 against the 'bendable' rule of thumb of 1TB per GB. I reckon this is probably touching the outer borders...
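
A quick sanity check on that ratio, using the guesstimated drive sizes above (the 1GB-of-RAM-per-TB rule of thumb is only a loose guide):

```python
# RAM-to-storage ratio for the guesstimated final pool: three 8-wide
# vdevs of 4, 6 and 8 TB drives against the board's 64 GB RAM ceiling.
vdev_raw_tb = [8 * size_tb for size_tb in (4, 6, 8)]   # 32, 48, 64 TB raw
total_raw_tb = sum(vdev_raw_tb)                        # 144 TB raw
ram_gb = 64

print(f"total raw: {total_raw_tb} TB")
print(f"ratio    : {total_raw_tb / ram_gb:.2f} TB of raw storage per GB of RAM")
```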
For a longer life span of the box, if it were my build, I'd definitely look into upgrading to the E5 platform.

Here are some suggestions:
X10SRL-F for the 'cheap' version, still with 10 SATA ports on board,
or X10SRH-CF for the 'fully loaded' version with an LSI 3008.
Then top it off with the Intel Xeon E5-1620 v4.

Immensely more powerful, overkill and all of that - yes.
But they solve the RAM issue and provide additional PCIe slots, while setting you up for a really neat upgrade when CPUs are retired from data centers in approximately 3 years and start turning up on eBay at killer prices.

That's what I'd look into doing. For the long term.
 
Joined
Dec 2, 2015
Messages
730
I'd approach this by thinking about the size of drives in relation to RAM capacity, and checking whether the platform is capable of delivering to expectations.

When adding the second and third vdev, you'd want to have enough RAM to comfortably support the storage and other functions.
In other words, the X11 series maxes out at 64GB, which one might argue puts some restraints on the size of applicable drives.
I.e., an 8-drive-wide RAIDZ2 starting off at 4TB drives puts you at 32TB raw.
Guesstimate the next vdev to consist of 6TB drives and the final third vdev of 8TB drives, and you'd end up at a total of 32+48+64 = 144TB raw.
Would you be comfortable running about 2.25TB of storage per GB of RAM? That's roughly 2.25:1 against the 'bendable' rule of thumb of 1TB per GB. I reckon this is probably touching the outer borders...
For a longer life span of the box, if it were my build, I'd definitely look into upgrading to the E5 platform.

Here are some suggestions:
X10SRL-F for the 'cheap' version, still with 10 SATA ports on board,
or X10SRH-CF for the 'fully loaded' version with an LSI 3008.
Then top it off with the Intel Xeon E5-1620 v4.

Immensely more powerful, overkill and all of that - yes.
But they solve the RAM issue and provide additional PCIe slots, while setting you up for a really neat upgrade when CPUs are retired from data centers in approximately 3 years and start turning up on eBay at killer prices.

That's what I'd look into doing. For the long term.

@Dice - thanks for your thoughts. You've brought up an interesting perspective that I hadn't considered until now. I hadn't considered that future vdevs may very likely have larger disk sizes, but that is quite possibly true. My vdevs will be seven disks each, to leave slots free for one or two spare disks, but your basic point is still quite valid.

The RAM vs storage rule of thumb is very vague, as our friendly Grinch often points out. It isn't at all clear whether it refers to raw storage space, usable storage space after subtracting space lost to parity, or space used by stored data. My load is quite low, but it is hard to predict what percentage of the RAM will be used to support jails.

I must admit I am a bit spooked by the large power numbers thrown around by some posters with large storage arrays. But it isn't clear how much of that is spent spinning the disks, how much is spent on cooling fans, and how much by motherboard, CPU and RAM. My current system, with six 4TB disks, idles at about 62W. I was hoping to keep the first iteration of this new system, with eight 4TB disks, to less than 100W.

I haven't ordered anything yet, so I'll ponder this some more. I'm still hoping I can find a decent used SC846 chassis, but that supply seems to have strangely dried up. Odds are the market will be flooded with them again the day after my Norco-based system is assembled.
 
Joined
Dec 2, 2015
Messages
730
I've realized that my originally planned motherboard and CPU, while they would be adequate for the initial configuration with 8 disks, would probably become swamped once I started adding additional vdevs in the future. In particular, I would likely eventually wish for more than 64GB RAM. I wondered whether it made sense to simply buy a more capable MB, CPU, RAM, etc now, accepting the additional investment and power consumption, or whether to start small and upgrade the MB, CPU, etc later.

I looked at the acquisition cost of the two options, and the estimated difference in power consumption. I assumed that I would outgrow the single-vdev system in roughly two years' time, and that I would sell the used motherboard, CPU and RAM. I used the known power consumption of my current system, some best guesses for the higher-power system provided by @Dice in another thread, and my current cost of electricity.

The original small system, with the X11SSM-F board and G4400 CPU, is estimated to consume 75W or less at idle. I assumed that I could sell the used MB, CPU and RAM for 50% of the purchase price. The breakeven point is where the higher-power system would consume 205W: if it consumes less than that, it is cheaper to simply purchase it now; if it consumes more than 205W, it is cheaper to go small now and upgrade later. If I assume I only recoup one third of the original purchase price when I sell the MB, CPU & RAM, the breakeven point becomes 240W.
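
For anyone who wants to rerun the comparison with their own numbers, the breakeven calculation looks roughly like this (the parts cost and electricity rate below are placeholders rather than the figures I actually used; only the ~75W idle, the resale fractions and the two-year horizon carry over from above):

```python
# Breakeven idle power: "buy the big platform now" vs "start small,
# resell the small parts and upgrade later". Placeholder inputs - with
# these made-up numbers the output will not match the ~205W/~240W above.
small_parts_cost = 450.0   # hypothetical cost of the small MB + CPU + RAM (USD)
electricity_rate = 0.12    # hypothetical cost per kWh (USD)
years = 2                  # time until the upgrade
small_idle_w = 75          # estimated idle draw of the small system

def breakeven_watts(resale_fraction):
    sunk = small_parts_cost * (1 - resale_fraction)             # money lost reselling
    usd_per_watt = electricity_rate * 24 * 365 * years / 1000   # cost to run 1 W for `years`
    return small_idle_w + sunk / usd_per_watt

for fraction in (0.50, 1 / 3):
    print(f"resale at {fraction:.0%}: breakeven ~{breakeven_watts(fraction):.0f} W idle")
```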

My proposed more capable system (X10SRL-F, E5-1650 v4 + 32GB RAM) would certainly consume less than 200W when spinning only 8 disks.

Conclusion - it makes sense to buy the more capable system now, rather than go through the pain of upgrading in two years.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
I've realized that my originally planned motherboard and CPU, while they would be adequate for the initial configuration with 8 disks, would probably become swamped once I started adding additional vdevs in the future. In particular, I would likely eventually wish for more than 64GB RAM. I wondered whether it made sense to simply buy a more capable MB, CPU, RAM, etc now, accepting the additional investment and power consumption, or whether to start small and upgrade the MB, CPU, etc later.

I looked at the acquisition cost of the two options, and the estimated difference in power consumption. I assumed that I would outgrow the single-vdev system in roughly two years' time, and that I would sell the used motherboard, CPU and RAM. I used the known power consumption of my current system, some best guesses for the higher-power system provided by @Dice in another thread, and my current cost of electricity.

The original small system, with the X11SSM-F board and G4400 CPU, is estimated to consume 75W or less at idle. I assumed that I could sell the used MB, CPU and RAM for 50% of the purchase price. The breakeven point is where the higher-power system would consume 205W: if it consumes less than that, it is cheaper to simply purchase it now; if it consumes more than 205W, it is cheaper to go small now and upgrade later. If I assume I only recoup one third of the original purchase price when I sell the MB, CPU & RAM, the breakeven point becomes 240W.

My proposed more capable system (X10SRL-F, E5-1650 v4 + 32GB RAM) would certainly consume less than 200W when spinning only 8 disks.

Conclusion - it makes sense to buy the more capable system now, rather than go through the pain of upgrading in two years.
Certainly, buy what you feel you need, but keep in mind that moving your existing disks and installation of FreeNAS over to a new system board is relatively painless. I have migrated to new hardware four times and only ever reinstalled FreeNAS when I wanted to have new boot media.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
I'd approach this by thinking about size of drives in relation to RAM capacity and check if the platform is capable to deliver to expectations.

When adding the second and third vdev, you'd want to have enough RAM to comfortably support the storage and other functions.
In other words - the X11 series MAX at 64GB put some restraints on the size of applicable drives one might argue.
Ie, on 8 drive wide raidz2 starting off at 4TB drives, you'd look at a 32TB RAW.
Guestimate the next vdev to consist of 6TB drives and the final third vdev consisting of 8TB Drives, you'd end up at a total of 32+48+64=144TB RAW.
Would you be comfortable running 2.5TB/GB of ram? That is approx 2.5:1 the 'bendable rule of thumb'. I recon this is probably touching the outer borders....
For a longer life span of the box, if it was my build, I'd definitely look into upgrading to the E5 platform.

Here are some suggestions:
X10SRL-F for the 'cheap version' ..still having 10 sata on board.
or X10SRH-CF for the "fully loaded version" with LSI3008
Then top it off with the Intel Xeon E5-1620v4.

Imensly more powerful, overkill and all of that - yes.
But they solve the RAM issue and provide additional PCIe slots while setting you up for a really neat upgrade when CPU's are discontinued from data centers in approx 3 years and turning up on e-bay to killer prices.

That's what I'd look into doing. For the long term.
Something you must keep in mind is the use of the system. The RAM does not need to be equal to the RAW storage capacity, only equal to the usable storage capacity. Additionally, the amount of RAM that is really needed depends on the number of transactions and the frequency with which the same data is accessed. If different data is being read, the caching that RAM provides will be of no use and it will just churn away, constantly replacing the data in cache with what was just used. If you only have a few users and they are constantly looking at different files, there really is no need for a massive RAM cache.
I would also suggest staying away from the massive drives, for the same reason that RAID-Z1 is not recommended with drives larger than 1 TB. I would not build an array with drives larger than 4 TB at this time. Better to have many vdevs with many 4 TB drives than one vdev with 6, 8 or 10 TB drives.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
The RAM does not need to be equal to the RAW storage capacity, only equal to the usable storage capacity.
That's not true, either.

It's a rule of thumb. It's deliberately vague and should only serve as general guidance, certainly not as some dogmatic imposition.
I would also suggest staying away from the massive drives, for the same reason that RAID-Z1 is not recommended with drives larger than 1 TB.
That doesn't make sense. Drive reliability is similar, when viewed as an integral of errors or failures over time.
If different data is being read, the caching that RAM provides will be of no use and it will just churn away, constantly replacing the data in cache with what was just used. If you only have a few users and they are constantly looking at different files, there really is no need for a massive RAM cache.
I wouldn't say that. The vast majority of scenarios will benefit from more ARC/L2ARC, assuming that the hit rate is low-ish. ZFS is smart enough to rationally manage ARC, even in odd scenarios.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
That's not true, either.

It's a rule of thumb. It's deliberately vague and should only serve as general guidance, certainly not as some dogmatic imposition.
By saying, "That's not true," you are indicating that I am WRONG, but as you say, it is a rule of thumb, so how can it be wrong? I base my statement on my personal experience with the FreeNAS builds that I have used in my home network. For example, the one that I use for backups has 16 GB of RAM but never uses more GB than the number of TB that is shared to the network; around 12 TB. The available RAM is never fully utilized. The other FreeNAS has 32 GB and is used to transcode video for Plex and host a VirtualBox VM in addition to a 12 TB network share. It uses every scrap of memory, but performance is fine, so does it really need more memory?

My point was that it depends on your use, and you should not spend money on memory that you don't need. The place I work recently bought some new workstations (single-user systems) with 24 TB of hard drive space and 256 GB of RAM. Total overkill, and these systems will never be fully utilized. If you don't need it (and you really need to think about that), you shouldn't spend the money. The other thing about it is that the cost of hardware will come down over time, so buy what you need now and plan to buy more later once the price comes down.
That doesn't make sense. Drive reliability is similar, when viewed as an integral of errors or failures over time.
I have an abundance of caution when it comes to storage because of some past data loss. My point with regard to really large drives (6 TB and up) is that a drive with 5 TB of data on it will take a long time to resilver. Right now, when the drives are young and healthy, that may be no issue, but what about 4 to 6 years from now when the drives start to fail? The heavy workload of sustained reads during the resilver puts all the other drives in the vdev under stress for the duration, and you could suffer a second failure before the resilver is completed. With a RAID-Z2 vdev, that isn't so bad, but it puts you in a position where there is no redundancy. So now you need to replace a second disk in the same group, and what if you have a third failure before the first resilver is finished? One of the things I like about smaller disks is that they resilver so quickly and get me back to a stable state.
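
For a rough sense of what "a long time" means, a floor estimate (best case at a steady sustained rate; fragmented, in-use pools resilver far slower):

```python
# Best-case resilver time: data on the replaced disk / sustained rebuild rate.
# Real RAIDZ resilvers on fragmented, in-use pools are usually much slower.
data_tb = 5
for rate_mb_s in (150, 100, 50):
    hours = data_tb * 1_000_000 / rate_mb_s / 3600
    print(f"{rate_mb_s} MB/s -> ~{hours:.0f} hours")
```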
If large drives are your thing, go for it, to each their own. You might want to think about it though. The organization that I work for just bought a shiny new SAN from Sun/Oracle (whatever their name is now), and they (Sun) put 14 shelves of drives per rack with 14 drives in each shelf, so that works out to 560 drives, and they are only 300 GB each. I am thinking that they must have a reason for using small drives instead of just a few of these new 10 TB monsters, because I assure you, price and power consumption were not concerns. We had to have the facility power upgraded to be able to connect this and our new virtualization cluster.
I would take photos, but (unlike Hillary) I am not immune to prosecution.
 