Is this hardware good enough for small business use?

Dave Cox

Cadet
Joined
Sep 23, 2016
Messages
6
I found this server on eBay. I would like to run 7x 4TB WD Red Pros with one SSD for read caching and one SSD for write caching, plus an SSD to run FreeNAS. The server is only for storage; my compute is totally separate.

Processor: 2x Intel Xeon E5-2620 v3, 2.4GHz, 6-core
Memory: 64GB DDR4 (4x 16GB DDR4 REG 2133)
Hard Drives: None
Controller: 1x AOC-S3008L-L8e HBA 12Gb/s (8 ports wired to the controller, 4 slots wired to onboard SATA3)

NIC: Integrated Intel X540 dual-port 10GBase-T plus dual 1GbE

Chassis/Motherboard specs:
Server Chassis/Case: Supermicro 1U CSE-801L, 12x 3.5" drive bays
Motherboard: X10DRL-iT
* Integrated IPMI 2.0 management
PCI expansion slots: 1x full-height PCIe x8 slot
HD Caddies: 12x 3.5" caddy
Power: 2x 600W PWS-606P-1R Platinum power supplies
 

Dave Cox

Cadet
Joined
Sep 23, 2016
Messages
6
Can anyone comment on the viability of this hardware? Everything I've been able to read checks out: the controller is compatible, it has ECC RAM and Intel NICs, and the CPUs are Haswell parts, so they're not terribly power-hungry. I don't think I've overlooked anything.
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
If you're running this for any business purpose, I'd mirror the boot pool. 12 cores seems like overkill for 7x WD Red Pros - I presume there's a Jail / Plugin / VM workload? Why the L2ARC (SSD read cache) vs. more RAM? You can stick 1TB of RAM in that motherboard, and RAM is always faster than an L2ARC.
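
If you do go the mirrored-boot route, the GUI handles it, but under the hood it's roughly the following - the pool and device names here are placeholders, so check yours with zpool status first:

Code:
# show the current boot pool layout
zpool status freenas-boot
# attach a second SSD to turn the single boot device into a mirror
# (ada1p2 is a placeholder - partition the new disk to match first)
zpool attach freenas-boot ada0p2 ada1p2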
 

Dave Cox

Cadet
Joined
Sep 23, 2016
Messages
6
If you're running this for any business purpose, I'd mirror the boot pool. 12 cores seems like overkill for 7x WD Red Pros - I presume there's a Jail / Plugin / VM workload? Why the L2ARC (SSD read cache) vs. more RAM? You can stick 1TB of RAM in that motherboard, and RAM is always faster than an L2ARC.



Thank you for taking the time to answer my question. I do intend to mirror the boot pool. No Jail/VM workload - the server's only function is as an iSCSI SAN for VMs run on a separate compute server. I have 12x 3.5" bays and a PCIe slot. So what about 5 WD Red Pros and 6x 2TB SSDs for the iSCSI SAN, with the PCIe slot for read cache and the last 3.5" bay for write cache?

As for RAM vs. L2ARC: I had no particular reason for choosing L2ARC over RAM. I didn't know adding RAM first was considered best practice.

If the CPUs are overkill, I could try to find a used single-processor option on eBay. I just haven't seen many that are Haswell or better.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
So what about 5 WD Red Pros and 6x 2TB SSDs for the iSCSI SAN, with the PCIe slot for read cache and the last 3.5" bay for write cache?
Where'd the 6x 2TB SSDs come from?

They'll definitely be well-suited for the iSCSI workload (use mirrors), and the WD Red Pro drives can be a RAIDZ2 (or Z1, if you're willing to accept the risk) for bulk data.
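
As a rough sketch at the command line (device names are placeholders for your actual disks; the GUI does the same thing):

Code:
# six SSDs as three mirrored pairs for the iSCSI/VM pool
zpool create ssdpool mirror da0 da1 mirror da2 da3 mirror da4 da5
# five Red Pros as RAIDZ2 for the bulk pool
zpool create tank raidz2 da6 da7 da8 da9 da10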

The PCIe slot should hold your "write cache" SLOG device - an Intel DC-series or Optane drive is your winner here - and the last 3.5" SAS/SATA bay can take the L2ARC "read cache."
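
Again as a sketch, with nvd0 and da11 as placeholder device names:

Code:
# PCIe NVMe device as SLOG for the sync-write-heavy iSCSI pool
zpool add ssdpool log nvd0
# SATA SSD in the last bay as L2ARC for the spinning-disk pool
zpool add tank cache da11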

The comments about RAM are correct - throw as much as you can afford in there first, because it's significantly faster than SSD.

There's more to dig into here, but I need to get to a full keyboard for that kind of stuff. One very real concern I'd like to raise, though: as soon as someone says the "B" or "C" words ("business" and "corporate"), you're generally in a different risk domain, where downtime is measured in dollars rather than in irritated family members - make sure to consider your build accordingly. A small amount of extra spend up front can prevent a massive loss later.
 

Dave Cox

Cadet
Joined
Sep 23, 2016
Messages
6
Where'd the 6x 2TB SSDs come from?

They'll definitely be well-suited for the iSCSI workload (use mirrors), and the WD Red Pro drives can be a RAIDZ2 (or Z1, if you're willing to accept the risk) for bulk data.

The PCIe slot should hold your "write cache" SLOG device - an Intel DC-series or Optane drive is your winner here - and the last 3.5" SAS/SATA bay can take the L2ARC "read cache."

The comments about RAM are correct - throw as much as you can afford in there first, because it's significantly faster than SSD.

There's more to dig into here, but I need to get to a full keyboard for that kind of stuff. One very real concern I'd like to raise, though: as soon as someone says the "B" or "C" words ("business" and "corporate"), you're generally in a different risk domain, where downtime is measured in dollars rather than in irritated family members - make sure to consider your build accordingly. A small amount of extra spend up front can prevent a massive loss later.

Thanks - so would you recommend just adding more RAM instead of an L2ARC, and using the remaining 3.5" bay for something else?

While the server is being used for a small business, it's mostly for home use. The business side is mostly file storage, and everything can easily be recovered from offsite storage if something dies.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
Thanks - so would you recommend just adding more RAM instead of an L2ARC, and using the remaining 3.5" bay for something else?

While the server is being used for a small business, it's mostly for home use. The business side is mostly file storage, and everything can easily be recovered from offsite storage if something dies.

More RAM is generally preferred over L2ARC - RAM is an order of magnitude faster than SSD, after all - but there's usually a cost or slot limitation. In the case of your X10 board there are only four DIMM slots per CPU, so you either need to increase density with (expensive) 32GB DIMMs or keep the second processor just to enable the other four slots.

I'd consider using all twelve bays for storage - a 6-drive Z2 is a good combination of space efficiency and redundancy for your WD Red Pro drives, which is where I assume your file storage would live. The 6-drive SSD setup, I assume, will be used for the iSCSI/VM workload; set it up in mirrors as mentioned to get 6TB of usable space, then carve some sparse ZVOLs out of it to let compression win you a bit more room. You could use a combination M.2 adaptor card to hold a pair of M.2 SATA SSDs for L2ARC for the spinning disks if needed (or mirrored boot SSDs), and then an M.2 NVMe SSD like the Optane P4801X 100G for your write cache.
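
For the sparse-ZVOL part, something along these lines - pool name, ZVOL name, size, and block size are just examples:

Code:
# compression applies to newly written blocks
zfs set compression=lz4 ssdpool
# a sparse (-s) ZVOL allocates space only as blocks are written,
# so compression savings stay available to the rest of the pool
zfs create -s -V 2T -o volblocksize=16K ssdpool/vm-extent1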
 

Bikerchris

Patron
Joined
Mar 22, 2020
Messages
208
Thanks - so would you recommend just adding more RAM instead of an L2ARC, and using the remaining 3.5" bay for something else?
I'm far from qualified to have my response trusted, but from what I've learned: yes, RAM over L2ARC. Once you've thoroughly tested the hardware and then run a typical workload, it's a simple command to see whether you actually need an L2ARC - it might just be a case of popping in more RAM if the slots aren't already fully populated. As for the spare 3.5" bay, there's never any harm in a hot spare, so that if a drive becomes an issue it's ready to be swapped in even when you're not around. I believe there's a script that will do the replacement automatically.

Only a thought, but if you find the CPUs run too hot, or the overall heat output is too much for the room the server is in, you may even be able to disable a few cores through the board's BIOS setup.
 