Build check - 4-node system


mensch90 (Cadet | Joined May 24, 2016 | Messages: 7)
Hello everyone,
I'm relatively new to FreeNAS and planning to replace an EMC-grade NAS (NFS) used as VM storage, since we currently get USB-stick-level performance out of it.
Our bandwidth and throughput requirements are not very high - at the moment a simple Debian machine with a two-disk RAID 1 (mdadm) and a single gigabit NIC has satisfied our needs.

I'm planning to build a Supermicro 4-node system together with an identical 4-node backup system kept in sync via ZFS snapshot replication. In the event of a failure I would move the dedicated storage IP over to the backup system.
The reason for 4 nodes: the workload is split across 4 systems, so the impact of any single failure is also spread across multiple nodes.
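
Roughly what I have in mind for the replication side - just a sketch to illustrate the idea; the dataset "tank/vms" and the host "backup-node" are placeholders, and in practice I'd probably let the FreeNAS periodic snapshot and replication tasks do this rather than a script:

```python
#!/usr/bin/env python3
"""Sketch of the planned snapshot replication between the main and backup node.
The dataset "tank/vms" and host "backup-node" are placeholders, not real names."""
import subprocess
from datetime import datetime, timezone

DATASET = "tank/vms"         # hypothetical dataset holding the VM images
BACKUP_HOST = "backup-node"  # hypothetical hostname of the backup system


def replicate(previous_snap=None):
    """Take a snapshot and send it (incrementally after the first run) to the backup."""
    new_snap = f"{DATASET}@auto-{datetime.now(timezone.utc):%Y%m%d-%H%M}"
    subprocess.run(["zfs", "snapshot", new_snap], check=True)

    send_cmd = ["zfs", "send"]
    if previous_snap:
        send_cmd += ["-i", previous_snap]   # incremental send relative to the last snapshot
    send_cmd.append(new_snap)

    # zfs send | ssh backup-node zfs receive
    send = subprocess.Popen(send_cmd, stdout=subprocess.PIPE)
    subprocess.run(["ssh", BACKUP_HOST, "zfs", "receive", "-F", DATASET],
                   stdin=send.stdout, check=True)
    send.wait()
    return new_snap


if __name__ == "__main__":
    replicate()  # first run is a full send; later runs pass the previous snapshot name
```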

Configuration provided by our distributor:
* System: Supermicro 4U chassis F424AS-R1K28B with Supermicro X10DRFR board
* CPU per node: 1 x Intel Xeon E5-2603 v4 (1.70 GHz, 15 MB cache, 6 cores)
* MEM per node: 2 x 16 GB ECC registered DDR4-2133
* OS storage per node: 2 x Supermicro 32 GB SATA DOM, mirrored
* VM storage per node (nodes 1-3): 8 x 2 TB WD RE (RAID Edition, WD2004FBYZ) - 2 x RAID 10 - main data storage for filers etc.
* VM storage per node (node 4): Intel SSD 535 Series 240 GB 2.5" SATA 6 Gb/s - 2 x RAID 10 - storage for databases (IOPS)
* Ethernet per node: 2 x Intel i350-AM2 (82574L) (onboard) - 1 x management network, 1 x storage network (since LACP doesn't double the bandwidth ;))

I don't think I need any caching SSDs or a larger amount of RAM for storage of this size - am I right about that assumption?
Are there any pitfalls in this configuration?

Thank you in advance!
Kind regards
 

Dice (Wizard | Joined Dec 11, 2015 | Messages: 1,410)
Welcome to the Forums.

At first glance, the proposed system specs are quite unusual compared to what normally passes through the forums as a "check my build".
There is a lot going on in this build that might need attention. I'll try to start by addressing some points that I find a bit odd.

Really, before going any further, a more detailed description of the use case would help us give advice (for example the number of users, VMs, or whatever else you can think of).

Notes on the hardware choices:
- The heavy CPU combined with very limited RAM is very unusual. Typically the reason for going with E5 CPUs is to get above the 64GB RAM limitation of E3 systems.
- FreeNAS is far more RAM-dependent than CPU-hungry, by miles.

In spite of that, I'll be shooting somewhat in the dark:
- If you intend to continue using NFS for VM shares, you should probably search the forums for threads arguing in favor of iSCSI with regard to speed.
- When choosing iSCSI you'd also want to do a second search on RAM requirements, as well as having a read through the documentation. There is a highly enlightening thread on this topic that I read just today but cannot find again (?!).

The conclusion of the first glance at your build:
Probably a super-overkill CPU
Probably not enough RAM

Cheers /
 

Dice (Wizard | Joined Dec 11, 2015 | Messages: 1,410)
Eh, I just checked the chassis. I didn't realize it was a 4U, 4-node type of machine.
What is the reasoning that keeps you from getting separate machines, to account for failures that would affect the entire rack?
(Off-site constraints? Something else?)

Cheers /
 

mensch90 (Cadet | Joined May 24, 2016 | Messages: 7)

Thank you for your insights.

We are running about 40 VMs with low-performance tasks and about 10 with higher requirements (PostgreSQL, LDAP, MySQL).

Heavy CPU: sadly the distributor can't deliver the system with a lower-class CPU - this is already the smallest CPU option for this configuration.
iSCSI/NFS: for compatibility and hypervisor reasons (RHEV > oVirt) we have been using NFS v3 for 3 years now, and our self-built Debian NFS server has always delivered full gigabit throughput.
RAM: I followed the guideline of 1 GB of RAM per 1 TB of storage (8 TB of storage -> 32 GB RAM).

The main reason for splitting up the machines was redundancy in case of failure: if an error occurs, only a small fraction of the VMs will be affected.
 

Dice (Wizard | Joined Dec 11, 2015 | Messages: 1,410)
Alright.
The additional information indeed points the requirements in different directions.

In case you've not come across the quote "ZFS lives and dies by its cache", now is the time. That's where RAM comes into play. I doubt this system would perform to your liking with less than 64GB of RAM available to FreeNAS.
The guidelines are geared towards home-user environments. Your build and requirements are something completely different, and need experience (that I do not have) to properly advise on complete hardware requirements.
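
To put a number on the "lives and dies by its cache" point: once the box is running you can watch the ARC counters via sysctl and see how often reads are served from RAM. A quick sketch - the arcstats OIDs are quoted from memory, so double-check them with "sysctl -a | grep arcstats" on the actual system:

```python
#!/usr/bin/env python3
"""Rough ARC health check on FreeBSD/FreeNAS via sysctl.
The kstat.zfs.misc.arcstats.* OIDs are assumed from memory; verify them first."""
import subprocess


def sysctl(oid):
    """Return a numeric sysctl value."""
    out = subprocess.run(["sysctl", "-n", oid],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.strip())


hits = sysctl("kstat.zfs.misc.arcstats.hits")
misses = sysctl("kstat.zfs.misc.arcstats.misses")
size = sysctl("kstat.zfs.misc.arcstats.size")

ratio = hits / (hits + misses) if (hits + misses) else 0.0
print(f"ARC size: {size / 2**30:.1f} GiB, hit ratio: {ratio:.1%}")
# A persistently low hit ratio under real VM load is the usual sign that the
# box wants more RAM (and only then, possibly, an L2ARC on top of it).
```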

Some additional input on drive configuration that you might want to keep in mind:
Running ~40 VMs calls for a mirrored drive setup. A RAIDZ2 vdev would give you roughly the IOPS of its slowest single drive, regardless of how many disks are in it. I don't know for sure, but I doubt you'd be able to run all these VMs off the IOPS capacity of a single drive. That said, you would be looking at a mirrored setup, or rather several pairs of mirrors in a single pool.
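
Back-of-the-envelope numbers to illustrate the difference (the ~150 random IOPS per 7200 rpm drive figure is an assumption for the sketch, not a measurement):

```python
# Rough random-IOPS comparison of an 8-disk RAIDZ2 vdev vs. four 2-way mirrors.
# Assumption: ~150 random IOPS per 7200 rpm drive (real figures vary).
PER_DISK_IOPS = 150
DISKS = 8

# A single RAIDZ2 vdev behaves roughly like one disk for random I/O,
# no matter how many disks are in it.
raidz2_pool = 1 * PER_DISK_IOPS

# Four 2-way mirror vdevs: each vdev contributes roughly one disk of write IOPS,
# and reads can be served from either side of a mirror, so reads scale better still.
mirror_pool = (DISKS // 2) * PER_DISK_IOPS

print(f"8-disk RAIDZ2 pool   : ~{raidz2_pool} random write IOPS")
print(f"4 x mirror vdev pool : ~{mirror_pool} random write IOPS")
```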

If you're lucky, the resident grinch @jgreco might shed some light on how to proceed.

Cheers /
 

jgreco (Resident Grinch | Joined May 29, 2011 | Messages: 18,680)
FatTwin - bad. Two Twins - maybe OK.

E5-2603v4 - bad. E5-1650 v3 or E5-2637 v4 - good.

32GB RAM - bad. 64-128GB (or more) RAM - good. ZFS needs lots of RAM to identify useful ARC content.

2TB HDD - very bad. 4 or 6TB HDD - much better. The more free space you have on a pool, the faster writes will be.

RAID10 - very bad. Just hoping you meant "mirror vdevs".

If you cannot find an integrator who will sell you what you need, that is not an excuse to settle. Find one who'll sell you something good.

So if you want something you'll be *really* a lot more happy with, follow along here:

Three HDD based systems:

SC826BE1C-R920LPB with X10SRL, 2 x 32GB RDIMM, E5-1650 v3, Intel HHHL PCIe NVMe 750 400GB (SSDPEDMW400G4X1) for L2ARC (kinda optional), 2 x 32GB SATA DOMs for boot.

8 x WD 6TB drives. WD Red is probably fine. WD Re if you must. Leaves 4 bays free to expand. You can stick a warm spare in one of those bays.

Leaves room for Chelsio T520-CR if you ever go 10G, or Intel HHHL SSD such as the 750 400GB or DC P3700 if you need SLOG.

One SSD based system:

SC216BE1C-R920LPB with X10SRL, 2 x 32GB RDIMM, E5-1650 v3, 2 x 32GB SATA DOMs for boot.

Do not get anything smaller than the Intel 535 480GB SSD. The 240's etc are a waste. ZFS works better with free space. Give it some.

All of these systems would be excellent performers that are also reasonably expandable, all can be retrofitted to do 10Gbps, or 128GB RAM, or more space, or whatever, without wasting any existing parts.
 

mensch90 (Cadet | Joined May 24, 2016 | Messages: 7)
Thank you for sharing your real-life experience - it is very useful for getting the CPU, RAM, and HDD dimensions right.
I will use 2 x RAID 1 (mirror) vdevs in a zpool, which is comparable to RAID 10 - correct?
My distributor's configurator doesn't allow me to attach the disks directly to the mainboard's 10-port SATA controller - they force me to use an HBA. Having read about a lot of trouble keeping firmware and drivers in sync on these HBAs, I wanted to avoid using one. After some research I would go with an "Avago SAS II HBA 9207-8i" card - is this the way to go?
 

pirateghost (Unintelligible Geek | Joined Feb 29, 2012 | Messages: 4,219)
Where are you buying from where you can't even control where you plug in your own hard drives? I'm confused by this "limitation".
 

pirateghost (Unintelligible Geek | Joined Feb 29, 2012 | Messages: 4,219)
Oh, I had to go pull up the motherboard specs. I see.

Yeah, it looks like you will have to use an HBA.

Seems like a really expensive build, lol. Have you looked at just getting a TrueNAS system from iXsystems and getting HA capabilities?
 

mensch90 (Cadet | Joined May 24, 2016 | Messages: 7)
I did some research on the forum and sadly there are no reliable distributors in Europe (maybe in Great Britain) or Germany. The HA option isn't required - a second backup system will be kept in sync via snapshots every 30 minutes.
But which HBA? The "Avago SAS II HBA 9207-8i" is said to be stable in terms of firmware and driver support in FreeNAS... is that a safe bet?
 

pirateghost (Unintelligible Geek | Joined Feb 29, 2012 | Messages: 4,219)
Not sure about that particular model. @jgreco has a thread around here somewhere (probably a sticky in this forum) about LSI cards.
 
Joined Apr 9, 2015 | Messages: 1,258
I will use 2 x RAID 1 (mirror) vdevs in a zpool, which is comparable to RAID 10 - correct?


Kinda - in ZFS it is literally mirrors. You kind of have to take a good chunk of what you know about RAID and then forget it. https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

What you will want to do is create a pool of multiple mirrored vdevs. A single pair of mirrors would be as close to RAID 10 as you can get, but the more spindles the better, so instead of just one pair of mirrors I believe what jgreco is suggesting is eight drives arranged as four mirrored vdevs - see the sketch below.
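
In command form, that eight-drive layout would look roughly like this (a sketch only: the da0..da7 device names and the pool name "tank" are placeholders, and on FreeNAS you would normally build the pool through the GUI volume manager):

```python
#!/usr/bin/env python3
"""Sketch: build an 8-drive pool as four mirrored vdevs.
Device names da0..da7 and the pool name "tank" are placeholders."""
import subprocess

disks = [f"da{i}" for i in range(8)]   # hypothetical device names

cmd = ["zpool", "create", "tank"]
for a, b in zip(disks[0::2], disks[1::2]):
    cmd += ["mirror", a, b]            # each "mirror a b" group becomes one vdev

print(" ".join(cmd))  # -> zpool create tank mirror da0 da1 mirror da2 da3 ...
# subprocess.run(cmd, check=True)      # uncomment to actually create the pool
```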


Based on a post elsewhere by jgreco, the 9207-8i should be "fine": https://forums.freenas.org/index.php?threads/lsi-9207-8i-firmware-question.13062/
 

jgreco (Resident Grinch | Joined May 29, 2011 | Messages: 18,680)
Oh, sorry, I meant to touch on the HBA. At this point the LSI 3008-based stuff is stable, and with SSDs in one of the boxes it's probably "the way to go". The Supermicro part would be the AOC-S3008L-L8E, while the generic LSI part is the 9300-8i. I still prefer the 6Gbps stuff because there are so many more millions of hours on it, but enough people have been using the 12Gbps stuff for enough time that it ought to be just dandy.

The "1C" versions of the chassis contain a 12Gbps expander backplane so it's also more future-proof that way, though of course this stuff is generally very compatible and you can mix 6 and 12 as long as you work out the cabling.
 