Will this setup be optimal?

bmoreitdan

Dabbler
Joined
Oct 16, 2018
Messages
30
I'm looking for community input on this setup.

Hardware
Server: Dell R620
CPU: 2x E5-2670
Mem: 32GB
HBA: Dell HBA330
Network: 2x 10Gb SFP+, 2x 1GbE
Disks 1-4: Samsung Evo 500GB SSD
Disks 5-8: Samsung Evo 1TB SSD

Software
FreeNAS 11.2 RC2

Configuration
10Gb SFP+ connections in LACP for iSCSI and NFS traffic
1GbE for web management
Disks 1-4 in two mirrored pairs (net 1TB usable) as an iSCSI pool for the VM operating systems. This pool will also host a MySQL database inside one of the VMs.
Disks 5-8 in two mirrored pairs (net 2TB usable) as an NFS pool for shared data mounted by the VMs. This is for web hosting, so lots of small file reads and writes.
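For reference, here is a minimal sketch of this planned layout as shell commands (the device names da0-da7, the lagg member names ix0/ix1, and the pool/zvol names are all hypothetical; in FreeNAS these steps would normally be done through the web UI rather than the shell):

# LACP aggregate over the two 10Gb SFP+ ports (assumed interface names ix0/ix1)
ifconfig lagg0 create laggproto lacp laggport ix0 laggport ix1

# VM pool: disks 1-4 as two mirrored pairs (~1TB usable)
zpool create vmpool mirror da0 da1 mirror da2 da3
# A zvol on that pool to export over iSCSI for the VM operating systems
zfs create -V 500G vmpool/vms

# Shared pool: disks 5-8 as two mirrored pairs (~2TB usable), exported over NFS
zpool create shared mirror da4 da5 mirror da6 da7
zfs create shared/webdata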

Notes
No SLOG or L2ARC at this time (the ZIL itself always lives in the pool unless moved to a dedicated SLOG device). Should I add one for this configuration?

Installation
This server will be installed in a large datacenter with battery backup. In the 5 years I've been in this DC, it hasn't experienced any power outages.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
You asked about optimal; that would imply it's the best possible way to go, so I can only say no, this is not optimal.
While it might do whatever you are looking to do, optimizing it would require a more detailed analysis of the workloads you have in mind.


Sent from my SAMSUNG-SGH-I537 using Tapatalk
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
PS. The RC (Release Candidate) is not the version of FreeNAS I would suggest you use. FreeNAS 11.1-U6 is stable, whereas 11.2 is still a little buggy.

Sent from my SAMSUNG-SGH-I537 using Tapatalk
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Just from your description of the plan, I would suggest a redesign. Have you already purchased the system, or is there still time to adjust the configuration?
How many drives could be installed in the chassis?
Is there a budget limit that is constraining the configuration?

Sent from my SAMSUNG-SGH-I537 using Tapatalk
 

bmoreitdan

Dabbler
Joined
Oct 16, 2018
Messages
30
@Chris Moore - Thank you for your feedback. The chassis can hold 10 drives. I intend to use this chassis for both VM storage and shared storage (which must be over NFS). I have already purchased the parts but am open to a redesign. The budget is flexible.

I'm very interested in knowing more about your opinion of the system. If you were to redesign it, how would you build it?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The reason for the suggestion is greater IOPS, which matters both for the VMs and for the database. You should use as many disks as possible, potentially even adding an expansion shelf: more disks (and thus more vdevs) roughly equals more IOPS, up to the limit of the SAS controller.

You still want the disks in mirrored pairs to protect against disk failure, but with ten drive bays you could have 5 mirrored pairs, each mirror set being a vdev (virtual device), all in a single pool (sketched below). That would more than double the IOPS of having just two vdevs per pool. If you use all 1TB SSDs (they need to be good quality drives, like the Intel DC series), this also gives you more total capacity. You can split the storage logically using datasets and even set quotas on how much space each dataset is allowed to consume.

Because of the iSCSI and the NFS, this system is going to need a SLOG device (an NVMe SSD in a PCIe slot). I use an Intel 3D XPoint DC P4800X 375GB NVMe PCIe 3.0 drive in one of the servers at work, and it is pretty great.
https://www.intel.com/content/www/u...data-center-ssds/optane-dc-p4800x-series.html
We also have an Intel DC P4600 2TB (NVMe PCIe 3.0 x4, 3D TLC) in one of the servers:
https://www.intel.com/content/www/u...-drives/data-center-ssds/dc-p4600-series.html
Also a great drive.
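To make that concrete, here is a rough sketch of the suggested single pool (the device names da0-da9 and the pool/dataset names are hypothetical; FreeNAS would normally build this through the web UI):

# One pool of five mirrored vdevs; IOPS scale with the number of vdevs
zpool create tank \
    mirror da0 da1 \
    mirror da2 da3 \
    mirror da4 da5 \
    mirror da6 da7 \
    mirror da8 da9

# Split the space logically with datasets and cap each one with a quota
zfs create -o quota=1T tank/vms
zfs create -o quota=2T tank/webdata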
NFS is going to be doing sync writes, and that is what calls for the SLOG device; otherwise those writes will be slow.
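Here is a sketch of how the SLOG ties in, assuming the NVMe drive shows up as nvd0 (a hypothetical device name):

# Attach the NVMe SSD as a dedicated log (SLOG) device
zpool add tank log nvd0

# NFS issues sync writes on its own; for the iSCSI zvol you can force the
# same guarantee so the SLOG covers it too
zfs set sync=always tank/vms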
You can read more about SLOG at these links if you want to:

The ZFS ZIL and SLOG Demystified
http://www.freenas.org/blog/zfs-zil-and-slog-demystified/

Testing the benefits of SLOG using a RAM disk! (specific post)
https://forums.freenas.org/index.ph...s-of-slog-using-a-ram-disk.56561/#post-396630

Testing the benefits of SLOG (whole thread)
https://forums.freenas.org/index.php?threads/testing-the-benefits-of-slog-using-a-ram-disk.56561

SLOG benchmarking and finding the best SLOG
https://forums.freenas.org/index.ph...-and-finding-the-best-slog.63521/#post-454773
 

bmoreitdan

Dabbler
Joined
Oct 16, 2018
Messages
30
For the SLOG, what I did with another server (one operating in a different datacenter, which is working very well; no performance issues detected, at least) is to use a combination of a PCIe M.2 adapter card (https://www.amazon.com/StarTech-com-PEX4M2E1-M-2-Adapter-Profile/dp/B01FU9JS94) that accepts an M.2 NVMe SSD (https://www.microcenter.com/product...-nvme-3-x4-m2-2280-internal-solid-state-drive). What are your thoughts on that? The alternative might be something like this: https://www.microcenter.com/product...nand-nvme-hhhl-aic-internal-solid-state-drive
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Those are good quality drives; they should be fine.
These Optane drives perform well, and several users on the forum have tried them with success. The only concern is that they don't have the kind of power loss protection that the DC drives have. Still, that is only a risk if there is an unintended shutdown of the server (any kind of crash, not just a power failure). No crash, no problem; and even if there is a crash, it may not cause a fault. It is just a risk.
 

Ender117

Patron
Joined
Aug 20, 2018
Messages
219
You appear to be trying to save a few bucks on the SLOG; just don't. It will likely be your bottleneck (after the network) and will get hammered by lots of writes. Get a fast, reliable DC SSD with lots of endurance.

PS: CPU frequency has a big impact on sync write performance, while more cores do not necessarily help in FreeNAS. A pair of E5-2637 (v2) CPUs might be a better fit for your needs.
This may also be worth a look: https://forums.freenas.org/index.ph...nd-improve-slog-sync-write-performance.70533/

The above assumes you care about write IOPS; if your workload is not write intensive, this may not be as relevant.
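If you want to measure how much sync writes actually cost on a given pool, one rough test along these lines can help (the pool name tank and the dataset are hypothetical, and sync=disabled here is only for the comparison, never for production):

# Dataset that forces every write through the ZIL/SLOG path
zfs create -o sync=always tank/synctest
dd if=/dev/zero of=/mnt/tank/synctest/test.bin bs=128k count=20000

# Same write with sync off, for comparison
# (note: with lz4 compression enabled, zeros compress away, so prefer a
# file of random data for absolute numbers; the relative gap still shows)
zfs set sync=disabled tank/synctest
dd if=/dev/zero of=/mnt/tank/synctest/test.bin bs=128k count=20000

# Clean up
zfs destroy tank/synctest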
 