Need advice on my new beast


oni.kage

Cadet
Joined
Feb 16, 2018
Messages
7
Hi Folks,

My current NAS is nearly at capacity, and I am hitting major performance walls due to the motherboard's RAM limit and its desktop-grade hard drives. I'm planning a large/expensive build, and I was hoping you could steer me toward the best decisions to get the most performance out of my new system.

On order are 26x 8TB HGST Ultrastar He10 SAS hard drives (0F27356). My plan is to use 24 of them in the zpool and keep the other two on hand as cold spares. What RAID grouping do you recommend? I was considering two RAID-Z2 vdevs of 12 drives. Perhaps a better option would be 3 RAID-Z2 vdevs of 8 drives? Which would give better performance? I do care about data protection, but performance and capacity are a bit more important to me for this build.

I like working with Supermicro hardware and I am comfortable around it. I was thinking about buying a barebones 6048R-E1CR24L. It's a 24-bay chassis with two 2.5" supplementary bays and a Broadcom 3008 (part number AOC-S3008L-L8e) flashed in IT mode. What are your thoughts on the backplane and HBA of this kit? Should I be using multiple HBAs, or something higher-end like the 3200 series?

I was thinking about doing 256GB of RAM. I hope that's enough? For CPUs, I'll use whatever lowish-end Xeon v4s I can get a good buy on.

For ZIL, I'll probably spring for a PCIe SSD like the Intel DC P3700. Thoughts here? I also have two 2.5" 1TB SSDs on hand already that I will probably stripe for the L2ARC. I'll install the OS on a SATA DOM or a fixed internal SSD.
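
In rough shell terms, what I have in mind for the cache devices is something like this (just a sketch; the pool and device names are made up, and I know the FreeNAS GUI would normally handle this):

Code:
# Add both 1TB SSDs as L2ARC cache devices. ZFS spreads reads across
# cache devices automatically; no redundancy is needed, since L2ARC
# only holds checksummed copies of data already on the pool.
zpool add tank cache ada1 ada2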

Thanks for any advice you can offer.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Perhaps a better option would be 3 RAID-Z2 vdevs of 8 drives?
Absolutely better if you want it fast; in fact, 4 vdevs of 6 drives would be even faster. Speed, particularly random IOPS, scales with vdev count, since ZFS stripes across vdevs and a RAID-Z vdev delivers roughly the random IOPS of a single member disk.
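If you were doing it from the shell, the 4x6 layout would look something like this (a sketch only; the pool and device names are hypothetical, and the FreeNAS GUI, which labels disks by gptid, is the normal way to build it):

Code:
# Four 6-disk RAID-Z2 vdevs: data is striped across all four,
# so the pool gets roughly 4x the random IOPS of a single vdev.
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 \
  raidz2 da6 da7 da8 da9 da10 da11 \
  raidz2 da12 da13 da14 da15 da16 da17 \
  raidz2 da18 da19 da20 da21 da22 da23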
I like working with Supermicro hardware and I am comfortable around it. I was thinking about buying a barebones 6048R-E1CR24L. It's a 24 bay chassis with two 2.5" supplementary bays and a Broadcom 3008 (part number AOC-S3008L-L8e) flashed in IT mode. What are your thoughts on the backplane and HBA of this kit? Should I be using multiple HBAs, or, something higher-end like the 3200 series?
I like it.
I was thinking about doing 256GB of RAM. I hope that's enough? For CPUs, I'll use whatever lowish-end Xeon v4s I can get a good buy on.
The memory should be fine, depending on what you want to use the system for. I would go with a low-core-count CPU (or a pair of them) that has a higher clock speed. The cheapest Xeon is around 1.7GHz, and I wouldn't wish that on my worst enemy; a low clock hurts the many things that are single-threaded (Samba, for example, is limited to one core per client). I am in the process of procuring a new system at work, and it will have dual 4-core Xeons, which (with hyperthreading) give me 16 threads at about 3.5GHz. That should make it much more responsive than the system we got last year, which runs 32 threads at 2.4GHz.
For ZIL, I'll probably spring for a PCIe SSD like the Intel DC P3700. Thoughts here? I also have two 2.5" 1TB SSDs on hand already that I will probably stripe for the L2ARC. The OS I will install on a SATA DOM or a fixed internal SSD.
This makes it sound like you are looking to do some iSCSI block storage...

Can you give some guidance about how you plan to use the system? Without knowing for sure, I am guessing, but it looks decent.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
24x 8TB drives? That's a lot of pr0n. :)

Are you doing block storage (like a VM filestore), or file storage? If the latter, you don't need a SLOG (every pool has a ZIL... a SLOG is when you move it off-pool). If the former, you shouldn't be considering RAID-Z-anything... the proper answer is striped mirrors, 2-way or 3-way depending on your level of paranoia.
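
To make that concrete, here is a rough sketch of the striped-mirror layout (2-way shown; the pool and device names are hypothetical, and on FreeNAS you'd build this through the GUI):

Code:
# 12 striped 2-way mirrors from 24 drives: ~50% usable capacity,
# but about 12 vdevs' worth of random IOPS for block workloads.
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7 \
  mirror da8 da9 \
  mirror da10 da11 \
  mirror da12 da13 \
  mirror da14 da15 \
  mirror da16 da17 \
  mirror da18 da19 \
  mirror da20 da21 \
  mirror da22 da23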
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Are you doing block storage (like a VM filestore), or file storage?
That threw me off also.
Thanks for any advice you can offer.
We really need to know how you are planning to use this because you have some options here that don't go together.
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
SATA DOM devices are usually cheap and not durable.
That's not true and you should be careful throwing out dubious information like that to new users.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
SATA DOM devices are usually cheap and not durable.
I've not observed this to be the case, and they certainly aren't cheap--they're typically considerably more expensive than a standalone SATA SSD.
 

oni.kage

Cadet
Joined
Feb 16, 2018
Messages
7
Sorry for the confusion. I am not doing any block storage. The NAS will be 85% NFS and 15% SMB. It's a heavy random-IO workload. At any given time I will have 10-15 VMs running, 50-100 Linux ISOs seeding, files unRARing, 20 Plex users streaming, and backups to cloud running. My current NAS hits disk-IO bottlenecks long before network bottlenecks (10Gb LAN with 1Gb WAN), and everything just slows to a crawl. My music streaming over SMB will start stuttering. It's annoying, and I want to take the sledgehammer approach to solving my performance problems. :)
Absolutely better if you want it fast, in fact 4 vdevs of 6 drives would be even faster. Speed is related to vdev count.
That's interesting! I figured striping across more disks per vdev would be faster than striping across more vdevs. I will follow your suggestion.
I am in the process of procuring a new system at work and it will have dual 4 core Xeons which will (with hyperthreading) give me 16 threads but it is at about 3.5GHz
I know which CPUs you are referring to and we buy those like crazy at work as well. So much of our software is single-threaded and I just *facepalm* when folks buy, like, 14 cores @ 2.0 GHz. I will most likely get those frequency-optimized Xeons, even though I think they are a bit pricey.

The reason I was looking at a SLOG is that I export my NFS shares with sync. Maybe I am old-fashioned, but I am a bit uncomfortable using async. I've observed that sync NFS without a SLOG gives you really bad performance. Maybe I'm missing something?
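
For clarity, what I am picturing once the pool exists is something like this (a sketch with made-up names; nvd0 is where I'd expect a PCIe SSD to show up under FreeBSD):

Code:
# Attach the PCIe SSD as a SLOG so sync NFS writes are committed to
# flash instead of waiting on the spinning data vdevs.
zpool add tank log nvd0
# Keep sync at the default so NFS clients' sync requests are honored.
zfs set sync=standard tank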
I have been told that the Optane 900P is a better choice
Looks good to me!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
That's not true and you should be careful throwing out dubious information like that to new users.
It has been a long time since I tried one, but I didn't find it any more reliable than a USB memory stick. That is my experience, and they are expensive enough that I am not giving them a second chance. For my money, I would rather just use a regular SSD, but if you are space constrained, I suppose you don't have that option.
 

loch_nas

Explorer
Joined
Jun 13, 2015
Messages
79
Another advantage of SATA DOMs is that one can save on PSU power connectors. But I agree with @Chris Moore that normal SSDs are better value for money. Even if space is a concern, with double-sided adhesive tape it's no problem to put a 2.5" SSD anywhere in a chassis.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I've not observed this to be the case, and they certainly aren't cheap--they're typically considerably more expensive than a standalone SATA SSD.
Not cheap in the sense of being inexpensive; cheap in the sense of not being well made. Maybe I just got a dud, but the one I bought was not much better than a USB stick and didn't last long enough for me to feel it was worth the money. If I am spending my money, I would rather buy a used SSD than a new SATA DOM. Even the new server I am working through the procurement process for at work is going to have mirrored SSDs for the boot volume instead of a mirrored pair of SATA DOMs. As little wear as those SSDs will get, they will probably still be good for another 6 years when the server is decommissioned, and that will help me sleep at night.
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
Maybe I just got a dud, but the one I bought was not much better than a USB stick and didn't last long enough for me to feel it was worth the money.

The endurance of modern SSDs makes them the obvious and economical choice nowadays. Having said that, I wouldn't discount SATA DOMs entirely based on your past experience. Like you said, the particular SATA DOM you bought was likely the culprit in your poor experience.

The Supermicro offerings are neither cheap nor "not well made". I think 17 TBW of endurance on a 16GB part should provide solid reliability for a NAS boot device.

https://www.supermicro.com/datasheet/datasheet_SuperDOM.pdf
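
Back-of-the-envelope on that datasheet number (the device path below is hypothetical, and SMART attribute names vary by vendor):

Code:
# 17 TBW on a 16 GB part is over a thousand full-device overwrites:
#   17,000 GB / 16 GB ~= 1,060 complete writes of the whole DOM.
# Actual wear can be checked with smartmontools:
smartctl -A /dev/ada0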
 

oni.kage

Cadet
Joined
Feb 16, 2018
Messages
7
Time for OP to deliver:


[Photo: shipment of new hard drives undergoing inspection]

[Photo: dual E5-2637 v4, 256 GB RAM, Broadcom 3008, Optane 900P SLOG, Intel X520-DA2 NIC]

[Photo: 2x 960GB SSDs in rear hot-swap bays for L2ARC. Not pictured: 120GB OS SSD stuck to the side of the PSU bay with mounting tape.]
 