Recommended Configuration

MS2021

I've never really used any kind of network-attached storage; it has never been something I've needed, but as my little shed has grown up, it's slowly become relevant.

I did the research, or at least I think I have (we're about to find out). What I'm after is: how would you configure it?

Hardware
36-bay Supermicro, sporting an X9DRi-LN4F+ motherboard, 2 x E5-2660 (8 cores / 16 threads each), fed with 24 x 4GB for 96GB of RAM, and finally a dual 10 Gig NIC

Network
48 Port Catalyst 10 Gig Switch. Supports just about everything

Drives
6 x Crucial write-intensive SSDs
8 x 10TB IronWolf
8 x 1TB Toshiba enterprise jobs
8 x 2TB Seagate something-garbage (they were cheap as chips)

Controllers
Dual LSI controllers: one for the front 24, one for the rear 12. I don't have the models to hand, but they are flashed to IT mode.

It's gonna run a mixed workload, and I need different performance for different things. It's got to feed 6 x R320 ESXi servers running all kinds of things, for example:

Plex - doesn't need anything like fast disks, just a lot of space
TFS (leave me alone) - doesn't need a lot of space, but high performance is preferable
ESXi running Linux and Windows servers - half-decent performance and a fair amount of space are preferable

Everything in my house is hooked up to the network: all the sockets, the lights, you name it. The servers at this point run the house, and I no longer want to run the risk of losing a server and literally crashing the house. So if you could help me out with how you would configure it disk-wise, etc. Should I virtualize the install and pass the disks through?

And finally, all the hosts have a 10 gig adapter, and the disks will be presented to ESXi over iSCSI.

Thanks
 

NugentS

I assume the Tosh 1TB are SSDs; if not, throw them away - not worth the power to run them. I am not entirely convinced by the 2TB "garbage" drives, but they can be a scratch pool for playing with. Maybe a target for replicated snapshots from the SSDs.

For bulk (Plex): 8 x 10TB in RAIDZ2/3 and 8 x 2TB in RAIDZ2/3. You could add the Seagates as a second vdev to the main bulk pool, but as you describe them as garbage, don't.

For performance (1): 6 x Crucial in 3 mirrored vdevs.
For performance (2): 8 x Toshiba in 4 mirrored vdevs, if they are SSDs. Again, you could make them one pool with the Crucials, as they should all be good disks.
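
For illustration, here's a rough sketch of those layouts at the command line. The pool and device names (bulk, fast, da0 and so on) are placeholders, not your actual devices - check yours in the TrueNAS UI before creating anything.

# Bulk pool: one 8-wide RAIDZ2 vdev of the 10TB IronWolfs.
zpool create bulk raidz2 da0 da1 da2 da3 da4 da5 da6 da7

# Performance pool: three mirrored vdevs of the Crucial SSDs.
zpool create fast mirror da8 da9 mirror da10 da11 mirror da12 da13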

Purchase at least one Optane 900P, U.2 or PCIe (preferably two), and run them as mirrored SLOGs on the SSD pool(s). It's very possible to split an Optane into multiple partitions and SLOG more than one pool. Run the SSD pools with sync=always.
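
Roughly speaking (nvd0/nvd1 are just example names; on SCALE they'll show up as nvme0n1 and the like), attaching the pair as a mirrored SLOG and forcing sync writes looks like this:

# Attach both Optanes as a mirrored log vdev on the SSD pool.
zpool add fast log mirror nvd0 nvd1
# Make sure every write the VMs think is committed really is committed.
zfs set sync=always fast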

Preferably run two NICs on the NAS and ESXi hosts: one for iSCSI and one for everything else (I even have a dedicated 10Gb switch for iSCSI).

There is my 2p's worth
 

MS2021

My apologies, the Toshiba drives are indeed SSDs; they've been in the cupboard for a good few years, and I don't think I ever got around to using them.

The Seagate garbage, as I put it, was cheap; they're all brand-new 7200rpm, 64MB-cache SATA drives. I think I paid £17 each for them before I decided to go this route; they were gonna be slapped in a cheap chassis and used for long-term archive-type storage :)

Never used Optane, but I do have a Gen 3 NVMe adapter for two NVMe disks. I don't have any disks, but they seem cheap enough; any suggestion as to how to size them?

And a no doubt stupid question: why so many vdevs? What are the advantages of that? I was going to just assign all the SSDs to one, the IronWolfs to another, the Seagates to a third, etc.
 

NugentS

More vdevs = better performance.
Each vdev contributes one disk's worth of IOPS.
VMware needs IOPS; Plex does not.
Therefore Plex on RAIDZ = one disk's worth of IOPS, while VMware on multiple mirrors = lots of IOPS.
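
If you want to see that for yourself, a quick 4k random-write run with fio against a dataset on each pool makes the gap obvious. This is just a sketch - the directory path is a placeholder, and the ioengine may need to be libaio on a Linux box:

# Simulate a busy VM-style workload: 4k random writes, queue depth 32.
fio --name=vm-sim --directory=/mnt/fast/fiotest \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 \
    --ioengine=posixaio --size=2G --time_based --runtime=60 \
    --group_reporting

Run the same job against the RAIDZ pool and compare the IOPS figures.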

"Never use optain, but i do have a Gen 3 NVMe adapter for 2 NVMe disks, i don't have any disks but they seem cheap enough, any suggestion as to size them?"
I could take that statement multiple different ways. However, I said Optane for a reason. The 900P comes in two flavours: U.2 and a single PCIe x4 card. A Gen 3 NVMe adapter will not do (and neither will any likely SSD it holds). Optane does not come in M.2 (at least the affordable, useful ones don't). You will need either two PCIe cards, two PCIe-to-U.2 adapters, or a dual U.2 card (and a motherboard that supports bifurcation). You seem to have plenty of lanes available with two CPUs.
Example 1
Example 2
PCIe Optane 900p
U.2 Optane 900p
Dual U.2 adapter (Cabled) - but watch the cable types

Your maximum iSCSI throughput is likely to be 10Gb? A SLOG needs to hold about 5 seconds of maximum throughput. 10Gb/s = 1.25GB/s, so 5 seconds = 6.25GB. An Optane 900P is 280GB. I use 20GB partitions for safety - the rest is wasted (or, if you are feeling very energetic, you could overprovision the Optane down to 50GB). Ideally these are mirrored.
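
As a sketch (CORE/FreeBSD commands, placeholder device names - on SCALE you'd use sgdisk and nvme0n1-style names instead), carving each Optane into 20GB partitions and mirroring them across two pools looks like:

gpart create -s gpt nvd0
gpart add -t freebsd-zfs -s 20G nvd0     # becomes nvd0p1
gpart add -t freebsd-zfs -s 20G nvd0     # becomes nvd0p2
# Repeat on the second Optane (nvd1), then mirror partition-for-partition:
zpool add ssdpool1 log mirror nvd0p1 nvd1p1
zpool add ssdpool2 log mirror nvd0p2 nvd1p2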

BTW - the reason for Optane is:
1. Very fast, even in comparison to normal SSD / NVMe drives
2. Very low latency, even at queue depth 1
3. Retains data when it loses power (vital). The D4800X is better here - but expensive: Sample 4800X
4. Massive write endurance, as everything that is written to the iSCSI pool is written to the SLOG. The SLOG is never read from (in a steady state). Decent Optanes have a write endurance that is an order of magnitude better than "normal" SSDs.

Using a standard SSD will not improve things much (as it's no faster than the SSDs it's in front of), it will wear out fairly rapidly, and it likely won't have any form of PLP (power-loss protection). DC (datacentre) drives will, but will be no faster than the pool they are in front of.

Urgent - if those Seagates are SMR (shingled) then really throw them away (or put them somewhere else) - do not use them for TrueNAS / ZFS. Check this very carefully!!!
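
A quick way to check: pull the model number with smartctl and look it up against Seagate's published CMR/SMR lists - most drive-managed SMR disks won't admit to it directly (device name below is a placeholder):

smartctl -i /dev/da5                    # note the Device Model line
smartctl -i /dev/da5 | grep -i zoned    # some SMR drives also report a "Zoned Device" line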

Why use a SLOG? Some resources below
SLOG Benchmarking - and finding the best SLOG
A bit about SSD performance and Optane SSDs, when you're planning your next SSD....
ZIL & SLOG - written by @jgreco who really knows what he is talking about (as do others BTW)
NFS and iSCSI like to use sync writes. This means that data written to the NAS has to be written to permanent storage (not memory). To quote @Arwen:

Note that the ZIL is 100% in pool on data disks, not in RAM. SLOG changes that to a separate device, though attached to the pool. As for how much space a ZIL needs off the data disks, not very much. A few gigabytes or so. But it's not permanent storage. Writes to the ZIL are initially built in RAM, (of course), as a ZFS transaction. However, until that transaction group is fully written, the synchronous write does not return as complete, (unless "sync=disabled"). The RAM entries stay in place until the regular data location can be written. When that is done, the RAM is freed and the ZIL entry for that transaction group is also freed. Thus, the concept that a ZIL, (and SLOG), are write-only unless there is an ungraceful shutdown.
You can use async writes, which are faster but put your data at risk. More importantly, they put your entire VMDK at risk in the event of a power outage.
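
On the zvol backing the iSCSI extent (the name below is made up), it's just a dataset property, so it's easy to check which mode you are in:

zfs get sync tank/iscsi/esxi-lun0         # standard | always | disabled
zfs set sync=always tank/iscsi/esxi-lun0  # honour every sync write
# sync=disabled acknowledges writes before they reach stable storage:
# fast, but a power cut can take the whole VMDK with it.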
 

MS2021

Thanks for the replies. They're actually cheaper than I assumed as well; I will spend a bit more time looking at them, and some more time on experimentation. Clearly I have a lot to learn on the storage front.

Although no problem with SMR on the bloody drives; I read about the kerfuffle with FreeNAS a while back, but given some of my workloads I had already come across them and their DISMAL performance. Try running a code repository on one with multiple users pushing and pulling; they are an awful technology.

I don't expect any workload at any time to need a full 10 gig of throughput; it's just that in this instance I'm wired for 10 gig in the shed, as my broadband upgrade forced me into it.

If anyone else has thoughts, please feel free; I value the advice and the reading material :) I should have paid more attention to storage over the years.
 

MS2021

I thought I'd offer an update after several days of getting confused and the rest of it.

I scrapped this do-it-myself approach; I'll sell off a lot of my excess kit and such and just order a proper solution online. Looking at the costs of SAN and FC these days, they really ain't that bad, and I have enough old kit here to more than cover it.

Thanks for the help though folks
 

NugentS

iXsystems do good kit, but it's hard to get in the UK.
If buying an off-the-shelf setup, be very careful that:
1. The NIC is Intel / Chelsio hardware
2. It uses an HBA, not a RAID card
 