recommendation on my current hardware

Guentha

Cadet
Joined
Mar 3, 2024
Messages
6
First TrueNAS build. I've been using hyper-converged Storage Spaces Direct (S2D) with a cluster for years, but it crashes about twice a year, the tools to recover are few and far between, and knowledge is also sparse, so it's a pucker moment every time the pool goes offline.

here is my new build

dell R820 4x e5-4657L
768GB Ram
4x 56Gb InfiniBand connections (2 switches for redundancy)
LSI 9300-8i (boot, SLOG, and metadata)
LSI 9305-16e (cache and spinning rust)
HGST 4U60 G2 DAS

For drives I have basically unlimited 12TB Exos drives, but I only need about 30TB of space (I don't want to pay for power if I am not using them). I have a second server with a little over 100TB available that I will be using as a backup, so I am shooting for around 100TB of bulk storage. Currently using 24 drives in 11 mirrored vdevs with 2 spares.
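For reference, a rough capacity sketch of that layout (assuming 11 two-way mirrors of 12 TB decimal-rated drives; real usable space will be somewhat lower after metadata and the usual free-space headroom):

```python
# Rough usable-capacity estimate for a pool of two-way mirrored vdevs.
# Assumptions: 11 mirror vdevs, 12 TB (decimal) drives, ~80% practical fill.
vdevs = 11
drive_tb = 12
raw_usable_tb = vdevs * drive_tb      # each mirror contributes one drive's capacity
practical_tb = raw_usable_tb * 0.8    # keep ~20% free for healthy ZFS performance
print(raw_usable_tb)                  # 132
print(round(practical_tb, 1))         # 105.6
```

So the 11-mirror layout comfortably covers the ~100TB target even with headroom left free.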

For SLOG I have tons of 200GB Samsung industrial drives with around 100K IOPS, but it looks like I can only mirror or stripe (can I do a mirrored stripe, like 4 drives in 2 mirrors, for more speed?). Currently 2 disks, mirrored, for SLOG. Doing the same for metadata (not sure if I need to offload it or not, but I have the drives).

For L2ARC I have 20 1.6TB HGST enterprise drives striped, giving me a little over 28TB of cache on top of my 700GB+ of RAM. Basically, as it sits, all my data can fit in cache even though only about 100GB is hot data.

Any thoughts before I put this into production? Just playing now. I am down to change out some things, but I don't want to spend a fortune. All this hardware comes to me free from a local college, so I am really only about $1K into it right now for cables, HBAs, and such.
 

DigitalMinimalist

Contributor
Joined
Jul 24, 2022
Messages
162
768GB RAM + SLOG + 28TB L2ARC sounds a little over the top, but it looks like a fun server - though not really a power-saving one.

Drives = all HDDs, or are some of your listed drives SSDs?
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Do you actually need a SLOG (any sync writes?) and L2ARC (with 0.75 TB RAM)?
Do you understand what they do, and what the requirements are?
 

Guentha

Cadet
Joined
Mar 3, 2024
Messages
6
768GB RAM + SLOG + 28TB L2ARC sounds a little over the top, but it looks like a fun server - though not really a power-saving one.

Drives = all HDDs, or are some of your listed drives SSDs?
I know it's over the top... turns out I am a bit over the top. The thing is, it's free hardware. They also gave me an InfiniBox rack with 2 controllers and 3.5PB worth of spinning rust. I could just fire that up... but it's also overkill and less fun. This server is part necessity, as I am tired of worrying about S2D failing, and part education. I work as a tech consultant and I like to get my hands dirty. Most of our clients operate with something like a simple Windows file server or maybe a hybrid NAS/SharePoint setup. I just like to know all the options and understand how it works by using it myself. Basically an education project.

The Exos 12TB are HDDs; the Samsung SLOG drives are high-endurance enterprise SAS SSDs, and the cache drives are HGST 1.6TB enterprise SAS SSDs.
 

Guentha

Cadet
Joined
Mar 3, 2024
Messages
6
Do you actually need a SLOG (any sync writes?) and L2ARC (with 0.75 TB RAM)?
Do you understand what they do, and what the requirements are?
I don't know. I come from the Windows world of storage. I am exploring my options.

If you are asking about loads, this feeds 4 IBM X3850 X6 virtualization servers running Hyper-V. They each have 10-15 VMs, but most are not IO intensive. There are a ton of phone servers, and they basically just sit unless someone makes a call, and even then it's just log writes. This is more for speed and flexibility. If a customer needs something, we like to be able to spin up a VM and get them going while we figure out the best course of action. Or maybe we will build a whole test network and VM clients to test a problem or a solution before buying any hardware. Plus, sometimes our guys just play around on it and test ideas.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
I don't know. I come from the Windows world of storage. I am exploring my options.
OK, so you have a lot of reading ahead about ZFS.

In short:
A SLOG is NOT a general-purpose write cache: it's only used for sync writes (which your VMs may require), and it will hold at most two transaction groups' worth of writes, i.e. about 10 seconds; beyond that, all writes have to go to the pool. So the requirements are high endurance, power-loss protection, and low write latency, but it may be small. "Tons of" 200 GB drives is NOT helpful here.
If you want maximal write performance, turn off sync writes: you'll be an order of magnitude better than sync writes, even with the best Optane DC P5800X as SLOG.
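As a rough sizing sketch (assuming the default ~5-second transaction group interval and a single 56 Gb/s link as the absolute ingest ceiling; actual sync-write rates will be far lower):

```python
# Back-of-the-envelope SLOG sizing: the SLOG only ever holds the sync writes
# of the in-flight transaction groups, so size it for ~10 s of maximum ingest.
link_gbps = 56                       # one InfiniBand link, in gigabits per second
ingest_gbytes_per_s = link_gbps / 8  # = 7 GB/s theoretical line-rate maximum
seconds_buffered = 10                # ~2 transaction groups at the default ~5 s interval
slog_needed_gb = ingest_gbytes_per_s * seconds_buffered
print(slog_needed_gb)                # 70.0 -> one 200 GB drive already has ample space
```

Which is why capacity is never the constraint for a SLOG: latency and endurance are.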

L2ARC is only indicated if the primary cache (ARC, i.e. RAM) is insufficient. But managing an L2ARC requires RAM, which can evict data from ARC and make performance worse. The recommended L2ARC:RAM ratio is 5:1 to at most 10:1.
As you describe your workload, the 100 GB of hot data would reside in RAM and an L2ARC would go unused.
But if an L2ARC were useful, you'd want something like 3 TB of RAM to manage a monster 28 TB L2ARC, at which point you may as well go full NVMe and forget about the L2ARC.
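To put numbers on that ratio (assuming the commonly cited 5:1 to 10:1 L2ARC:RAM guidance):

```python
# L2ARC:RAM ratio check for the proposed build.
l2arc_tb = 28.0
ram_tb = 0.768
ratio = l2arc_tb / ram_tb
print(round(ratio, 1))          # 36.5 -> well past the ~10:1 upper guideline
ram_needed_tb = l2arc_tb / 10   # RAM needed to stay within a 10:1 ratio
print(round(ram_needed_tb, 2))  # 2.8 -> roughly the "3 TB RAM" figure above
```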
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
*admiring whistle*

That's a lot of big chunky iron there. Can you share the model numbers of your Samsung 200G and HGST 1.6T drives?

Given the relatively small footprint of your existing data and the tiered nature, I'd almost suggest that you're best off making two separate pools - one of pure SSDs made from the 1.6T HGST drives, and then use the HDDs in a pool designed for your "slower tier" of data.
 

Guentha

Cadet
Joined
Mar 3, 2024
Messages
6
A SLOG is NOT a general-purpose write cache. "Tons of" 200 GB drives is NOT helpful here.
I was not going for more space but more IOPS. These Samsung drives (MZ6ER200HAGM) are high endurance but only 100K IOPS. I was not sure if I could add more drives to speed things up in case the SLOG is needed. I have the drives, and the space and power requirements are low, so I thought I would toss them in. Also, I'm learning, so having a SLOG will help me determine for future builds if I need one or what drives to use. I could always go NVMe, but battery-backed NVMe drives are still pretty spendy.


L2ARC is only indicated if the primary cache
From the reading I have been doing, this is mostly true, but it looks like (correct me if I am wrong) the L2ARC will hold hot data. Since all my data can fit in L2ARC, I will essentially do 100% of reads from L2ARC, leaving the spinners to just do writes for the most part, which makes them way more efficient.

Can you share the model numbers of your Samsung 200G and HGST 1.6T drives
Samsung drives MZ6ER200HAGM 200GB
HGST HUSMR1616ASS200 1.6TB
Seagate exos x14 12TB

I'd almost suggest that you're best off making two separate pools
This is how our current setup is, and I thought with the way L2ARC works I could get the best of both worlds. I lose the space of the flash pool but gain the ability to let the system automatically handle hot data. I get 100TB of space that all feels like SSD. Since so much is cold data, it would be nice not to have to add 2 hard drives to each VM.

S2D pitched this approach as well, but it never worked; our IOPS tanked.

may as well go full NVMe and forget about L2ARC
Would love to go full NVMe, but this is the hardware I was given. The college upgraded to all-flash from Nutanix and I got all their old stuff. Technically the second-oldest stuff, as the guy I get it from takes what he wants and I get the rest.
 

DigitalMinimalist

Contributor
Joined
Jul 24, 2022
Messages
162
If you „only“ need 30TB: take your 20x 1.6TB SSDs for one fast pool.

If you need more space: striped mirrors, 2- or 3-wide, with the HDDs.

I would even consider having a 24/7, power-efficient server

+ one backup server, switched on „on-demand“, where you throw in all the free stuff
 

Guentha

Cadet
Joined
Mar 3, 2024
Messages
6
If you „only“ need 30TB: take your 20x 1.6TB SSDs for one fast pool.

If you need more space: striped mirrors, 2- or 3-wide, with the HDDs.

I would even consider having a 24/7, power-efficient server

+ one backup server, switched on „on-demand“, where you throw in all the free stuff

Even at Z2, 20x 1.6TB would only be ~25TB. Not enough for my current dataset.
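A quick check of that math (raidz2 gives up two drives' worth of capacity to parity; real usable space is a bit lower still after padding and metadata):

```python
# Usable space of 20x 1.6 TB SSDs in a single raidz2 vdev (rough estimate).
drives = 20
drive_tb = 1.6
parity = 2
raw_usable_tb = (drives - parity) * drive_tb  # capacity before ZFS overhead
print(round(raw_usable_tb, 1))                # 28.8
# After raidz padding and metadata overhead, ~25 TB usable is a realistic figure.
```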

My reading has led me to believe that the L2ARC should fill itself with hot data and serve essentially all reads, since all my data fits inside it. L2ARC can (must) be striped, giving it massive performance, way more than my single 48Gb link to my SAS card can handle. So wouldn't putting the L2ARC in front of my spinning rust be a better solution? Am I missing something about L2ARC?

I have been playing around, and so far my best performance has come from 768GB RAM at 1333 vs 512GB at 1600. 20x 1.6TB SAS SSDs in a stripe for L2ARC and 22x 12TB 7200RPM drives in 11 mirrored vdevs with 2 hot spares, for 24 drives total. I might move that to 36 drives, but I haven't tested if I really need it.

When I mentioned power efficiency in previous posts I should clarify: I don't pay for power in the building that I am in. However, I have a weird fight. I have 2 120V 20-amp circuits and a 30-amp 240V. The problem is electricians keep tapping into my circuits, and in the winter people put 1500-watt heaters under their desks and blow circuits, sometimes mine. The 240V 30-amp is actually for the AC in the server room, but they gave me an outlet to steal from it, as it doesn't take that much power. So really I am looking for the lowest amperage rather than cost savings.
 

Guentha

Cadet
Joined
Mar 3, 2024
Messages
6
Finally got it all working and it's slower than molasses. My 12-drive RAID 6 is kicking its butt.

Only using 2 40Gb links at the moment, but I was expecting to get at least 2GB/s out of it. Tested with SMB: 700/600, and iSCSI: 94/108.

I am using the Dragonfish BETA (I know, not for production), just playing around.

Screenshot 2024-03-20 085720.png
 