Opinions on a high-performance system

Status
Not open for further replies.

Nigel Dunning

Dabbler
Joined
May 1, 2015
Messages
12
Hi

I've built a few FreeNAS systems, but most of them were for nearline or archive purposes, where their only requirement was receiving a bunch of rsyncs, so they were just loads of JBODs with 9-disk vdevs. This is the first one I'm building where the use case is high-performance tier 1 storage.

I'd like some people's opinions, as I've received advice that conflicts with what I've read here.

My initial plan is a Supermicro server with 24 2.5-inch drive bays, an LSI 9207-8e HBA, a 60-bay JBOD, 1 TB of RAM, and 50 x 8 TB SAS drives in mirrored vdevs, for 200 TB of usable space.
The server will be the storage appliance for large particle/fluid/fire rendered simulations, which generate very heavy writes, but there's also a lot of reading of assets before a simulation runs.
With this configuration and the large amount of ARC, should I even have a ZIL/SLOG? I was thinking of getting some SSDs and only adding the SLOG later if I see that it would be used.
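
A minimal sketch of that layout, assuming a pool named tank and placeholder device names (da0-da3 for the SAS drives, nvd0/nvd1 for NVMe); the real pool would just repeat the mirror pairs out to 25:

    # Create a pool of mirrored vdevs (two pairs shown; repeat out to 25)
    zpool create tank \
        mirror da0 da1 \
        mirror da2 da3

    # An SLOG can be added later without rebuilding the pool, ideally
    # mirrored so a log-device failure can't lose in-flight sync writes
    zpool add tank log mirror nvd0 nvd1

Log vdevs can also be removed again with zpool remove, so adding one later and measuring is a low-risk experiment.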

Today I was given alternative advice from a vendor: get 512 GB of RAM and two NVMe PCIe flash cards for the ZIL/SLOG. However, his advice was to split them into several smaller partitions and create several SLOGs.
I've personally never heard of this, and it sounds like bad advice to me.

Even without the partitioning advice, would I get any benefit from using less RAM and having a flash ZIL/SLOG?
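
One way to answer that empirically before spending money: check whether the pool actually receives sync writes, since those are the only writes an SLOG accelerates. A rough sketch, assuming a pool named tank (zilstat is the DTrace-based script FreeNAS ships; zpool iostat is standard ZFS):

    # ZIL activity per second; all zeros means an SLOG would sit idle
    zilstat 1

    # Overall pool I/O, sampled every second
    zpool iostat -v tank 1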


Thanks
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
However, his advice was to split them into several smaller partitions and create several SLOGs.
No!
I've personally never heard of this, and it sounds like bad advice to me.
Good call, that's because it is.
Even without the partitioning advice, would I get any benefit from using less RAM and having a flash ZIL/SLOG?
More RAM is always better. The SLOG has nothing to do with RAM (performance-wise), and it's only going to be useful for sync writes. What kind of share will you be using?
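
For context on why the share type matters, a sketch assuming a hypothetical dataset named tank/sims: NFS and iSCSI clients typically request sync writes, which is exactly where an SLOG helps, while SMB traffic is mostly async. The per-dataset sync property shows and controls this behavior:

    # Check how sync requests are currently handled
    zfs get sync tank/sims

    # sync=standard (default): honor client sync requests; SLOG used for those
    # sync=always: treat every write as sync (heaviest SLOG use)
    # sync=disabled: never sync; fast, but in-flight writes can be lost on power failure
    zfs set sync=standard tank/sims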
 