Hello from an Algo Developer

Status
Not open for further replies.

ralph5909

Cadet
Joined
Dec 6, 2018
Messages
4
Greetings,
I've recently dived into FreeNAS, since it makes managing any kind of storage data incredibly easy, hands down.
I've been playing around with FreeNAS for about a month and have built the following:

In all of my FreeNAS servers I have a second PSU running just the drives, always on, and connected to a separate AVR UPS. This lets the drives stay powered through server reboots and shutdowns, so I can debug the server without constantly power-cycling the drives, which helps extend their lifespan. It also fixed an issue where the motherboard wasn't detecting 15+ drives.

Two x5 Z2 vdevs striped in one pool (with 1 cold spare for any failing drive)
These run iSCSI network OS drives for all my servers, including the RPi 3 Model B+ compute clusters (I'm currently testing iPXE + iSCSI to netboot Windows IoT/Raspbian).
I've used a mixture of 2TB and 3TB drives (x6 2TB, x4 3TB) and just ate the loss of 4TB of space. Some of these drives have over 40k hours, so I keep a cold spare ready. (I also don't have enough SATA ports, lol.)
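The 4TB figure from mixing drive sizes checks out with some quick napkin math. A sketch, assuming two 5-wide Z2 vdevs where each vdev truncates every member to its smallest drive; real ZFS metadata/padding overhead is ignored:

```python
# Rough usable capacity of RAIDZ2 vdevs with mixed drive sizes.
# Each vdev is limited by its smallest member, and Z2 spends two
# drives' worth of capacity on parity.

def raidz2_usable_tb(drive_sizes_tb):
    """Approximate usable TB of one RAIDZ2 vdev (ignores ZFS overhead)."""
    smallest = min(drive_sizes_tb)
    return smallest * (len(drive_sizes_tb) - 2)

# Two 5-wide Z2 vdevs built from the post's x6 2TB and x4 3TB drives:
vdev_a = [2, 2, 2, 3, 3]
vdev_b = [2, 2, 2, 3, 3]

usable = raidz2_usable_tb(vdev_a) + raidz2_usable_tb(vdev_b)
raw = sum(vdev_a) + sum(vdev_b)
wasted_by_mixing = sum(s - min(vdev_a) for s in vdev_a) \
                 + sum(s - min(vdev_b) for s in vdev_b)

print(usable, raw, wasted_by_mixing)  # 12 24 4
```

So 24TB raw becomes 12TB usable, with 4TB lost purely to the 3TB drives being truncated to 2TB, matching the loss mentioned above.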

x4 Z2 8TB Array w/ 256GB m.2 NVMe Cache
I tested these over USB 3.0 and shucked them from their enclosures. They're WD Reds plagued by the 3.3V problem, easily fixed with a special SATA power splitter.
I was going to do mirrored vdevs, but repairing a degraded pool would take at least 30 hours, maybe more in a production environment, and the extra stress during the resilver could lead to another failure (these were all bought at the same time). Not going to risk it, since everything else in my algorithm refers to this dataset.
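The 30-hour figure is easy to reproduce with back-of-the-envelope math. A sketch, where the ~70 MB/s sustained rate is an assumed figure for a loaded pool, not a measurement from this system:

```python
# Rough resilver-time estimate: time = data to rewrite / sustained rate.
# Assumes the resilver has to rewrite roughly a full 8TB drive; the
# 70 MB/s sustained rate is an assumption, not a benchmark.

def resilver_hours(drive_tb, rate_mb_s):
    """Hours to rewrite drive_tb terabytes at rate_mb_s megabytes/sec."""
    return drive_tb * 1e12 / (rate_mb_s * 1e6) / 3600

print(round(resilver_hours(8, 70), 1))  # 31.7
```

Around 31.7 hours at that rate, in line with the "at least 30 hours" estimate; a busier pool or slower sustained rate pushes it higher.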

The trade-off I had to weigh was easier expansion (mirrors) vs. the risk of two drives from the same mirror failing. Given the added redundancy, I figured I'd drop some cash to add another x4 Z2 vdev to this pool. I'm hoping to expand to x6 Z3 vdevs eventually though, just to be semi-paranoid.

x14 Z2 256GB SSD Array
I got these real cheap, but with a 3-year warranty. I ran a few of them through AS SSD Benchmark and each scored at least 1000. I have around 2.5TB of usable space on this array, which will be used as a cache for casually manipulating data.
Each drive (I did some calculation work but can't find the exact numbers) is around 1/4 the performance of an NVMe drive but at half the cost per GB (maybe more than 50%, since I was seeing 1TB deals for ~$160 where 256GB was ~$40). Anyway, SATA SSDs and NVMe drives have been competitively priced, and it took a little analysis to figure out which my setup needed.

My reasons for not building an NVMe array:
Not hot-swappable using *cough* consumer-grade hardware in a production environment.
Harder expansion (SATA only needs AHCI controllers or port multipliers).
No special cooling needed for SATA SSDs, whereas NVMe drives get extremely hot and then simply throttle. I usually set up a dedicated case fan to cool NVMe drives.
No need for extra PCIe x4 adapters or other expensive hardware.
Didn't need the extra speed: x14 SSDs will easily saturate my x2 10Gbit connection.
Just more expensive, since SATA SSDs have been around longer.
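That speed point is easy to sanity-check. A sketch, assuming ~500 MB/s per SATA SSD (a typical SATA III sequential figure, not a measurement of these particular drives):

```python
# Compare aggregate SATA SSD bandwidth against a 2x10Gbit network link.
# 500 MB/s per drive is an assumed typical SATA III sequential rate.

SSD_COUNT = 14
PER_SSD_MB_S = 500            # assumed, not measured
NIC_MB_S = 2 * 10_000 / 8     # 2x 10Gbit/s -> 2500 MB/s

array_mb_s = SSD_COUNT * PER_SSD_MB_S
print(array_mb_s, NIC_MB_S, array_mb_s > NIC_MB_S)  # 7000 2500.0 True
```

Even with generous assumptions the network caps out at ~2.5 GB/s while the array can push ~7 GB/s, so NVMe speed would just be stranded bandwidth here.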

FreeNAS runs as the OS for the back end of my algorithm; it's the perfect tool for managing storage across my entire dataset.

Feel free to reply! And it's nice to meet the community!
 