I don't see many cases for a large amount of hot-swap 3.5"

cdoublejj

Cadet
Joined
Apr 6, 2022
Messages
2
I've found one Norco case, if it doesn't disappear before I can afford it. It has 24 bays in the front. I figured some seasoned vets here could suggest something, or maybe talk me out of a larger array of used drives in favor of a few large drives. Cost is what got me into Unraid, and spending is what got me into my faster (but not super fast) all-SSD Unraid array.

Also, can I reuse my SSDs in smaller arrays as iSCSI targets for some Steam games?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The Norco cases are well known for turning into EZ-bake ovens when people try to "make them quiet" and aren't that good even when they don't.

5U options for 48 or more drives have been around for years, possibly starting with the AIC RMC5D2, popularized by the 45Drives Storinator series, ultimately working towards the disk-shelf-JBOD 4U 90 drive Supermicro SC946ED-R2KJBOD.

In general, ZFS does not benefit as much from having more drives as it does from having larger drives, more RAM, and some L2ARC for cache, though, so if I were looking for a 100TB array, I'd probably be more interested in 8 x 14TB HDD with 256GB RAM and 1TB L2ARC than I would in a 90 drive JBOD with lots of 1TB HDD's. There will be some places where the 90x1TB win out, but also lots more to go wrong.
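
Back-of-the-envelope, the trade looks something like this (a quick Python sketch; the ~200 IOPS per 7200 rpm spindle is my round-number assumption, not a spec):

```python
# Back-of-the-envelope comparison of the two builds.
# Assumption (mine, not a vendor spec): ~200 random IOPS per
# 7200 rpm HDD; capacities are raw, before any redundancy.

IOPS_PER_HDD = 200

builds = {
    "8 x 14TB + 256GB RAM + 1TB L2ARC": (8, 14),
    "90 x 1TB JBOD shelf": (90, 1),
}

for name, (drives, tb_each) in builds.items():
    print(f"{name}: {drives * tb_each} TB raw, "
          f"~{drives * IOPS_PER_HDD} spindle IOPS")

# ~112 TB / 1,600 IOPS vs 90 TB / 18,000 IOPS on paper -- but reads
# served from ARC or L2ARC on the small build never touch the spindles.
```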
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
In general, ZFS does not benefit as much from having more drives [...]
Thanks for setting me straight. I was still reciting the "more spindles, more IOPS" mantra in my head.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Thanks for setting me straight. I was still reciting the "more spindles, more IOPS" mantra in my head.

Smartass. You know it's complicated; it will work out differently depending on the specifics. However, the differential between the 8-drive solution I suggested and the 90-drive shelf is only about a factor of 11x in spindle IOPS. An 8-bay HDD chassis on eBay is only $200, and 8x 14TB shuckables might only bring that to around $2,500, so adding 256GB RAM and 1TB of L2ARC can probably net you ~100TB of raw storage for about $3,000-$3,500 done my way. The read IOPS of stuff coming out of ARC or L2ARC will blow away the mere 11x multiplier of the 90-drive shelf.

The 90-drive shelf, on the other hand, is $7,500 -- by itself -- with no server and no drives -- and even if you go for the cheapest $200 server, how much will 90 1TB drives cost? Even at just $20/drive, that works out to $1,800 for drives. The cost per TB of raw storage on the shucked drives is a dominating factor.
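
Putting those figures into a quick sketch (prices are snapshots from eBay and shucking deals, so treat them as illustrative):

```python
# Rough cost per raw TB, using the numbers quoted above.

chassis_plus_drives = 2500          # $200 chassis + 8 x 14TB shucked
ram_and_l2arc = 1000                # 256GB RAM + 1TB L2ARC, upper estimate
small_total = chassis_plus_drives + ram_and_l2arc   # ~$3,500 all-in
small_tb = 8 * 14                   # 112 TB raw

shelf, server, used_drives = 7500, 200, 90 * 20     # $20/drive, used
big_total = shelf + server + used_drives            # $9,500
big_tb = 90 * 1                     # 90 TB raw

print(f"8 x 14TB : ${small_total} -> ${small_total / small_tb:.0f}/TB raw")
print(f"90 x 1TB : ${big_total} -> ${big_total / big_tb:.0f}/TB raw")
# roughly $31/TB vs $106/TB -- the shucked big drives win on cost alone
```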

And that's the other thing: if you quadruple the amount of free space on a pool, write speeds improve enormously. So you can maybe get 4x 1TB HDD ~= 800 read/write IOPS (generous) versus 1x 12TB HDD at 150 IOPS, but for the same amount of space used (let's say 3TB), the 12TB drive will only be at 1/4 capacity, while the 4x 1TB will be at about 75% capacity. So then we consider what the steady-state throughput is likely to look like.

[Image: delphix-steady-state.png -- the Delphix graph of steady-state write throughput versus pool fullness]


But we have to allow for the fact that the 4x 1TB drives have about 3-4x the IOPS capacity. From my reading of "the graph", this suggests the 4 drives may offer about the same write throughput (750 KB/sec * 4 ~= 3,000 KB/sec) as the 12TB (2,500 KB/sec * 1). This of course assumes a model where you probably have more than the 8x 14TB HDDs I was talking about, but the point is that you can probably still do this a lot more cheaply than the brute-force "craptons of drives to get an order of magnitude more IOPS" methodology.
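
Here's the same steady-state argument worked through in a few lines; the per-drive throughput numbers are eyeballed off the Delphix graph, not measurements:

```python
# Same 3TB of data, two pool layouts; per-drive steady-state write
# throughput is read off "the graph" and is approximate.

data_tb = 3

fullness_4x1 = data_tb / (4 * 1)     # 0.75 -> 75% full
fullness_1x12 = data_tb / 12         # 0.25 -> 25% full

kbps_per_drive_at_75 = 750           # eyeballed from the graph
kbps_per_drive_at_25 = 2500

print(f"4 x 1TB  at {fullness_4x1:.0%} full: ~{4 * kbps_per_drive_at_75} KB/sec")
print(f"1 x 12TB at {fullness_1x12:.0%} full: ~{1 * kbps_per_drive_at_25} KB/sec")
# ~3,000 vs ~2,500 KB/sec: four busy little drives roughly tie one
# mostly-empty big one.
```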

The other thing is that the only way to get $20 1TB drives is probably on the used market, while shuckables will be new drives.

So I don't know which kind of sarcastic you were being, but I see ways to squeeze value.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
So I don't know which kind of sarcastic you were being, but I see ways to squeeze value.
Not at all. This was one of the "truths" I filed away in my brain over the last decades, slightly adjusted to "more vdevs, more IOPS". In practical terms, many small drives are a waste of rack space and energy today. What you spend on SSDs, you save on the electricity bill -- now that I think of it. :wink: Seriously, I would not intentionally choose smaller drives, but I would still go for the largest reasonable number of vdevs in the pool layout if IOPS are important.
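
To make the adjusted rule concrete, a tiny sketch (assuming a round ~200 random IOPS per spindle; mirrors will beat this on reads, since ZFS can read from both sides):

```python
# "More vdevs, more IOPS": random IOPS scale with vdev count,
# not drive count. ~200 IOPS per spindle is an assumed round number.

IOPS_PER_HDD = 200

# the same 12 drives arranged three ways: (layout, number of vdevs)
layouts = [
    ("6 x 2-way mirror", 6),
    ("2 x 6-wide RAIDZ2", 2),
    ("1 x 12-wide RAIDZ2", 1),
]

for name, vdevs in layouts:
    # each vdev contributes roughly one drive's worth of random IOPS
    print(f"{name}: ~{vdevs * IOPS_PER_HDD} random write IOPS")
```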

The old rule of thumb (as you surely know) is from a time when "slow" 3.5" SATA drives were twice the capacity of "fast" 15k 2.5" SAS drives and SSDs were nowhere to be seen. So one crammed as many of those small drives as possible into a chassis for VM storage.

And again, it's "pick any two": capacity, resiliency to failure, IOPS. You can't optimize for all three if the budget is fixed.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
The Norco cases are well known for turning into EZ-bake ovens when people try to "make them quiet" and aren't that good even when they don't.
I followed @Stux in building one of those and am happy with it, having noted his discovery that the consumer Noctua fans failed to cool the drives; you need to read through the thread to the point where he swapped them for the enterprise ones.


I find it to be great (when paired with my adaptation of the fan script).
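
For anyone curious what that kind of script does under the hood: it polls drive temperatures and sets fan duty cycles over IPMI. Below is a minimal Python sketch of the idea -- not @Stux's actual script (his thread has the real one) -- assuming a Supermicro board that accepts the widely documented raw 0x30 0x70 0x66 duty-cycle command, plus hypothetical /dev/daN device names:

```python
#!/usr/bin/env python3
# Minimal temperature-driven fan loop -- a sketch, NOT Stux's script.
# Assumes a Supermicro board accepting the widely documented
# "raw 0x30 0x70 0x66" duty-cycle command, and SMART-capable drives.
import subprocess
import time

DRIVES = ["/dev/da0", "/dev/da1"]   # hypothetical device names
ZONE = 0x01                         # peripheral fan zone on many SM boards

def drive_temp(dev: str) -> int:
    """Return the raw Temperature_Celsius value from smartctl -A."""
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Temperature_Celsius" in line:
            return int(line.split()[9])  # raw-value column
    return 0

def set_duty(zone: int, duty: int) -> None:
    """Set a fan zone's duty cycle (0-100%) via Supermicro raw IPMI."""
    subprocess.run(["ipmitool", "raw", "0x30", "0x70", "0x66",
                    "0x01", hex(zone), hex(duty)], check=True)

while True:
    hottest = max(drive_temp(d) for d in DRIVES)
    # crude linear ramp: 30% duty at <=30C, rising to 100% at >=45C
    duty = min(100, max(30, 30 + (hottest - 30) * 70 // 15))
    set_duty(ZONE, duty)
    time.sleep(60)
```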
 

cdoublejj

Cadet
Joined
Apr 6, 2022
Messages
2
Jeez. I like that the Norco had 24 bays, which should be PLENTY. Anything that doesn't require a few grand and sexual acts in a back alley? So the Norco has restricted exhaust, eh?

I'm tired of pulling my Corsair out to take off both dang sides and root around for the right SATA/SAS connector to swap my crappy drives. I have a decent 3GHz 8-core EPYC with an ASRock Rack mobo with BMC and a Noctua tower cooler. It's a bit light on RAM, and a cheap brand at that, though I'd like to get a real kit like Kingston with more RAM, especially if that will help ZFS.

Right now I have 10 or so various SSDs in a semi-janky Unraid. I guess I could get a tower with a ton of 5.25" bays, like the old Lian Li towers, and put in 5.25" hot-swap cages for an arm and a leg.

That Norco wouldn't benefit from some finger-chopping thick-boy fans, some homemade air ducts, and added squirrel-cage fans for exhaust, with some custom speed holes throughout, eh?
 

Scharbag

Guru
Joined
Feb 1, 2012
Messages
620
I have one Norco 4224 and one Norco 4220. Both are used as JBOD enclosures with Supermicro JBOD boards. I have also put Noctua fans in the cases. So far, the drives stay cool enough for my needs; my garage gets hot in the summer, but nothing reaches dangerous temps. The noise level is way too loud to have near your workstation, which is why I keep my server rack in a closet in the garage. My Supermicro server case is also a bit loud, as I put the active 2U heatsinks on the processors.

Norco cases seem to be hard to find these days. Another great option is UnixSurplus.com: buy a used Supermicro case from them, complete with a good server, and be done with it. Just remember that Supermicro cases with 3.5" drive bays will not accept 2.5" drives without tray inserts, which can get expensive depending on where you get them and what kind you use.

Cheers,
 