Hi again
I did write an update, but I must have accidentally scrolled back, the page refreshed, and everything I wrote was gone. After spending an awful lot of time on testing and playing around, I didn't have the will to write it all again, so here I am with another update.
Just imagine I wrote this the very same evening. After successfully flashing the HBA I decided to push my luck - and I'm glad I did.
So, from the top:
Up until recently I was under the impression that my HBA was indeed flashed into IT mode. I was wondering why it kept saying "MegaRAID" and presenting the drives as individual JBOD(s) - but what do I know?! I'm a total n00b at this, just another guy with too much time, too much interest in fiddling around and not much sense.
However. I decided, for some reason - I was tidying up, let's go with that - to give it another go. Gathered the files from the guide, but also read the comments to find clues as to why I was unsuccessful in flashing the damn thing in the first place.
Long story short: I managed to get it to work by following the guide, as I had done before, but this time I must have done something differently.
Now the loading/HBA/BIOS-thingie looks like this instead:
Soooo much more in line with what I expected - I only tried one drive to begin with. But when I confirmed it worked - drive detected - I pushed on and started filling up the backplanes:
(This image is from later in the process)
Oh. And I should mention: at some point you'll notice I switched power supplies, so don't get confused. It's called
foreshadowing *uuhh hhuuu uhh*.
Of course, booted into Windows and wanted to see each drive detected live -
hotswap, and all that. 22 drives...
(In case someone was wondering: it's an iKVM/HTML-thingie screenshot of Windows running on the SSD in the M.2 slot on the motherboard)
Too many drives to fit into one screenshot in the current resolution. How many, you ask...?
(Ignore the very obvious "64" number, which is... obviously, erhm, wrong)
68 drives!
I ran the numbers, and 68 drives is indeed more than the previous 58 drives. Actually, it's a 10-drive difference (or 2, depending on how you count).
Trust me. I'm a math teacher. I should know how addition and subtraction work - but that's about it.
On the previous image (the one with all the drives on the backplanes) you can see 87 drives, I think.
Here is an image from DISKPART.
(Again, the number of drives doesn't match. Please ignore the numbers - not all of them, of course. Only the wrong ones)
Glorious! There is no other word.
I decided that this was as good a time as any to do some tests. With +58 drives, and the BIOS/HBA boot thingie looking right, I thought to myself that it must work.
+80 drives was more than enough to do some testing. So without any other prep, I used Windows' Storage Spaces to create 'something' (I honestly can't remember, so bear with me) and ran a CrystalDiskMark:
I think I ran into some weird device limit. I clearly remember selecting all the drives, but it refused to use them all, and I think it ended up around 40 drives however I went about it. (And this was after a tedious reset of all the drives I had previously used. I get that you want the ability to repair a damaged array - I guess Windows thought I was a damaged array - from previously used drives, which is fine. But I digress.)
40 × 128 GB ≈ 5 TB
So I must have used some sort of parity, because the "JBOD" option only gives single-drive performance.
I know, 2 GB/s / 1.7 GB/s R/W isn't that impressive compared to newer drives. But... it's still kinda awesome!
Fast? Yes, but not the fastest. Far from it.
Practical? No, absolutely not!
Usable? To a degree.
Faster than 10GbE? I would think so, yes?
Economical? Get out!
Storage Spaces - can't remember the exact config.
2-disk stripe.
4-disk stripe.
2-disk 'combined'.
4-disk 'combined'.
2-disk mirror.
... soo. About that power supply...
You'll notice that in this image there are two (or three) things going on:
- Power supply was changed from a 750 W (older) to an 850 W (newer).
- Fewer drives (72 to be exact)
- Two SATA-drives on the side (TrueNAS install)
I managed to run almost everything in Windows just fine. It was when I booted into TrueNAS that I started having issues (not TrueNAS' fault, I might add).
I decided to go 'gung ho' and create one large vdev (87 drives).
Ticked all drives, added them to the pool, and TrueNAS began to initialize... aaaand.... *click!* and then nothing.
The computer just tapped out. Nothing.
So. What I suppose happened is this: I was running everything from the wall and adding drives sequentially, and I didn't power off when 'soft rebooting'. The drives were already powered up and settled, but when I started initializing they all went into overdrive at once.
I've read somewhere that SATA drives can pull up to 10-12 W when booting or in write-heavy scenarios. That's the reason I added the drives sequentially. But I didn't expect them to draw a full 10-12 W (or so I assume) when initializing.
Let's just assume that's what they did: 87 drives pulling at least 10 W each.
Well. The math is easy on that one (good for me): 87 drives × 10 W/drive = 870 W.
The power supply was pulled from a pile of discarded workstation/server gear some years back. I haven't had any problems with it, but I suspect that +800 W was a bit too much for an old 750W (total!) PSU.
Had to remove drives so that I could boot.
I'll spare you the details, but I'll admit this isn't the way to do it.
All four backplanes are connected via one (1!) single Molex connector. The PSU is modular, and I have only one cable with four Molex connectors. So everything is going through these wires... oh. What wires?
Well. You see the top connector? That's a cable with 5 SATA power connectors.
The large one on the bottom is the ATX-cable.
The cable with the Molex connectors is the one in the middle. Not the one at the rear. The one with four little wires - that gets hot. Not dangerously - I should know, I also teach physics - but not in the good way.
The overcurrent protection (I assume) kicks in with only 24 drives connected.
Four backplanes with SAS expanders and drives are simply too much for that one connection.
On the 750 W PSU I had two cables to spread the load.
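Quick back-of-the-envelope sketch of why that single cable struggles. The 10 W/drive figure and the assumption that the load lands mostly on one rail through that one cable are mine, not measured:

```python
# Rough current estimate for the drive load going through one Molex cable.
# Assumptions (mine): ~10 W per busy drive, and the backplanes pulling
# that power mostly from a single rail through this one cable.
WATTS_PER_DRIVE = 10

def amps(drives, rail_voltage):
    """Current = power / voltage for the given number of drives."""
    return drives * WATTS_PER_DRIVE / rail_voltage

for drives in (24, 48, 96):
    print(f"{drives:3d} drives: ~{amps(drives, 12):4.1f} A on 12 V "
          f"or ~{amps(drives, 5):5.1f} A on 5 V")
```

Either way, that's far more current than I'd want running through one daisy-chained run of thin wires - no wonder it gets warm and the protection trips.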
So. Where am I at now?
Well. Tbh, the first post (the one that got deleted) was me declaring the project - not dead - but on hiatus. I simply can't justify the cost of having the system running 24/7. Even at idle, we're talking 170 W (I'll explain this number later).
I ran the numbers through an electricity calculator (there's a rough sketch of the math further down), and it comes to around $700/yr for idle alone.
I don't know how well that number translates to people on this forum, so I've come up with some comparisons (for what around $700 would get you in Denmark):
- SteamDeck 256 GB, IPS or ROG Ally Z1 Extreme
- Refurbed M2 MacMini, base model
- 2x DJI Pocket 2
- 7x Disney Plus yearly subscriptions.
And that's for idle power alone.
At full tilt we're talking +700 W (I'll explain this as well), which is around $2,900/yr - in 'DK prices':
- Mac Studio, base model
- DJI Mavic 3 Pro Fly More Combo + DJI RC
- Blackmagic Pocket Cinema Camera 6K Pro (if you discount/subtract the added value of DaVinci Resolve Studio)
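For the curious, here's the rough math behind those two yearly figures. The all-in electricity price and the exchange rate are my assumptions (Danish prices including taxes and grid fees vary quite a bit), so plug in your own:

```python
# Back-of-the-envelope yearly running cost for CrazyNAS.
# Assumptions (mine): ~3.2 DKK/kWh all-in and ~6.9 DKK per USD.
DKK_PER_KWH = 3.2
DKK_PER_USD = 6.9
HOURS_PER_YEAR = 24 * 365

def yearly_cost(watts):
    kwh = watts / 1000 * HOURS_PER_YEAR
    dkk = kwh * DKK_PER_KWH
    return kwh, dkk, dkk / DKK_PER_USD

for label, watts in (("idle", 170), ("full tilt", 700)):
    kwh, dkk, usd = yearly_cost(watts)
    print(f"{label:9s}: ~{kwh:,.0f} kWh/yr ~ DKK {dkk:,.0f} ~ ${usd:,.0f}")
```

That lands at roughly $700/yr for idle and roughly $2,900/yr at full tilt, which is where the numbers above come from.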
So. Practicality went out the window long ago. I just wanted to see if it was possible, what results I might get, and whether it was "worth" it.
Well. Sorry to say. I have some more math.
Since I started this project, some newer drives have hit the market - not much of a surprise, I'll admit, but let me explain.
24 × 128 GB = 3,072 GB, or around 3 TB. In a RAIDZ1 or RAIDZ2 vdev that's roughly what you'll get, minus one or two drives' worth of parity. To make things easier, let's just assume around 3 TB per backplane.
4 × 3 TB = 12 TB total.
That's more or less the same as four 4 TB drives in RAIDZ1.
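A quick sketch to back that up, just counting (drives - parity) × drive size and ignoring ZFS metadata, padding and the usual "don't fill past 80%" guideline:

```python
def raidz_usable_tb(drives, drive_tb, parity):
    """Rough usable RAIDZ capacity: data drives times drive size.
    Ignores ZFS metadata, padding and fill-level guidelines."""
    return (drives - parity) * drive_tb

one_backplane = raidz_usable_tb(24, 0.128, parity=2)           # 24x 128 GB, RAIDZ2
print(f"per backplane (RAIDZ2): ~{one_backplane:.2f} TB")       # ~2.82 TB
print(f"four backplanes:        ~{4 * one_backplane:.2f} TB")   # ~11.26 TB
print(f"4x 4 TB in RAIDZ1:      ~{raidz_usable_tb(4, 4.0, parity=1):.2f} TB")  # 12 TB
```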
I have an ASUS Hyper M.2 x16 card V2 - for those who might not know: it's a PCIe card that lets you put four individual M.2 form factor drives in a PCIe x16 slot. The motherboard must support bifurcation, which basically means the x16 slot is split into four individual x4 links, denoted x4x4x4x4.
A decent 4 TB drive costs around DKK 2,000 (some are cheaper of course, but let's go with middle of the road, MVP'ish).
Four of those is around DKK 8,000, or about $1,200 (in DK that'll get you a Steam Deck 1 TB OLED with a nice case and screen protector).
Only four drives, waaaaay less power usage, better performance. And, if I wanted to, I could add SATA drives via the motherboard's SATA ports, expand with three or four large hard drives as long-term storage, and only use the SSDs for current projects.
Is CrazyNAS dead on arrival?
No. Absolutely not. I'll see this through and buy cables just to see what I can do with it and what kind of performance I can eke out (please do suggest some RAID/vdev/pool layouts). But I also have to admit it's kind of a 'Spruce Goose' thing.
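To kick off the layout suggestions, these are the kinds of options I'm weighing up for ~96 drives - again only counting (drives - parity) × 128 GB per vdev and ignoring overhead, hot spares and real-world performance differences:

```python
# Hypothetical pool layouts for ~96 x 128 GB drives (usable space only).
DRIVE_TB = 0.128

layouts = {                         # name: (vdevs, width, parity per vdev)
    "4x 24-wide RAIDZ2 (one vdev per backplane)": (4, 24, 2),
    "8x 12-wide RAIDZ2": (8, 12, 2),
    "12x 8-wide RAIDZ1": (12, 8, 1),
    "48x 2-way mirrors": (48, 2, 1),
}

for name, (vdevs, width, parity) in layouts.items():
    usable = vdevs * (width - parity) * DRIVE_TB
    print(f"{name:45s} {vdevs * width:3d} drives, ~{usable:5.2f} TB usable")
```

More vdevs should mean more IOPS (mirrors the most), at the cost of capacity - but I'd love to hear what layouts people would actually run on something like this.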
Before I sign off for now, I'll tease you with the last screenshot before the thing powered off (with the 850 W power supply).
... "45Drives"? That's cute.
How about 98 drives?

Oh. One last thing. About power usage: how did I get those numbers when the thing powers off with all four backplanes filled?
Well. I did some tests with one and two backplanes filled (and a meter at the socket). I know it doesn't scale precisely linearly, but I could use those as data points to work out the base power usage (checked against the system running with only the backplanes and no drives except the boot drives) and extrapolate what four filled backplanes would consume. It's also how I got the "10 W/drive" figure, which I cross-checked against other numbers around the interwebs.
I'm pretty sure a powered-on, idle CrazyNAS consumes around 170 W (best case), while a fully loaded CrazyNAS consumes more than 700 W, so I consider +700 W an absolute lowest "worst case". I do get that the earlier "10 W/drive" figure would equate to 96 × 10 W = 960 W for the drives alone, but I couldn't get a precise number per drive, since they are different models (different numbers of chips and different controllers), and I'm pretty sure they're not all fully loaded except at power-on and during initialization. I have to buy some Molex cables to be able to test the latter. I can get to 96 drives, but the 850 W PSU switches off after 10-15 seconds, even at idle.
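If anyone wants to redo the estimate, this is roughly how I extrapolated. The meter readings below are placeholders, not my actual numbers:

```python
# Rough version of the extrapolation. The readings are hypothetical
# placeholders, not my actual meter numbers.
DRIVES_PER_BACKPLANE = 24

readings = {   # backplanes populated -> watts at the socket, idle
    0: 110,    # board, CPU, RAM, HBA, expanders, boot drives only
    1: 135,
    2: 160,
}

# Per-drive idle draw from the slope between 0 and 2 populated backplanes
per_drive_idle = (readings[2] - readings[0]) / (2 * DRIVES_PER_BACKPLANE)
base = readings[0]

for backplanes in (3, 4):
    estimate = base + per_drive_idle * backplanes * DRIVES_PER_BACKPLANE
    print(f"{backplanes} backplanes: ~{estimate:.0f} W idle (estimated)")

# Theoretical ceiling if every drive pulled ~10 W at once (power-on/init);
# in practice they won't all be maxed out at the same time.
print(f"ceiling: ~{base + 10 * 4 * DRIVES_PER_BACKPLANE:.0f} W")
```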
I could try with dual power supplies, but I'm not sure what would happen if OCP trips on one of them. Though I am adventurous, I'm not prepared to sacrifice four backplanes, +90 SSDs, the HBA, the motherboard and 128 GB of RAM - sorry.