DIY all flash/SSD NAS (CrazyNAS issues)

firesyde424

Contributor
Joined
Mar 5, 2019
Messages
154
I looked through your previous thread and given your drives are mostly from old laptops, I'm going to assume they are all SATA. My brain is tickling me about an issue we dealt with years ago around SAS expanders and SATA SSDs, having to do with a limit on the number of SATA drives per channel. We ran into this issue in the old FreeNAS days when trying to fill a 60 bay, high density JBOD with SATA hard drives. I need to go dig through my archives to see if I can find the ticket on this. I suspect your setup would work if you were using SAS drives.

If there is room on your motherboard, I would suggest trying a 2nd HBA and splitting your backplanes between them.
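In the meantime, it might be worth checking what the controller itself thinks is attached before you buy anything. If I remember the LSI/Broadcom tooling right (going from memory here, so verify against their docs), sas3ircu will enumerate everything behind the card, expanders included:

Code:
sas3ircu LIST       # show the installed SAS3 controllers and their index numbers
sas3ircu 0 DISPLAY  # list every expander and drive currently visible behind controller 0

If the DISPLAY output stops dead at 58 devices, that points at the controller firmware rather than at your backplanes.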
 

Allan_M

Explorer
Joined
Jan 26, 2016
Messages
76
Remarkable work to try and keep this totally impractical but wonderful build going.

Screenshots show "AVAGO Mega-RAID SAS-MFI BIOS", which does not look like the recommended IT firmware. Could that be the issue?
A pesky RAID controller to be replaced by a plain 9300-8i…

I have no idea to be honest. Bought the item listed as "LSI SAS 9340-8i ServeRAID M1215 12Gbps SAS HBA IT mode for ZFS FreeNAS unRAID".

Should I read your comment as a suggestion to swap out the controller?

For instance: LSI 9300-8I 9300-16I SAS3008 SAS9300-8I IT Mode HBA JBOD PCI-E 3.0 SATA SAS 12Gb

Some are listed as Fujitsu, some as SuperMicro and others as just LSI. Don't know if that makes any difference.

And thank you. Perhaps the CrazyNAS-project should have a "SCF" suffix for 'Sunk Cost Fallacy' :tongue:

I looked through your previous thread and given your drives are mostly from old laptops, I'm going to assume they are all SATA. My brain is tickling me about an issue we dealt with years ago around SAS expanders and SATA SSDs, having to do with a limit on the number of SATA drives per channel. We ran into this issue in the old FreeNAS days when trying to fill a 60 bay, high density JBOD with SATA hard drives. I need to go dig through my archives to see if I can find the ticket on this. I suspect your setup would work if you were using SAS drives.

If there is room on your motherboard, I would suggest trying a 2nd HBA and splitting your backplanes between them.

They are all SATA, that's correct.

Well. The board is ITX and therefore has a single PCIe x16 slot, but it is supposed to support bifurcation - though I'm not entirely sure that's correct. There is an entry in the BIOS, but others (online/around the web) say it's not possible.

I was considering doing just that, but then I'd rather invest in an SFP28 network card - but that's another project entirely.

Two HBAs would just double the HBA-side power consumption, just to be able to put even more disks into the system - which would consume even more power.

But. I'll see what I can do - perhaps a firmware upgrade or another crossflash could do the trick.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,112
I have no idea to be honest. Bought the item listed as "LSI SAS 9340-8i ServeRAID M1215 12Gbps SAS HBA IT mode for ZFS FreeNAS unRAID".

Should I read your comment as a suggestion to swap out the controller?
I'm no expert in SAS, but the words "RAID" and "MFI" are red flags. Coming after your issues with sas3flash, these suggest that the card is not flashed to IT firmware and not running on the right driver.
So, yes, in the absence of better suggestions, getting a plain 9300 (branding should be irrelevant) to be in known and safe territory is a suggestion. But maybe there's a limit on the number of SATA drives anyway.
And thank you. Perhaps the CrazyNAS-project should have a "SCF" suffix for 'Sunk Cost Fallacy' :tongue:
"Practicality" was out as a starter. I guess it's fair that "economical" goes through the window as well.
 
Joined
Jun 15, 2022
Messages
674
The LSI 9300-16i cards have been really solid and fast. The 9206-16e, although cheaper, have been pretty dicey due to prior usage, plus run slower & hotter.

The 16i heatsink is easier to keep cool (they need airflow).
 

firesyde424

Contributor
Joined
Mar 5, 2019
Messages
154
Not sure if you've tried this or not, or if your system is capable of this. On our 60, 102, 120, and 180 drive systems, we disable the PCIE slots of the HBAs until after the OS has started booting. This dramatically decreases boot times when dealing with large numbers of drives. Don't know if this helps the overall issue, just a quick tip.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,176
You could probably get the same effect by just wiping the UEFI extension ROM and avoid messing around with PCIe hot-plugging (even if it's only logical and not physical).
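For an LSI card, that boils down to reflashing without the option ROM images. A rough sketch from memory - verify the erase levels and image names for the exact card before touching anything:

Code:
sas3flash -list                        # note the SAS address (it's also on the card's sticker)
sas3flash -o -e 6                      # erase the flash, option ROMs included
sas3flash -o -f SAS9300_8i_IT.bin      # firmware only; skipping -b mptsas3.rom leaves no extension ROM
sas3flash -o -sasadd 500605bxxxxxxxxx  # restore the SAS address you noted above

With no option ROM there's nothing to enumerate the drives at POST, so the board goes straight to booting.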
 

Allan_M

Explorer
Joined
Jan 26, 2016
Messages
76
I'm no expert in SAS, but the words "RAID" and "MFI" are red flags. Coming after your issues with sas3flash, these suggest that the card is not flashed to IT firmware and not running on the right driver.
So, yes, in the absence of better suggestions, getting a plain 9300 (branding should be irrelevant) to be in known and safe territory is a suggestion. But maybe there's a limit on the number of SATA drives anyway.

Well. Let's say I did some tests and I'm seriously considering the "value proposition" (I know, sounds kinda like an oxymoron at this point - see explanation below).

"Practicality" was out as a starter. I guess it's fair that "economical" goes through the window as well.

Practicality doesn't bother me as much. Of course I like it when things just work - but then where would the fun in the process and the tinkering be, I wonder?

I have the time and am privileged enough to have the means for a hobby like this. Economy is always a factor, but at this point the expenses are spread out so far in time that they're not really worth considering unless I sum it all up - and I don't feel like doing that. Always makes me anxious. Perhaps there is some guilt in there too, but that's another talk entirely.

The LSI 9300-16i cards have been really solid and fast. The 9206-16e, although cheaper, have been pretty dicey due to prior usage, plus run slower & hotter.

The 16i heatsink is easier to keep cool (they need airflow).

Something along the lines of this fella? LSI SAS 9300-16I 12GB/S HBA BUS ADAPTER CARD IT Mode 4*SFF-8643 SATA Cable (though I don't need the SFF-SATA cables, but I do need some SFF-SFF cables from the HBA to the backplanes. Only bought two, so I'll need another pair to connect all four backplanes separately I suppose).

Regarding the cooling, I do have some older radial fans from some laptops - yes, I'm that guy - lying around. Could try those out.

Not sure if you've tried this or not, or if your system is capable of this. On our 60, 102, 120, and 180 drive systems, we disable the PCIE slots of the HBAs until after the OS has started booting. This dramatically decreases boot times when dealing with large numbers of drives. Don't know if this helps the overall issue, just a quick tip.

Does it play a role regarding how many drives are connected or am I misunderstanding you?

You could probably get the same effect by just wiping the UEFI extension ROM and avoid messing around with PCIe hot-plugging (even if it's only logical and not physical).

You lost me. I have no idea, what that means?

<--- End of the replies, the latest update -->

Decided to do some preliminary benchmarks. Before anyone starts yelling at me or telling me I'm a noob at this... I know, I know.

Wanted to test the pool performance and remembered seeing someone in a YouTube video using fio. Saw Linus and Jake do it in this video: This is stupid, but I love it - Linus Home NAS Update 2021

So, I started googling and did have some trouble getting it running. Came up with this command, bashed/bodged/jank'd together from other people's posts, including some here on the TrueNAS forum:

"
Code:
fio --bs=1M --direct=1 --directory=/mnt/CN-SSD/CrazyNAS-DS_0/fio/ --iodepth=8 --group_reporting --name=read --numjobs=1 --ramp_time=10 --runtime=60 --rw=read --size=1GiB --time_based
"​
This is not the first one I came up with, but it's where I ended up. Decided to mimic CrystalDiskMark for no other reason than that I know the program and wanted to be able to compare.

I have no idea, if I did it right or not, but here are all the commands:

Test            Read                                                      Write
SEQ1M, Q8T1     --bs=1M --iodepth=8 --numjobs=1 --rw=read --size=1GiB     --bs=1M --iodepth=8 --numjobs=1 --rw=write --size=1GiB
SEQ1M, Q1T1     --bs=1M --iodepth=1 --numjobs=1 --rw=read --size=1GiB     --bs=1M --iodepth=1 --numjobs=1 --rw=write --size=1GiB
RND4K, Q32T1    --bs=4K --iodepth=32 --numjobs=1 --rw=randrw --size=1GiB  <- one run does both read and write
RND4K, Q1T1     --bs=4K --iodepth=1 --numjobs=1 --rw=randrw --size=1GiB   <- one run does both read and write
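Spelled out in full, with the shared flags from the command above pulled into a variable (the job names are just my own labels), the whole suite would look something like this:

Code:
SHARED="--direct=1 --directory=/mnt/CN-SSD/CrazyNAS-DS_0/fio/ --group_reporting --numjobs=1 --ramp_time=10 --runtime=60 --size=1GiB --time_based"
fio $SHARED --name=seq1m-q8-read   --bs=1M --iodepth=8  --rw=read
fio $SHARED --name=seq1m-q8-write  --bs=1M --iodepth=8  --rw=write
fio $SHARED --name=seq1m-q1-read   --bs=1M --iodepth=1  --rw=read
fio $SHARED --name=seq1m-q1-write  --bs=1M --iodepth=1  --rw=write
fio $SHARED --name=rnd4k-q32       --bs=4K --iodepth=32 --rw=randrw   # one run, reports read and write
fio $SHARED --name=rnd4k-q1        --bs=4K --iodepth=1  --rw=randrw   # one run, reports read and write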

Originally did some runs with much higher numbers than I'm reporting here. But since I didn't know what I was doing, I didn't feel confident about posting them, because I honestly didn't know what I was testing.
The above params are based on CrystalDiskMark's default settings, but I could still be entirely wrong about everything. In that case: I'm sorry and apologize in advance.

But, enough talk. The results:

M1 MacBook Pro, internal SSD, 256 GB    Read (MB/s)    Write (MB/s)
SEQ1M, Q8T1                             2256           2148
SEQ1M, Q1T1                             1723           2165
RND4K, Q32T1                            16.7           16.8
RND4K, Q1T1                             16.6           16.6

TrueNAS (HDD-based, the one in my sig)    Read (MB/s)    Write (MB/s)
SEQ1M, Q8T1                               4637           8567
SEQ1M, Q1T1                               4641           8811
RND4K, Q32T1                              157            156
RND4K, Q1T1                               513            513

CrazyNAS        Read (MB/s)    Write (MB/s)
SEQ1M, Q8T1     3825           5110
SEQ1M, Q1T1     3821           4839
RND4K, Q32T1    378            378
RND4K, Q1T1     366            365

And no. The CrazyNAS (SSD-based) and TrueNAS (HDD-based) are not swapped.

This confused me a little, but I have some ideas - though I don't know if there is any merit to them.

First of all: the CPUs.
TrueNAS is running an i3-4170 at 3.7 GHz, that's it.
CrazyNAS is running a Xeon D-1520, 2.2 GHz base and 2.6 GHz boost.

Second: the HBAs.
TrueNAS is equipped with two LSI SAS 9211-8i's. Those two control two backplanes each, supplemented by an SFF-to-4xSATA breakout from the last backplane. So in my mind the traffic is spread out more.
CrazyNAS is equipped with a single LSI 9340-8i. That one is connected to two backplanes, with an additional two connected via expanders.
I have no idea how that affects performance, but I would have expected the CrazyNAS pool to perform (not necessarily by a mile, but at least somewhat) better than the TrueNAS pool.

This has made me a bit lukewarm on the project.

I kinda feel like it's not worth the effort, time and money if I'm better off upgrading or keeping the TrueNAS in use (for instance, upgrading to newer hard drives and adding 10gig capability).

Any thoughts?
 
Joined
Jun 15, 2022
Messages
674
Something along the lines of this fella? LSI SAS 9300-16I 12GB/S HBA BUS ADAPTER CARD IT Mode 4*SFF-8643 SATA Cable (though I don't need the SFF-SATA cables, but I do need some SFF-SFF cables from the HBA to the backplanes. Only bought two, so I'll need another pair to connect all four backplanes separately I suppose).
Something like that. I look for local sales (not from China) to help minimize the chance of getting a fake, plus look at the seller's other listings to see if they're importing (possible fakes) or parting out old datacenter equipment. @jgreco put together a helpful guide on this, as has Serve The Home on YouTube (thanks guys!)

You have an excellent project, you're learning useful stuff along with teaching and inspiring others. RAM-"the memory that forgets" was one such project, and look at how that turned out.

LTT did get a TrueNAS SSD server working if I remember, so that is a potential reference regarding what to do and what to avoid/where to improve. (I like to throw at least a small UPS in the mix, clean power does wonders when building edge-case systems.)
 

Allan_M

Explorer
Joined
Jan 26, 2016
Messages
76
Something like that. I look for local sales (not from China) to help minimize the chance of getting a fake, plus look at the seller's other listings to see if they're importing (possible fakes) or parting out old datacenter equipment. @jgreco put together a helpful guide on this, as has Serve The Home on YouTube (thanks guys!)

Thanks. I'm considering getting the 16i, just so I can connect four backplanes and hopefully fully populate them. But... look further below.

You have an excellent project, you're learning useful stuff along with teaching and inspiring others. RAM-"the memory that forgets" was one such project, and look at how that turned out.

You're right, and I appreciate that you mention it. Sometimes one gets so caught up in their own issues that you forget others might learn something. I think that's what you call perspective, and sometimes you need someone else to provide it. You did just that for me right now - so, a genuine thank you :smile:

LTT did get a TrueNAS SSD server working if I remember, so that is a potential reference regarding what to do and what to avoid/where to improve. (I like to throw at least a small UPS in the mix, clean power does wonders when building edge-case systems.)

IIRC they have done a few, but the last one was made from SSDs so quick they overwhelmed the CPU and OS. Can't remember the exact config, but I think it's the improved version they are using in New-New Whonnock or whatever it's called by now.

I'm just hoping to get things working properly. Then performance can follow.

<--- End of the replies, the latest update -->

The somewhat arbitrary 58 drive limit keeps puzzling me, and the "JBOD" thing also seems weird, but I'll readily admit I don't know an awful lot about it (JBOD is, to the best of my understanding, when a RAID controller isn't doing its RAID thing).

Anywho.

Started to do some digging and found the spec-sheet for the IBM M1215, which my controller supposedly is flashed into.

From their documentation:

"Support both RAID and JBOD (pass-thru mode with system drives) configurations
Up to 32 drives, including hot spares, are supported in a RAID configuration.
Up to 64 drives are supported in a JBOD (non-RAID) configuration."
Again. The 58 drive limit puzzles me. 64 drives would have made me make that connection instantly, but somehow I'm six devices "short".

After some more digging and hitting my head in search results for the 9341-8i instead, I found this thread: UNABLE TO UTILIZE HBA - FW NOT LOADING

Though not a match for my current issue, the description of the controller not being flashed correctly seems to be.

I think my next step is to see, what I can do about the controller itself.

If it has been flashed into a M1215 in RAID-mode but not IT-mode, perhaps that could explain the 64 drive (in my case 58 (?!) drive) limit.

My thinking is the following: Flash the card into a proper IT-mode (9300-8i, 9340-8i or M1215) and see if that works at all.
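From what I've read, the sanity check before any flashing is to see which personality the card is currently presenting to the OS. TrueNAS CORE being FreeBSD underneath, mpr(4) is the IT/HBA driver while mrsas(4)/mfi(4) are the MegaRAID drivers - so, if I've understood the man pages right (assumption on my part, happy to be corrected), something like this should tell the tale:

Code:
dmesg | grep -iE "mpr|mrsas|mfi"  # which driver claimed the card?
sas3flash -listall                # IT-firmware cards show up here; a card still in MegaRAID mode typically won't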

My only concern is the following:
  1. Bricking the dang thing, of course
  2. Previously having issues with the UEFI-shell thingies that were supposed to return the current state of the card.
    • Tried both the sas2 and sas3 versions
  3. Having to resort to buying another HBA. Then, perhaps, it would just make more sense to buy a 16i controller and two additional SFF-SFF cables.
It bothers me that Broadcom's documentation clearly states: "and up to 1,000 connected devices." (src: SAS3008 I/O Controller)

A search for documentation for the 9340-8i returns PDFs for a 9341-8i. OK, then. But... so, like, are they the same or?

I do have an SSD prepped with Windows 11 installed in the m.2 slot on the CrazyNAS for other purposes.

Could I somehow flash the controller via Windows (using these drivers I suppose: LSI 9340/9364/9380 SAS RAID Card Driver for Windows - ThinkServer TS150, TS450) or is that a bad idea?

[addendum, forgot this] Maybe trying out this firmware also: Firmware for Avago 9340-8i/ 9364-8i/ 9380-8e SAS3 RAID and RAID 720i/720ix AnyRAID Adapter - ThinkServer Systems

I know it's up to the OEM or manufacturer who buys the SAS3008 controller to do their own implementation - with whatever limits or features they deem appropriate. But as far as I understand TrueNAS, I'm just interested in getting the controller to do proper pass-through. No RAID, no JBOD, just bare metal access to the bits on the drives.

As far as I can tell, the backplanes seem to function properly. I did look into flashing/updating the firmware, but the forum posts and responses I skimmed seemed to advise against it unless one was experiencing unexpected issues or something along those lines.

My problem seems to be the controller itself. So that’s where I'm starting.
 
Joined
Jun 15, 2022
Messages
674
You're welcome. I love "watching along with you."

One thing that keeps coming up in "HBA doesn't work [somehow]" threads is a non-LSI HBA, or one not properly flashed to IT mode; both seem to create loads of random problems under TrueNAS (especially SCALE). The answer is always, "Use good hardware" (from the suggested hardware list). The poster either protests and has continued problems, or follows the advice and the situation is rectified (though if they're on a gaming system a new problem may reveal itself). On that, I'm clearly biased as @jgreco told me basically that at the start of my build and saved me loads and loads of time and headache.

The other thing I've learned is to have backup hardware to swap out with, which helps isolate problems quickly. Since old hardware can be had for under $300 sans drives I'd do that.

'On the frugal': Mainboard: Supermicro X9SCM-F, CPU: Intel Xeon E3-1230

For reference, I think it's a misconception that the CPU can be swamped by SSDs; it's merely that the drives can keep up with all requests, so an "under-powered system" isn't necessarily bad - it's just that at that point the drives aren't the bottleneck.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Since old hardware can be had for under $300 sans drives I'd do that.

Redundant Array of Inexpensive Servers for the win!

I'm clearly biased as @jgreco told me basically that at the start of my build and saved me loads and loads of time and headache.

Perhaps. Once mentioned, it tends to sensitize people to this unexpected reality, so it probably became much more obvious once pointed out and you started seeing patterns too, I bet. My big claim to fame is that I tend to correlate things I see over and over again, so I picked up on the AMD APU issues a decade ago (resulting in raising the minimum memory requirement to 8GB), but also a lot of these other issues I write about. I sit here and puzzle over things, considering various sides to issues. It doesn't take a genius to recognize consistent trends, but new users don't have that, and it helps for someone to warn them up front. It's also suuuuuper pleasant to have people benefit from listening to the advice.
 

Allan_M

Explorer
Joined
Jan 26, 2016
Messages
76
You're welcome. I love "watching along with you."

One thing that keeps coming up in "HBA doesn't work [somehow]" threads is a non-LSI HBA, or one not properly flashed to IT mode; both seem to create loads of random problems under TrueNAS (especially SCALE). The answer is always, "Use good hardware" (from the suggested hardware list).

Well. I've got no one to blame for the choice of controller. "LSI 2308 / 3008" and "LSI -8i HBA" were suggested, I did a google search and jumped the gun on a 9340-8i it seems. I have no idea why I chose that one specifically, to be honest - perhaps I read something somewhere out of context and took that as gospel.

The other thing I've learned is to have backup hardware to swap out with, which helps isolate problems quickly. Since old hardware can be had for under $300 sans drives I'd do that.

'On the frugal': Mainboard: Supermicro X9SCM-F, CPU: Intel Xeon E3-1230

Oh my. That seems almost too cheap and too good to be true. Don't get me wrong, I have enough computers and parts to the degree where my better half is starting to take notice - so I almost dare not buy more gear, but that looks like a sweet "upgrade" or just spare parts.

Almost makes me wanna switch out the motherboard and CPU in the current system - though the X9 platform is older.

For reference, I think it's a misconception that the CPU can be swamped by SSDs; it's merely that the drives can keep up with all requests, so an "under-powered system" isn't necessarily bad - it's just that at that point the drives aren't the bottleneck.

Perhaps I just misunderstood - very likely - I think it was this video (LTT): This Server Deployment was HORRIBLE

Redundant Array of Inexpensive Servers for the win!

That's how I feel about laptops.

Readily Available In Dire-or-not-so-dire-but-still-practical-situations.

Not very practical in server related situations however - maybe for IPMI/KVM'ing, when standing next to the server of course.

Perhaps. Once mentioned, it tends to sensitize people to this unexpected reality, so it probably became much more obvious once pointed out and you started seeing patterns too, I bet. My big claim to fame is that I tend to correlate things I see over and over again, so I picked up on the AMD APU issues a decade ago (resulting in raising the minimum memory requirement to 8GB), but also a lot of these other issues I write about. I sit here and puzzle over things, considering various sides to issues. It doesn't take a genius to recognize consistent trends, but new users don't have that, and it helps for someone to warn them up front. It's also suuuuuper pleasant to have people benefit from listening to the advice.

Well. For someone like me, who can relate to what you describe - I'm a teacher - I do appreciate someone like you and everyone else on this forum, who has the bigger picture in mind, helping someone like me out.

It is at times very confusing to me; it's sometimes a lot of names, products with similar names but different features, different products with the same features and so on. It's a lot to take in, and we were all promised Jellyfish performance and practicality from our 2003 Acer Aspire with an IDE-to-SATA converter.

Not quite, of course, but that's sometimes how it sounds when the hook/bait title promises to show one how to "Revive your old laptop" or "Make a high performance NAS from your Athlon XP 2500+ and run Docker with a gazillion VMs too."

[PSA: Tangent, continue at your own discretion]

And then, you google "TrueNAS", poke around in the forum and it's not long before someone suggests you cough up $1,500 for a motherboard, with 1.25 TB of DDR4 ECC RAM and 22 TB Enterprise drives, because anything less is for... you know.

I'm not saying that's how it is. That's not how I see it, but to be honest, I get why some people might consider that some sort of gatekeeping mentality, or at least experience it as such.

We experience this in sooo many other instances (I also do photo and video, so, yeah, a lot): being promised something for free (high-end server performance and feature sets), but then suddenly having to pay for stability, ease of use and so on.

Like. The difference between a smartphone or point-and-shoot camera and a fully fledged dSLR/mirrorless setup.

"Wanna take awesome photos? The gear doesn't matter, you can take awesome portraits in pitch black with your $1 digital camera! No - don't listen to that! Buy this full-frame monster, with this exotic lens and these semi-pro lights, because you gonna need lights, y'all!"

But when you're new to something, you're rarely the genius - or so I like to think of it - and if you've found your way to these forums, you're also more likely to be on the treacherous step or level of competence.

The one where you think you know something, but in fact are a total noob. "But I was told I could" or "I was under the impression that X, Y and Z, so how come...?"

I've been into computers long enough to know that I most certainly don't know everything, and I have zero pretense to know anything of importance about networking or servers.

I used to do overclocking, voltmods, benchmarks, had a phase change cooler at some point too - but that's all a long time ago.

I'm a guest here around these waters and hope that someday I get to show someone around or guide their journey.

Right now, I'm just lost in a sea of SSDs, overwhelmed by all the information I have to process to understand this and tangled into SFF-tentacles (that sounded weird and I have no idea where all these maritime analogies come from).
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Not quite, of course, but that's sometimes how it sounds when the hook/bait title promises to show one how to "Revive your old laptop" or "Make a high performance NAS from your Athlon XP 2500+ and run Docker with a gazillion VMs too."

Sounds like YouTube content. And unfortunately we have no control over that. It's like that TV show called "Jackass". Stupid stuff for stupid lulz. You won't find much of that around here because we tend to run that stuff out of the forums on a rail.

And then, you google "TrueNAS", poke around in the forum and it's not long before someone suggests you cough up $1,500 for a motherboard, with 1.25 TB of DDR4 ECC RAM and 22 TB Enterprise drives, because anything less is for... you know.

I've never seen that in these forums except in the context of very high performance systems, or discussion of the enterprise systems that iXsystems sells as TrueNAS Enterprise. Most of the forum userbase are hobbyists or SOHO users, with the occasional IT guy charged with trying to build an alternative to TrueNAS Enterprise at a bargain price. A few of us do this professionally, but I don't see a lot of discussion of those systems.

We experience this in sooo many other instances (I also do photo and video, so, yeah, a lot): being promised something for free (high-end server performance and feature sets), but then suddenly having to pay for stability, ease of use and so on.

Well, yes and no. See, if you go buy yourself a TrueNAS Enterprise system, that's going to be pricey. If you go buy yourself a NetApp or other commercial system, that will also be pricey. Even if you go to Dell and spec out a current system for NAS, it'll be pricey. But it doesn't have to be pricey. Some of us who do this professionally know tips and tricks to cut costs. Did you know that Backblaze once farmed drives from Costco and got banned for it? Some of us have been doing this kind of thing for a very long time. We can show you how to source drives for substantially less than retail cost. We can show you how to acquire high quality server gear ideal for NAS for a fraction of the original price by selecting older gear that is completely suitable for NAS.

The name "FreeNAS" never meant that there were no costs involved. It meant that iXsystems would allow you to use their commercially developed enterprise quality software platform for free.

Right now, I'm just lost in a sea of SSDs, overwhelmed by all the information I have to process to understand this and tangled into SFF-tentacles (that sounded weird and I have no idea where all these maritime analogies come from).

Well, there are people here happy to answer questions.
 
Joined
Jun 15, 2022
Messages
674
Well. I've got no one to blame for the choice of controller. "LSI 2308 / 3008" and "LSI -8i HBA" were suggested, I did a google search and jumped the gun on a 9340-8i it seems. I have no idea why I chose that one specifically, to be honest - perhaps I read something somewhere out of context and took that as gospel.
That's not the first time this happened; it's a bit "too new" (and too expensive) for a solid setup, which is why "the beard strokers" tend to wait for the bugs to be worked out and the drivers and such to be really solid (@jgreco is religious about this).

Oh my. That seems almost too cheap and too good to be true. Don't get me wrong, I have enough computers and parts to the degree where my better half is starting to take notice - so I almost dare not buy more gear, but that looks like a sweet "upgrade" or just spare parts.

Almost makes me wanna switch out the motherboard and CPU in the current system - though the X9 platform is older.
Right??? It's about 2011 vintage which explains the price, but what a solid setup! Tiny too. I'm not telling you what to do by any stretch, the thought is if you have a solid backup system (on the cheap) it's really quick/easy to isolate problems and test equipment while your main system is still running.

Perhaps I just misunderstood - very likely - I think it was this video (LTT): This Server Deployment was HORRIBLE
Please notice he deployed a system his business depends on, it failed, THEN he considers moving off Windows Server and starts looking into compatibility... it's a "let's put the cart before the horse and see how it goes" mentality. Every server they deployed on their own failed horribly, to the point hardware vendors wouldn't sell to them unless the vendor managed the install. It makes for popular and entertaining videos--which is his core business, so don't take that as criticism because it's not; it's a thought that you might want to watch LTT videos to learn what not to do, which I believe is the point you're making. On that note, I believe the Linux developers have since worked the bugs out, so that side of things should be sorted by now.

Regarding LTT, did you see their storage usage? IN THE RED! BAD! BAD!


It is at times very confusing to me; it's sometimes a lot of names, products with similar names but different features, different products with the same features and so on. It's a lot to take in, and we were all promised Jellyfish performance and practicality from our 2003 Acer Aspire with an IDE-to-SATA converter.

Not quite, of course, but that's sometimes how it sounds when the hook/bait title promises to show one how to "Revive your old laptop" or "Make a high performance NAS from your Athlon XP 2500+ and run Docker with a gazillion VMs too."
Yes, for those living in mom's basement with not much else to spend their time and disposable income on--which is fine by the way, more power to them (mom's paying for electricity, so it's all good there). Look, you can do that with a custom-rolled Gentoo system, and if that's your hobby it is "insane" (mild exaggeration, more like "almost unbelievable") what you can do on really old hardware. For a data powerhouse like TrueNAS that's mostly an appliance (though at the datacenter level)...eh, maybe it's not the best OS to experiment with--though, the hardware guide will certainly build an incredibly solid foundation for anything you want to run. (Once it's running, then migrate to TrueNAS, because an insane system deserves an insane OS.)

[PSA: Tangent, continue at your own discretion]

And then, you google "TrueNAS", poke around in the forum and it's not long before someone suggests you cough up $1,500 for a motherboard, with 1.25 TB of DDR4 ECC RAM and 22 TB Enterprise drives, because anything less is for... you know.​

I'm not saying that's how it is. That's not how I see it, but to be honest, I get why some people might consider that some sort of gatekeeping mentality, or at least experience it as such.

We experience this in sooo many other instances (I also do photo and video, so, yeah, a lot): being promised something for free (high-end server performance and feature sets), but then suddenly having to pay for stability, ease of use and so on.
Who doesn't dream of a powerhouse system? Gamers put color-changing LEDs everywhere to show off their systems, it's natural. So yes, sometimes gamers come up with expensive gaming mainboards that are optimized for speed (though usually that doesn't go well), or there's the guy who has an industrial-strength need and buys a high-end system and we drool over it, or out of nowhere a home user puts together a system with one-freekin'-hundred SSDs which is spectacularly nuts....uh, hold that thought.... :tongue:

Those systems take the spotlight, but if you read the hardware guide Joe stresses saving as much money as possible and spending wisely, but frugally. That's the same thing I'm suggesting you do--other than the 100 SSDs--that's awesome!

I'm a guest here around these waters and hope that someday I get to show someone around or guide their journey.

Right now, I'm just lost in a sea of SSDs, overwhelmed by all the information I have to process to understand this and tangled into SFF-tentacles (that sounded weird and I have no idea where all these maritime analogies come from).
Nope, nope, no, you're one of us. The transition is imperceptible to the person going through it, but it's an awesome transformation that imbues you with super-human ability and mad electronics skills. (That's how others will see you, anyway.) It's too late, you'll never get the addiction out of your head, you're one of us.

Now Get Back To Work, we're just as hooked on this as you are! :cool:
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
The transition is imperceptible to the person going through it,

It's weird, because every now and then you inadvertently impact someone in a way that pays off in spades. I once freecycle'd an HBA at a fellow and didn't really expect much to come of it, but it seems like that may have motivated him to stick around and be a productive community participant. Or maybe the community was just already great enough to cause that outcome anyways. Who knows. Either way, this is a great place to hang out with like-minded crazies. We're always looking for more.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,176
X10 stuff is only marginally more expensive, but benefits from newer BMC firmware with the HTML5 iKVM. Plus the benefits of the newer generation, which are there, but not huge.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,151
Please notice he deployed a system his business depends on, it failed, THEN he considers moving off Windows Server and starts looking into compatibility... it's a "let's put the cart before the horse and see how it goes" mentality. Every server they deployed on their own failed horribly, to the point hardware vendors wouldn't sell to them unless the vendor managed the install. It makes for popular and entertaining videos--which is his core business, so don't take that as criticism because it's not; it's a thought that you might want to watch LTT videos to learn what not to do, which I believe is the point you're making. On that note, I believe the Linux developers have since worked the bugs out, so that side of things should be sorted by now.

Regarding LTT, did you see their storage usage? IN THE RED! BAD! BAD!
Recently they seem to have fixed things and started clustering with (I suppose) TC.
Anyway, it's pretty entertaining to watch the whole server saga, spot the issues and see the consequences.

Learning is a humbling experience, no matter who you are.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
What an absolute treat to read along with the morning coffee!
I can definitely see traces of patience and habit of taking close notes of what works and what doesn't. I bet this has a lot to do with it:

I used to do overclocking, voltmods, benchmarks, had a phase change cooler at some point too - but that's all a long time ago.

I'll chip in my thoughts.
I think it is important to identify "trying various configurations to make it work" - where things sometimes work and other times, for no obvious reason, don't - as avenues that must be marked with red flags when the intention is to build something reliable and stable.
If you just happen to land on some edge case that appears to work during parts of the testing, it is necessary to remind oneself about the red-flagged road that led there.
You've made a fantastic effort already!

At this point, I would grab the LSI SAS 9211-8i from your other rig, put one channel on each backplane. The purpose is to take the immediate suspect - that LSI megalulraid you tried to flash - out of the equation. At least to get a taste of how other parameters work out with this amount of drives. In the long run, I'd avoid cascading and SAS-like features when dealing with SATA drives. One channel per backplane, that's it, no matter the controller.
Once you've validated the system, I'd have a look at the performance "needs" and get suitable controllers.

At that stage, I'd remake a series of benchmarks - to figure out just about how many drives per channel still increase performance. I.e., when the controller caps out. Theoretically, some 6Gbps per channel on the LSI 9211, but what does that translate to after overhead, in real performance?
Maybe you can make runs with different configurations - how many vdevs/drives does it take to cap out the LSI 9211? How many can you use per backplane before you are maxing out the system? I'd suspect it might be one number for, say, sequential loads, but something different with lots more drives and a random IO focus - due to overhead shenanigans. (Again, remember it isn't clear cut or easily measured on a system with ZFS & boatloads of RAM without basically intentionally crippling the benefits of ZFS - which makes the question relevant - to what avail?)
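Something as dumb as a numjobs sweep would probably already show where the ceiling sits - reusing your flags from earlier (paths and sizes are of course yours to adjust, this is just a sketch):

Code:
# sweep the parallelism and watch where the aggregate read speed stops scaling
for jobs in 1 2 4 8 16; do
    fio --name=cap-${jobs} --directory=/mnt/CN-SSD/CrazyNAS-DS_0/fio/ \
        --bs=1M --iodepth=8 --numjobs=${jobs} --rw=read --size=1GiB \
        --direct=1 --group_reporting --ramp_time=10 --runtime=60 --time_based
done

With zpool iostat -v CN-SSD 1 running in a second shell, you'd also see whether the load spreads evenly across the vdevs or piles up on the expander-attached backplanes.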

What happens to CPU utilization when blasting through boatloads of vdevs, even at SAS2 speeds (the LSI-9211)? Does the number of drives turn out to matter in your context?

Measure power consumption to see what levels you reach. I don't think SSDs and HDDs have the same "power spike characteristics" during "load" (sorting out the obvious spin-up of HDDs), which may not matter the slightest bit with 4 drives, but make a different story with 100 drives, and a budget PSU...

Funny, I did a project sharing the essence of this build a few years back. Gathering a much more modest collection compared to yours, 8-9 SSDs, and putting them into an all-SSD server, for basically the same reasons as you :)

I however ran into various other problems (which inspired my questions above, I just realized), particularly related to CPU utilization (at some point I think I also did labs with virtualization, which also made big differences with total system constraints and various other considerations... hence the motivational speech about red flags).

I believe I also ran into a MASTODONTICAL cap on the LSI 9201-16i (IIRC similar to, if not the same chip as, the 9211, with some tweaks to make it hold double the amount of drives).

Also - this quote goes to my little hall of fame of fantastic quotes (in signature) :))
"Practicality" was out as a starter. I guess it's fair that "economical" goes through the window as well.
 

Allan_M

Explorer
Joined
Jan 26, 2016
Messages
76
Sounds like YouTube content. And unfortunately we have no control over that. It's like that TV show called "Jackass". Stupid stuff for stupid lulz. You won't find much of that around here because we tend to run that stuff out of the forums on a rail.

That seems on point.

I've never seen that in these forums except in the context of very high performance systems, or discussion of the enterprise systems that iXsystems sells as TrueNAS Enterprise. Most of the forum userbase are hobbyists or SOHO users, with the occasional IT guy charged with trying to build an alternative to TrueNAS Enterprise at a bargain price. A few of us do this professionally, but I don't see a lot of discussion of those systems.

Well. I didn't mean it as literally as that. My point was that someone who knows little to nothing about hardware might find a gap between "I wanna use my laptop as a server" ($0) and "might we suggest you invest a little?" (>$0). I'm not saying it's my impression that people around here are unfair in their suggestions or are gatekeepers - I'm very sorry if it came off as that.

The name "FreeNAS" never meant that there were no costs involved. It meant that iXsystems would allow you to use their commercially developed enterprise quality software platform for free.

I think that's perhaps where some of the discretion (I don't even know if that's the right word) comes from.

When I built my first server, I used a desktop motherboard, an i7-2600S, with 8 GB of RAM and four laptop hard drives - just to test things out. Get a feel for FreeNAS and see if that was an avenue I could follow.
At the time we had a QNAP 4-bay NAS and while it did serve us well, the 4 TB capacity and slow SMB performance was beginning to bog us down.

I wanted to see if I could build something myself and came across FreeNAS. I think it was right before version 11, because one of the very first things I did with the finished system was the update from 10 to 11.



This is the temp-system, still rocking the laptop hard drives but this time with 6 onboard SATA-connectors.

Finished the build in February/March '16 I think, and a year later we moved from the apartment into our house.


This is the old TrueNAS box - specs are in my sig.

Well, there are people here happy to answer questions.

This matches my experience as well. It's nice to see people on a "specialist forum" who actually help out. As some of you might know, that isn't always the case on other forums.

That's not the first time this happened; it's a bit "too new" (and too expensive) for a solid setup, which is why "the beard strokers" tend to wait for the bugs to be worked out and the drivers and such to be really solid (@jgreco is religious about this).

Well. It does make sense to have a kinda "if it ain't broke" and somewhat conservative persuasion when it comes to stuff like this. While it is fun to tinker, it isn't fun when suddenly your data is gone - because "RAID is not a backup!" - and I chose to pick up whatever hardware other people (who know a lot more about this than me) can't legitimately get behind or support (for incompatibility reasons or otherwise).

Right??? It's about 2011 vintage which explains the price, but what a solid setup! Tiny too. I'm not telling you what to do by any stretch, the thought is if you have a solid backup system (on the cheap) it's really quick/easy to isolate problems and test equipment while your main system is still running.

Well. I might go in that direction, but as someone else points out, that HTML5 iKVM is awesome. Right now, if the TrueNAS is borked or something, I can't even begin to explain all the hoops I have to go through because of the SuperMicro IPMI Java thingie.

It makes for popular and entertaining videos--which is his core business,

Trust me, I also mainly take it as being just that. I imagine I would feel the same, if he took vaseline to the front lens element of a cinema lens on a RED camera and ranted about it not being "pro" enough in a dark room, compared to a point and shoot under direct sunlight.

I just don't know enough about servers or production environments to know when he is doing the equivalent of that.

And no (for anyone else reading): I'm not implying Linus (or any of his writers) would even suggest that, but I also wouldn't put it past him to do just that for the lulz... and to be honest, I would watch - for the entertainment. Not gonna lie.

(Once it's running, then migrate to TrueNAS, because an insane system deserves an insane OS.)

TrueNAS (and FreeNAS before that) was/is an incredible OS. Once you get through potential hardware issues, the "appliance"-thing does, well, apply. It has very much been "set and forget" for a large portion of the time for the current server - but that was also the mindset going in.

Not to be a "binary" person, but sometimes I see it as either:
  1. Cheap/easy/accessible setup but continuous maintenance (updating, checking in, what-evs)
  2. Hard to set up / complicated, but after that: minding its own business
Choose one.

or there's the guy who has an industrial-strength need and buys a high-end system and we drool over it, or out of nowhere a home user puts together a system with one-freekin'-hundred SSDs which is spectacularly nuts....uh, hold that thought.... :tongue:

This one is possibly going over my head. Are you implying something? :cool:

Those systems take the spotlight, but if you read the hardware guide Joe stresses saving as much money as possible and spending wisely, but frugally. That's the same thing I'm suggesting you do--other than the 100 SSDs--that's awesome!

To be honest, I was never going to save as much as possible - but I don't like throwing money out the window. If it is bonkers, but in any shape or form usable or practical - I'm in!

But. I imagine, even if I did stray a little and - for instance - got the wrong HBA for this project, I can still use that HBA for other projects. It's not broken or anything.

Nope, nope, no, you're one of us. The transition is imperceptible to the person going through it, but it's an awesome transformation that imbues you with super-human ability and mad electronics skills. (That's how others will see you, anyway.) It's too late, you'll never get the addiction out of your head, you're one of us.

Now Get Back To Work, we're just as hooked on this as you are! :cool:

Well. I live in Denmark and, as I have implied earlier, I'm a teacher. Our students had their finals this week and I'm not in any hurry. This is very much a side project (among a somewhat embarrassingly long list of work-in-progress ideas). Other than that?

I'm on it, Sir! :cool:

It's weird, because every now and then you inadvertently impact someone in a way that pays off in spades.

I don't know if you're pointing to an in-joke or something along those lines, just wanted to say that I know what you mean.

A friend of mine is doing wildlife photography and he is very good at it. But as with many other things in life; sometimes you just want to shake it up a little. He wrote me on Messenger and aired his frustrations with needing new ideas. I had just recently gotten into astrophotography and showed him some images I had captured.

He was in, and two weeks later we're both standing on a hill in the middle of nowhere, sipping beers and shooting stars.

You never know, what branch or twig, someone else hangs on to and it might take you by surprise.

As a teacher I sometimes experience a student - usually one of the quiet kids, who draws - that picks up on something and the next day, they show you something amazing they did. Drawings, DIY bluetooth speakers or that movie they saw, that they'll think you're also into. I love it!

X10 stuff is only marginally more expensive, but benefits from newer BMC firmware with the HTML5 iKVM. Plus the benefits of the newer generation, which are there, but not huge.

Both my servers are "X10"s, but have different IPMI solutions? How do I know which to get if I follow WI_Hedgehog's advice and look into "last gen" stuff? Is that HTML5 iKVM thing even possible on older boards?

Learning is a humbling experience, no matter who you are.

Well. Some of us take pride in liking to learn new stuff. Others, not so much - but they're very unlikely to be on these forums anyway, eh? :wink:

What an absolute treat to read along with the morning coffee!
I can definitely see traces of patience and habit of taking close notes of what works and what doesn't. I bet this has a lot to do with it:

Perhaps. Sometime, somewhere, I read: "The only difference between screwing around and science is writing it down." - something along those lines. Imagine my delight when I found out who said it on camera. Anywho.

I've always been into doing that. I'm not very organized - shocker, I know - but I like to be systematic about things like this. Makes work much more efficient and helps to reveal patterns or point one in the right / other directions when you've hit a roadblock.

It is quite possible it harks back to the days when I did overclocking - but I think I remember being into it even before that. But you're probably right that doing overclocking helped cement this very approach.

You've made a fantastic effort already!

Thanks. But a lot of it is also because of you guys helping me out - no doubt about it :smile:

At this point, I would grab the LSI SAS 9211-8i from your other rig, put one channel on each backplane.

I'd very much like to not do that. The 9211's are sitting in a 'live' system and I dare not anger the "IT" gods by yanking hardware from a working system. You know: if it ain't broke. But I get what you're trying to have me do, and I have considered it.

Right now, I'm more inclined to invest in another HBA - possibly that 9300-16i thing - but I'll have to do some more research.

At that stage, I'd remake a series of benchmarks ... which may not matter the slightest bit with 4 drives, but make a different story with 100 drives, and a budget PSU...

That's my plan as well. Originally I had planned it as such:

Fill / populate one backplane with SSDs, create a stripe, Rz1 or Rz2 vdev, and create a pool.
Fill another backplane, create another stripe/Rz1/Rz2 vdev, add it to the pool.
And so on.

In that way, I'd be able to keep adding vdevs to the pool.
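Command-line wise, I suppose that plan boils down to something like this - made-up device names, and in practice I'd do it through the TrueNAS UI so the pool gets created with the blessed options:

Code:
# first backplane becomes the pool's first RAIDZ2 vdev (device names hypothetical)
zpool create CN-SSD raidz2 da0 da1 da2 da3 da4 da5 da6 da7
# each further backplane becomes another RAIDZ2 vdev appended to the same pool
zpool add CN-SSD raidz2 da8 da9 da10 da11 da12 da13 da14 da15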

I'm not that concerned with redundancy for this project. It's nowhere near mission critical, and I honestly had problems coming up with ideas for what I could do with the CrazyNAS - not unlike Linus' line in that video I linked to earlier: coming up with crazy ideas.

4c/8t do limit it in certain scenarios. But 128 GB RAM and 14TB combined (RAW) SSD storage capacity must have some applications.

... scratch disk for Adobe Premiere Pro? :tongue:

I have no idea. If storage capacity was a concern, I'd have sprung for some 4 TB SATA SSDs a long time ago. Currently they're priced at around $240-$300.

Just two of those already hold more capacity than the entire CrazyNAS currently does (just over 6 TB, with 58 drives).

So. If not for the storage capacity, then for the performance?

Well. Right now, it seems the old TrueNAS box is perhaps just as fast as the CrazyNAS - and I have no idea why. If it was RAM speed I was testing (I was using fio), the newer server should still be faster, right?
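One thing I should probably rule out first: from what I've read, fio's --direct=1 doesn't necessarily bypass the ARC on ZFS, so both boxes may partly have been benchmarking their RAM. If I understand the OpenZFS docs right (my assumption, corrections welcome), the test dataset can be told to keep file data out of the ARC for the next run:

Code:
zfs set primarycache=metadata CN-SSD/CrazyNAS-DS_0  # cache only metadata while benchmarking
zfs inherit primarycache CN-SSD/CrazyNAS-DS_0       # revert to the default afterwards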

Funny, I did a project sharing the essence of this build a few years back. Gathering a much more modest collection compared to yours, 8-9 SSDs, and putting them into an all-SSD server, for basically the same reasons as you :)

Well. Let's see where I end up. We might both learn something then. You're more than welcome to inject / suggest ideas for use cases or benchmarks - and no, you don't have to justify them, other than with what you'd have done with 100+ SATA SSDs in a server :cool:

Also - this quote goes to my little hall of fame of fantastic quotes (in signature) :))

I'm starting to think that I should've started collecting quotes from the beginning of this project :tongue:
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,176
Both my servers are "X10"s, but have different IPMI solutions?
That doesn't sound right. All X10 boards I know use the same ASpeed 2400 with AMI MegaRAC plus Supermicro skin solution, and all X10s (including the Avoton/Rangeley series A1 boards) have the same BMC firmware.

Could it be that some of them are just not on the latest firmware version?
 