Huge capacity and cost effective NAS for iSCSI

Celizior

Cadet
Joined
Dec 23, 2022
Messages
4
Hey guys,

I'm planning to build a TrueNAS server with huge capacity that is very cost effective. Its main job will be to provide iSCSI LUNs (probably with multipath) to some Hyper-V hosts for various VMs: Storj, user data, computer backups for my family, and other such things.

I would like to show you which components I want to buy, and to find out if there are a few things I should know, based on your experience that I don't have.

Case : Inter-Tech 4U-4724
Motherboard : Supermicro X10SRL-F (refurbished, but 600€ from AliExpress)
CPU : between a Xeon E5-2640 v3 (90W/35€) and a Xeon E5-2690 v3 (135W/80€) (refurbished from AliExpress)
RAM : Samsung 8x32GB DDR4 REG ECC 2133 MHz (refurbished from AliExpress); these CPUs don't support higher speeds
SSD : 2 SSDs for the boot drives; I haven't chosen them yet, but something simple like SATA, connected to the motherboard SATA ports
HBA + HDD : LSI 9305-24i for 24 Toshiba 16 TB MG/Enterprise drives, split into 3 different RAIDZ2 vdevs
NIC : 10Gtek X520-DA1 (1x SFP+) or X520-DA2 (2x SFP+); I don't think keeping 1Gbps is a good idea for such a storage size
GC : doesn't seem required as the motherboard has a graphics chipset
PSU : Inter-Tech ASPOWER R2A-MV0700 (redundant, capable of 30A on 5V and 58A on 12V)

I will only see the server physically about once a month, as I work far away. So it's important to me that in case of trouble the server can keep running for several weeks. That's why I chose RAIDZ2, a redundant PSU, and a motherboard with IPMI.

I had a look at Storinator components (thanks LTT), but the least expensive CPU is the Xeon Bronze 3204, which I found at $220, but its performance is quite a joke (4843 on PassMark vs 11326 for the E5-2640 v3 with nearly the same TDP). Like the other components, I'm open to any other CPU/motherboard as long as it has IPMI and is Intel based. I plan to use as many components in common with my Hyper-V hosts as possible, to simplify maintenance.

Regards,
Celizior
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Case : Inter-Tech 4U-4724

That's crap. Morons used 120mm fans on the main fan wall, which will struggle and burn out eventually, and it would be vastly preferable to use an SAS expander backplane based system for 24 drives. Is there some reason you skipped the Supermicro CSE-846BE1C and an LSI 9300-8i?

HBA + HDD : LSI 9305-24i for 24 Toshiba 16 TB MG/Enterprise drives, split into 3 different RAIDZ2 vdevs

That's not going to end in a happy way.


refurbished from AliExpress

Those words send a chill down my spine.


NIC : 10Gtek X520-DA1 (1x SFP+) or X520-DA2 (2x SFP+),

This appears to possibly be a knock-off card. Suggest avoiding it.

SSD : 2 SSDs for the boot drives; I haven't chosen them yet, but something simple like SATA, connected to the motherboard SATA ports

Since you mention

I will only see the server physically about once a month, as I work far away. So it's important to me that in case of trouble the server can keep running for several weeks.

I would point out that using dual SSDs for boot doesn't get you redundant boot. You can consider something like


If you are expecting to have to visit your server once a month, you're doing it all wrong. I haven't visited our primary data center in six years, and our newest one in at least two. Part of it is accomplished by making sure you use high quality hardware, which is part of why I would advise against that poorly designed Inter-Tech chassis. The Supermicros come with industrial grade fans that should have a very long lifespan. Providing in-pool spare drives is also a good idea, because with iSCSI, you do NOT want to be using RAIDZ, but rather mirrors. It gets expensive to run three-way mirrors, so two-way mirrors with several spares is a good alternative.
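To put rough numbers on the capacity trade-off (this is a sketch using raw drive capacity only; it ignores ZFS overhead and the generous free-space headroom that block storage wants):

```python
DRIVES = 24
SIZE_TB = 16  # raw capacity per drive

# Option A: 3 x 8-wide RAIDZ2 (the original plan)
raidz2_usable = 3 * (8 - 2) * SIZE_TB  # 2 parity drives per vdev

# Option B: 2-way mirrors with in-pool spares
spares = 2
mirror_usable = ((DRIVES - spares) // 2) * SIZE_TB  # 11 mirror vdevs

print(f"3 x RAIDZ2 (8-wide): {raidz2_usable} TB usable")                 # 288 TB
print(f"11 x 2-way mirror + {spares} spares: {mirror_usable} TB usable")  # 176 TB
```

Mirrors give up a lot of raw space, but for iSCSI block storage the IOPS and resilver-time benefits are what matter, and common guidance is to keep occupancy low (roughly 50%) on block-storage pools regardless of layout.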
 

Celizior

Cadet
Joined
Dec 23, 2022
Messages
4
Hey,

Thanks for your comment

Is there some reason you skipped the Supermicro CSE-846BE1C and an LSI 9300-8i?
I didn't know about this enclosure, but it's like 1800€, and the Inter-Tech (case + PSU) costs 900€, which is the price of more or less two weeks of my time.
And I expect to be able to fix any problem in less than two weeks (especially one coming from the case or PSU).

LSI 9300-8i cards would consume 3 PCIe slots for 24 disks. I have room on this motherboard (2 PCIe x16 + 5 PCIe x8).
There is a graphics chipset and one NIC, so 6 PCIe slots would still be free... what would be the benefit of the 8i over the 24i, except PCIe bandwidth?
Is it really that important to separate the 3 RAIDZ2 vdevs onto 3 SAS cards, considering I won't use any hardware RAID features?

Those words send a chill down my spine.
I'm used to living dangerously xD
Money is maybe more important in my scenario than reliability. IPMI is maybe a luxury here, but it's also very convenient, as I can debug things from my bed 150 km away from the NAS

This appears to possibly be a knock-off card. Suggest avoiding it.
Good to know, I'll keep that in mind

Is there no boot partition check while the server is up?
I remember a Windows server at work that wouldn't reboot because the boot partition was broken with RAID 1 (so the boot was broken on both disks)
That's the kind of thing I want IPMI for xD
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I didn't know about this enclosure, but it's like 1800€, and the Inter-Tech (case + PSU) costs 900€, which is the price of more or less two weeks of my time.

Variations on the CSE846 are usually available on the used market for a fraction of the new retail price you appear to be quoting. I'd say about half the price of your Inter-Tech.

LSI 9300-8i cards would consume 3 PCIe slots for 24 disks.

With the chassis I recommended, it would only need one slot, and actually only half the card (one SFF8643) at that. You do need an SAS expander backplane chassis for that to be possible though.

what would be the benefit of the 8i over the 24i, except PCIe bandwidth?

Cost. Because you can use the 8i to drive many hard drives using an SAS expander, the only reason I can think of to use a 24i is if you were going all SSD or something like that. With an SFF8643, you get 48Gbps of bandwidth to the backplane or 96Gbps if you go 2xSFF8643 wideport. If you only have 24 drives, that works out to 2-4Gbps of bandwidth per drive, which exceeds what the drives are capable of.
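A quick sanity check of that math (one SFF8643 carries 4 lanes of 12Gbps SAS3; these are line rates, ignoring encoding and expander overhead):

```python
SAS3_LANE_GBPS = 12    # SAS3 line rate per lane
LANES_PER_SFF8643 = 4  # lanes per connector
DRIVES = 24

for ports in (1, 2):  # single SFF8643 vs 2x SFF8643 wideport
    backplane_gbps = ports * LANES_PER_SFF8643 * SAS3_LANE_GBPS
    per_drive = backplane_gbps / DRIVES
    print(f"{ports} x SFF8643: {backplane_gbps} Gbps to backplane, "
          f"{per_drive:.0f} Gbps per drive")  # 48 / 2, then 96 / 4
```

For scale, a modern 16 TB spinner tops out around 250 MB/s (~2 Gbps) on sequential reads and far less on random I/O, so even the single-port figure is not the bottleneck for 24 platters.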

Is it really that important to separate the 3 RAIDZ2 vdevs onto 3 SAS cards,

No, you can attach them however it makes sense to you to do so. I was simply advocating for a simpler, less expensive, higher-quality option than the Inter-Tech.

Is there no boot partition check while the server is up?

If you are asking if there is a mechanism to verify the boot partition, no, there isn't. ZFS is responsible for maintaining the integrity of the system boot partition. The problem is that your average PC is as dumb as a rock and may not take advantage of a redundant partition in the event of problems.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
I'm planning to build a TrueNAS server with huge capacity that is very cost effective.
Those are two strongly conflicting goals, at least in the sense that most folks understand them, because many people say "cost effective" when they actually mean cheap, or below a certain amount of money. So what exactly do you mean by huge capacity, and what by cost effective?
Its main job will be to provide iSCSI LUNs (probably with multipath) to some Hyper-V hosts for various VMs: Storj, user data, computer backups for my family, and other such things.
While those details rule out a lot of scenarios, they are far from specific enough for a quantitative verdict on your setup. How many VMs, what number of transactions per second, etc.? I know this is not an easy question to answer, but the more information you can provide, the better a response you will get.

I would like to show you which components I want to buy, and to find out if there are a few things I should know, based on your experience that I don't have.
What was your thinking process to determine that those are suitable components? I ask not to question you, but to better understand where you are coming from.
Motherboard : Supermicro X10SRL-F (refurbished, but 600€ from AliExpress)
That looks totally overpriced to me. And I would certainly not buy something like this from AliExpress, unless it is a national seller that dismantles used data center gear.
CPU : between a Xeon E5-2640 v3 (90W/35€) and a Xeon E5-2690 v3 (135W/80€) (refurbished from AliExpress)
The TDP is pretty much irrelevant for overall power consumption. Just FYI.
HBA + HDD : LSI 9305-24i for 24 Toshiba 16 TB MG/Enterprise drives, split into 3 different RAIDZ2 vdevs
Without a proper backplane this will be cabling hell. Does your PSU supply an adequate number of connectors? Splitters come with a number of issues, so you need to be careful there.
NIC : 10Gtek X520-DA1 (1x SFP+) or X520-DA2 (2x SFP+); I don't think keeping 1Gbps is a good idea for such a storage size
The NIC speed has zero correlation with the storage size. So, to be blunt, this is just a wrong idea. The required speed is purely determined by your workload and its requirements.
I will only see the server physically about once a month, as I work far away. So it's important to me that in case of trouble the server can keep running for several weeks. That's why I chose RAIDZ2, a redundant PSU, and a motherboard with IPMI.
While those factors will help, depending on the exact requirements/expectations they may not be sufficient.
I had a look at Storinator components (thanks LTT)
If LTT is anything, then, at least in my opinion, they are not qualified to say anything about TrueNAS. The videos I have seen simply contain too much wrong information. After all, they are an entertainment channel.
but the least expensive CPU is the Xeon Bronze 3204, which I found at $220, but its performance is quite a joke (4843 on PassMark vs 11326 for the E5-2640 v3 with nearly the same TDP).
Without having looked at the details: a NAS is usually not limited by CPU performance but by IOPS.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
This appears to possibly be a knock-off card. Suggest avoiding it.
I have some 10Gtek stuff (DACs mostly); they shouldn't be a complete knockoff, although I would probably never buy a NIC from them, instead just getting used Intel (assuming I can find one that doesn't appear to be fake, bastards). I would trust them more than a good chunk of the iffy eBay sellers.

a (used) Supermicro chassis is WELL worth it, although a 4U often has brutal shipping costs, depending on where you are (Canada, ugh). an expander backplane is a joy to wire up; the rat's nest your build implies is just terrifying.
the specific chassis @jgreco recommends is SAS3, which is more than you would need for platters; however, platters and VMs might not be what you want. SAS2 24- and 36-bay chassis might be better price points.
 

Celizior

Cadet
Joined
Dec 23, 2022
Messages
4
Does anybody know a website named lambda-tek.com?

I don't know them, but they have an X10SRL-F at 400€ tax included, along with several suitable CPU/RAM options, shipped from the EU.
They have some good reviews on Trustpilot

So what exactly do you mean by huge capacity, and what by cost effective?
Huge capacity means 24x16 TB (but starting with 8x16 TB), and cost effective means a total of 2000-2500€ without disks

How many VMs, what number of transactions per second, etc?
It should be something like 20-30 VMs, no databases, a few VMs as file servers for desktop users, a lot of backups. I don't expect to deal with a lot of IO, just something a little more powerful than a Synology

What was your thinking process to determine that those are suitable components?
If it's a standard component, it's easy to replace with something else. The motherboard died? I want to be able to replace it with another one from another manufacturer. Specific stuff like a stupid form factor in a proprietary case is a red flag to me

Without a proper backplane this will be cabling hell. Does your PSU supply an adequate number of connectors? Splitters come with a number of issues, so you need to be careful there.
The case includes backplanes with 4 SAS connectors (so no splitters needed) and Molex power (6 connectors on the PSU and 6 on the backplane) + 2 SATA

While those factors will help, depending on the exact requirements/expectations they may not be sufficient.
I won't use it as a professional NAS with employees or anything; if it's down for one week, it's down for one week

Maybe I shouldn't have mentioned LTT xD
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
400 bucks sounds expensive for an X10SRL-F. That's not far off what it would've cost new when it released in 2014ish.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
400 bucks sounds expensive for an X10SRL-F
I was thinking the same, but the European markets are very different sometimes
The case includes backplanes with 4 SAS connectors (so no splitters needed) and Molex power (6 connectors on the PSU and 6 on the backplane) + 2 SATA
these numbers do not add up. 4 mini-SAS connectors is 16 drives, not 24. a single mini-SAS connector is 4 lanes.
this connection will completely waste about 90% of your controller. it will work, but it's very far beyond just overkill.
(says the guy using the exact same HBA for 12x 3.5" platters + 24x 2.5" drives)
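a quick sketch of that lane arithmetic (assumes direct-attach with no expander, so one drive per lane):

```python
LANES_PER_MINISAS = 4  # SFF-8087/SFF-8643 each carry 4 SAS lanes

def direct_attach_drives(connectors: int) -> int:
    # without an expander, each lane drives exactly one disk
    return connectors * LANES_PER_MINISAS

print(direct_attach_drives(4))  # 16 drives, not 24
print(direct_attach_drives(6))  # 24 drives (a 24i-class HBA has 6 connectors)
```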

part of why I know some of this is that I was looking at the Norco RPC-4224 4U 24-bay chassis (which no longer exists) many years ago.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I was thinking the same, but the European markets are very different sometimes
We can find used X11SSL-F boards for less than 400 euros on eBay (I paid 340€ for mine 6 months ago, IIRC). Better mobos scale proportionally from there.
Heck, X11SSH-F now go for 300!
 
Last edited:

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I've seen several vendors offering X11SSL-F units for around 150€. The X11SSM-F and X11SSH-F are rare, but I wouldn't expect a huge premium for either.

Yes, I've seen many crazy overpriced motherboards on eBay, but they're always Chinese sellers. I'm not exactly sure what they've been drinking in China, but they should cut it down, because they're routinely asking for 25+% more than European distributors' prices (new vs. new, apples to apples) - plus the hassle of shipping from China and customs and stuff, plus their dubious reputation. Crazy stories like A2SDi-4C-LN4Fs from China on eBay for over 500€, when local distributors are all charging ~400€.
 

Celizior

Cadet
Joined
Dec 23, 2022
Messages
4
these numbers do not add up. 4 mini-SAS connectors is 16 drives, not 24. a single mini-SAS connector is 4 lanes.
My mistake, it's an LSI 9305-24i, which has 6 SAS ports, and the backplane also has 6 SAS ports

I'm still going to search for a better price for this X10SRL-F

I just had a look at the X11SSL-F, X11SSM-F and X11SSH-F on lambda-tek.fr; they are around 200€, but they support only ECC UDIMM, which is much more expensive than ECC RDIMM.
I also looked at the other features of the X10SRL-F; they can explain the price (256 GB of ECC RDIMM vs 64 GB of ECC UDIMM, CPUs with up to 22 physical cores vs 4, more PCIe), but they are overkill for my needs.

Still on lambda-tek.fr, I found:
  • DDR4 ECC UDIMM 16 GB 2400MHz : 146€
  • DDR4 ECC RDIMM 16 GB 2400MHz : 50€
 
Last edited: