DIY all flash/SSD NAS - not going for practicality

Allan_M

Explorer
Joined
Jan 26, 2016
Messages
76
Hi everybody

First of all: I'm not going for practicality - at all. Just to get that out there.

With that out of the way:

I recently acquired a bunch of 120-128 GB SSDs. Around 40 pcs to be exact - a mix of Samsung, SanDisk and a few Kingston. Mostly Samsung - with the possibility of more (and some higher capacity) on the way. These are "decommissioned" disks from a bunch of educational laptops (mostly Lenovos, I think). Since they came with BitLocker enabled, they have been wiped (zeroed) and are ready for use. What to do?

First thing that came to mind: what about a bonkers setup for ZFS? A rough estimate puts usable space around 4 TB. Considering that a 4 TB Samsung SSD can be acquired for around $350-400 (2,230 DKK), there's absolutely no economic reasoning behind this idea.
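For what it's worth, the napkin math behind that estimate looks roughly like this (a quick sketch only; the 10-wide raidz2 layout is just a placeholder to make the numbers concrete, and real ZFS overhead will shave a bit more off):

Code:
# Napkin math for the ~4 TB estimate; the raidz2 layout is an assumption, not a decision.
def usable_tb(drives, size_gb, width, parity):
    """Rough raidz usable capacity: whole vdevs only, ignoring ZFS metadata/slop overhead."""
    vdevs = drives // width
    data_disks = width - parity
    return vdevs * data_disks * size_gb / 1000  # decimal GB -> TB

print(usable_tb(40, 120, 10, 2))  # 3.84 (TB), i.e. "around 4 TB" before overhead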

So.

I began to look at server chassis, disk arrays, JBODs, and got lost.

My idea at the moment hovers around a wall-mounted NAS with plexiglass to show off the device.

I've considered aiming for a standard ATX form-factor PSU, a server-ish motherboard with desktop cooling, two 24-bay SAS backplanes, some HBAs and built-in or add-on 10 GbE.

I have no idea what parts go well together, what the CPU requirements are when it's all flash, or how much memory is practical.

A fun idea that came to mind was an iSCSI volume for video editing and other stupid stuff. But if it ends up costing four or five figures, I'm reconsidering the idea.

Napkin math (quick search on eBay):
2x BPN-SAS-216A $70/pc = $140
12x SFF-8087 cable $10/pc = $120
3x 16i HBA $270/pc = $810
or
6x 8i HBA $50/pc = $300

Total for just connecting the SSDs to something: roughly $560-$1,070
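Or, adding it up (same numbers as above, just the arithmetic spelled out):

Code:
backplanes = 2 * 70     # 2x BPN-SAS-216A
cables     = 12 * 10    # 12x SFF-8087
hba_16i    = 3 * 270    # 3x 16i HBAs
hba_8i     = 6 * 50     # 6x 8i HBAs
print(backplanes + cables + hba_16i)  # 1070 (16i route)
print(backplanes + cables + hba_8i)   # 560  (8i route)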

Still need a power supply, motherboard, CPU (E5 for the lanes?), RAM and possibly a 10 GbE NIC (Intel/Mellanox).

Perhaps I'm overlooking something obvious (buying a used Supermicro/Dell/whatever), but so far I've only managed to find really expensive dual-CPU setups, noisy setups, very space-consuming setups, something exotic I'm not qualified to make sense of, or a combination of the above.

Am I off the deep end here, or what?

What do you think? I'm not expecting you to find all the parts and do all the work for me, but perhaps you could just point me in some direction.

I'm not afraid of doing some DIY'ing, soldering, rerunning cables and whatnot.

TL;DR - Have a bunch of SSDs. Any ideas?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Okay, so if you wanted to build something for the wall, you probably want to reduce cabling. I am *specifically* aiming this at "wall mounted with plexiglass" because that grabbed my attention enough to think about this for a few moments. ;-) I *love* the idea. Skip the case.

Use of SAS expanders seems like a good idea. This reduces the maximum throughput you can expect to see, but a 4 lane SFF-8087 is 4x6Gbps = 24Gbps, and two of them would be 48Gbps, so that probably exceeds what you'd have for network by a good bit. I'm not sure it matters.
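Back-of-the-envelope, ignoring 8b/10b encoding and protocol overhead (so treat these as theoretical upper bounds):

Code:
# Rough link-speed comparison; ignores encoding and protocol overhead.
sas2_lane_gbps = 6
lanes_per_8087 = 4
uplink_gbps = sas2_lane_gbps * lanes_per_8087   # 24 Gbps per SFF-8087
print(uplink_gbps, 2 * uplink_gbps)             # 24 48 -- vs. 10 Gbps for a 10GbE NIC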

If you could find some backplanes, you could screw them with right angle brackets to wood or whatever other backing you have on the wall, so that you could "drop" SSD's into the slots like pieces of toast into a toaster.

A Supermicro BPN-SAS2-216EB is $75 used on eBay and handles 24 drives. Get two. Take an SFF8087 from each one and hook that up to an LSI 2308 HBA, or possibly a 3008 if you wanted a bit better performance (but you'd need 8643-8087 cabling). Cables are $25/ea, HBA is $35. I think I've just attached your 48 drives for $235, minimal cabling. But you have to figure out power.

Try to find a later gen E3 board that can take 64GB, or maybe something like an X10SRL that can handle an E5-1650v3 or better (best CPU for the job IMO).
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
You have a supply of boot drives for the rest of your life—and some reincarnations thereafter! :grin:

With SAS backplanes and expanders, you could run everything out a single SAS cable, so no need for all the PCIe lanes of a Xeon E5. Still, the setup may end up costing more than just 3x2 TB in raidz1 for the same 4 TB capacity, and I don't even dare to think about the cost of a custom wall-mounted Plexiglas™ case, unless you can do it yourself.
Definitely no practicality.

On the other hand, if you do it, for the fun and/or for the glory of Scandinavian design, I want to know beforehand about the art fair where it will be exhibited! (And then a "Xeon Platinum" as CPU should be the perfect match for the wallet of the prospective buyer…)
 

Allan_M

Explorer
Joined
Jan 26, 2016
Messages
76
Use of SAS expanders seems like a good idea. This reduces the maximum throughput you can expect to see, but a 4 lane SFF-8087 is 4x6Gbps = 24Gbps, and two of them would be 48Gbps, so that probably exceeds what you'd have for network by a good bit. I'm not sure it matters.

A Supermicro BPN-SAS2-216EB is $75 used on eBay and handles 24 drives. Get two. Take an SFF8087 from each one and hook that up to an LSI 2308 HBA, or possibly a 3008 if you wanted a bit better performance (but you'd need 8643-8087 cabling). Cables are $25/ea, HBA is $35. I think I've just attached your 48 drives for $235, minimal cabling.

Just to be sure I'm understanding this: are you saying that I can run a single 8087 cable from each backplane and connect them both to one HBA? Or are the expanders assumed in this scenario?

I'll be researching the E5-1650v3 route to be sure, but that suggests going with DDR4, right?

Of course I'll provide images of the finished project and thanks a lot for your input - it's very much appreciated :smile:


With SAS backplanes and expanders, you could run everything out a single SAS cable, so no need for all the PCIe lanes of a Xeon E5.

On the other hand, if you do it, for the fun and/or for the glory of Scandinavian design, I want to know beforehand about the art fair where it will be exhibited! (And then a "Xeon Platinum" as CPU should be the perfect match for the wallet of the prospective buyer…)

As far as I understand, an expander will break out four lanes/cables from one host connector. But it seems you know of a device/expander that can handle 24 devices/connectors?

The art fair will be the wall in my small home office. The design is not intended for sale - I think :tongue:

Also, very much thank you to you too :smile:
 

Herr_Merlin

Patron
Joined
Oct 25, 2019
Messages
200
Well, if you go down this route, please go for insanity.
Add some Optane for metadata, some NV devices or Optane memory for SLOG. Oh, and Optane memory for L2ARC and a metadata vdev, please.
I've got a few Xeon Platinums for sale... add like 1 TB of Optane memory as well as 512 GB of RAM.
Just go insane.

Otherwise, go sane: if it's wall-mounted, I would go for 3 or 4 brand-new 2 TB SSDs and some cheap old Xeon/Epyc CPU with real ECC support, and be done for the day.
 
Last edited:

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Just to be sure I'm understanding this: are you saying that I can run a single 8087 cable from each backplane and connect them both to one HBA? Or are the expanders assumed in this scenario?

I'll be researching the E5-1650v3 route to be sure, but that suggests going with DDR4, right?

Right. You could go with something older, which could be less expensive.

As far as I understand, an expander will break out four lanes/cables from one host connector. But it seems you know of a device/expander that can handle 24 devices/connectors?

The backplanes I indicated are 24-bay 2U backplanes from a Supermicro chassis. An SAS expander is the SAS equivalent of an ethernet switch, which allows you to connect many SAS/SATA devices through a single cable... in the same way that an ethernet switch lets you reach many devices on your network without having to run a cable from your PC to each device. This is how you can get a rack full of drives attached to a server where you would otherwise run out of HBA capacity.

Specifically thinking something like:

https://www.ebay.com/itm/402649317508

The big long board is the backplane proper, and has power and data connectors on one side. Face that "down". The other side will have 24 SATA/SAS connectors. Face that "up" and slot in your SSD's.

The smaller board piggybacked on the backplane is an SAS expander designed for that backplane. Some models have two of these; you only need one. On the SAS expander, you will see three silver SFF-8087 connectors. These are used to attach to the HBA on the host. You only need to connect one to the host. You can also daisy-chain. The SAS Primer talks about some of this.

You could theoretically use a single LSI -4i HBA that connects a single SFF-8087 to your first SAS expander, and then another cable from the first SAS expander to the second. This is not optimal but it would work fine.

Your better option is to use a single LSI -8i HBA that connects a SFF-8087 direct to each SAS expander.

The HBA itself does act as a choke point, and SAS expanders add mild latency too, and if we were going for ultimate crazy and expense was not a consideration, this is not the absolute most performant solution. However, it is VERY reasonable for your little crazyNAS experiment IMHO.
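To put a very rough number on that choke point (theoretical link rates only; real-world throughput will be lower):

Code:
# Per-drive share of a single SFF-8087 uplink behind one expander, all 24 SSDs streaming at once.
lane_gbps = 6
uplink    = 4 * lane_gbps            # 24 Gbps to the HBA
per_ssd   = uplink / 24 / 8 * 1000   # Gbps -> MB/s, split evenly
print(round(per_ssd))                # ~125 MB/s per SSD -- the 10GbE network is still the tighter limit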
 

Allan_M

Explorer
Joined
Jan 26, 2016
Messages
76
Well, if you go down this route, please go for insanity.
Add some Optane for metadata, some NV devices or Optane memory for SLOG. Oh, and Optane memory for L2ARC and a metadata vdev, please.
I've got a few Xeon Platinums for sale... add like 1 TB of Optane memory as well as 512 GB of RAM.
Just go insane.

Otherwise, if it's wall-mounted, I would go for 3 or 4 brand-new 2 TB SSDs, some Xeon/Epyc CPU with ECC, and be done for the day.

Love how you went from the Optane/Platinum/half-a-terabyte-of-memory to "3 or 4 2 TB SSDs + Epyc" as the sensible "haha, but for real..." :tongue:

The smaller board piggybacked on the backplane is an SAS expander designed for that backplane. Some models have two of these; you only need one. On the SAS expander, you will see three silver SFF-8087 connectors. These are used to attach to the HBA on the host. You only need to connect one to the host. You can also daisy-chain. The SAS Primer talks about some of this.

Your better option is to use a single LSI -8i HBA that connects a SFF-8087 direct to each SAS expander.

I did see those backplanes with a black connector. I just assumed it was some sort of proprietary PCIe-like socket for direct-motherboard connectivity. This makes more sense to me now.

I have to admit, it seems almost too simple that I can connect two backplanes to one HBA with just one cable each.*

You know, almost too perfect to be true. I'd love it if it's that easy! :cool:

The HBA itself does act as a choke point, and SAS expanders add mild latency too, and if we were going for ultimate crazy and expense was not a consideration, this is not the absolute most performant solution. However, it is VERY reasonable for your little crazyNAS experiment IMHO.

Amazing name! :grin:

I hereby christen this project "CrazyNAS experiment IMHO" - jgreco

* - I do have a 'spare' i7-2600S system on the shelf. Would be perfect for setting things up and playing around while waiting for hardware. Should be possible, if all it requires for barebones experimenting/testing is a single PCIe slot for the HBA.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I did see those backplanes with a black connector. I just assumed it was some sort of proprietary PCIe-like socket for direct-motherboard connectivity. This makes more sense to me now.

I did notice that there seem to be a few of these floating around with NO SAS expanders installed. These are useless to you (and useless in general, unless someone was just looking to replace a board with a broken SAS connector). And if you buy one with TWO SAS expanders installed, be aware that one is primary and one is secondary; only the primary one is useful for this project.

I have to admit, it seems almost too simple that I can connect two backplanes to one HBA with just one cable each.*
You know, almost too perfect to be true. I'd love it if it's that easy! :cool:

It really should be. This is just a variation on how massive storage servers are built. Each of those SAS expanders and backplanes would be in their own individual enclosure with their own individual power supply, with "nothing" in the corresponding enclosure for a motherboard -- this is actually how Supermicro makes SAS JBOD's. Then you run external SAS cables to the chassis that holds your NAS. It makes the pieces of the storage puzzle into legos-for-servers. If you can imagine the challenges involved in designing a system with hundreds of drives, it becomes desirable that SOMEHOW it not be as complicated as wiring each individual ${thing} up separately ... right?

Those look like this when installed in a SAS JBOD chassis:
[Image: Supermicro 216BE1C-R741JBOD chassis]

See, it's a full "server" case but there's no mainboard, and the SAS connectors are just brought out to one of the PCIe slots on a backplate.

In your case, we're just skipping all the extra cases and power supplies and external-to-internal cables and all of that. You still do have a bit of a challenge arranging power for this beast, but that should be quite doable even for a "hobbyist" as you don't need to spin up physical hard drives -- the relatively low power needs of SSD's make things SO much easier.
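As a very rough sanity check on power (the per-device wattages below are guesses for illustration, not measured figures):

Code:
# Rough power budget; every per-device figure here is an assumption, not a measurement.
ssd_active_w = 3        # a loaded SATA SSD, roughly
ssd_count    = 48
backplanes_w = 2 * 10   # expander boards
board_w      = 60       # mainboard + CPU + RAM, loosely estimated
cards_w      = 2 * 10   # HBA + NIC, ~10 W each
print(ssd_count * ssd_active_w + backplanes_w + board_w + cards_w)  # ~244 W, no spin-up surge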

Hope this gives you some really good clues as to why this isn't anywhere near as complicated as you thought in the first post. Delighted to see it if you actually do it.

* - I do have a 'spare' i7-2600S system on the shelf. Would be perfect for setting things up and playing around while waiting for hardware. Should be possible, if all it requires for barebones experimenting/testing is a single PCIe slot for the HBA.

There ya go.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Anything with a PCIe slot will do for experimenting, or for the final build if the Plexiglas display is only for the drives. If the motherboard becomes part of the display, then I'd go for an X10SDV for ease of cooling.
 

Allan_M

Explorer
Joined
Jan 26, 2016
Messages
76
See, it's a full "server" case but there's no mainboard, and the SAS connectors are just brought out to one of the PCIe slots on a backplate.

It's all getting a lot more clear to me now. Thanks :smile:

If the motherboard becomes part of the display, then I'd go for an X10SDV for ease of cooling.

I know this can come across as nitpicking, but isn't that a rather expensive solution, or is the reasoning that I get motherboard, CPU and 10 gig all at once?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It's all getting a lot more clear to me now. Thanks :smile:

Not a problem. Your enthusiasm is inspiring. I've always got a few minutes for that.

I know this can come across as nitpicking, but isn't that a rather expensive solution, or is the reasoning that I get motherboard, CPU and 10 gig all at once?

It *is* an expensive solution, but it has the advantage -- potentially -- of being an all-in-one solution, since some versions come with the 10G and the HBA all built-in, which means significantly simplified hardware. If we were to think about mounting all this on a wall, let's say on a nice bit of wood, or custom-ordered sheet metal, or whatever, the mounting issues of 24x SSD aren't too terrible because they can get slotted into a backplane like slices of toast in a toaster, vertically, with some right angle brackets to hold the backplanes. Where it gets messier would be if you wanted to put a mainboard up there "behind the plexiglass" too, because PCIe slots are not optimized for non-chassis use. If you mount the mainboard to the wood with some standoffs, then you have to worry about PCIe expansion cards drooping or sagging, because neither HBA's nor ethernet cards are lightweight affairs. And then you also need to worry about making sure they have some airflow, because both kinds of cards tend to dissipate about 10W. Each.

Having a "flat" mainboard would mean that you could install some 40mm quiet fans at the bottom of the wall display to bring air up and over everything, keeping both the mainboard and SSD's cooled. In general, the all-in-one boards need somewhat less cooling and are more power efficient. But yes they are EXPENSIVE.

You can absolutely do this either way.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
I know this can come across as nitpicking, but isn't that a rather expensive solution, or is the reasoning that I get motherboard, CPU and 10 gig all at once?
If you buy a new board, with all the bells and whistles, yes.
If you find a second-hand opportunity, such as this French guy who offers an X10SDV-4C-TLN2F for €200 (with no import duty to Denmark, unlike Supermicro backplanes from the USA or the UK),
then it's not bad for a CPU, 10 GbE and a PCIe slot… I'd replace the fan with one that does not make an audible whine in any case, and then there's the issue of securing the HBA card if the motherboard is wall-mounted. With a PCIe extension cable, the HBA could actually lie flat alongside the motherboard and become part of the display.
 

Allan_M

Explorer
Joined
Jan 26, 2016
Messages
76
Update:
Just ordered 2 backplanes and an X10SDV-4C-TLN2F.

Still debating whether I should go with 128 GB of memory or whether 64 GB will suffice.
Will any HBA (in IT mode) do, or should I aim for an LSI 2308 HBA as mentioned earlier*?

I have six cables (ATD7909038EU, "mini-SAS") on the shelf - from an earlier project - that I'm not entirely sure will work.

*Addendum - I have an LSI 9211-8i lying around. I think it's PCIe 2.0 x8, so I don't know if it's worth trying to flash it to IT mode?
 
Last edited:

Allan_M

Explorer
Joined
Jan 26, 2016
Messages
76
Update #2: Add to the above:

1x LSI SAS 9340-8i (M1215) HBA - IT mode [preflashed, I assume]
4x 32GB PC4-19200 ECC / DDR4-2400MHz Memory (MTA36ASF4G72PZ-2G3 Micron, CL17)
4x 36pin Mini SAS HD SFF-8643 to 36pin SAS SFF-8087 (better have some spares, just in case. Hope it's the right type)
1x PCI-E Express 8X Riser Card Extender Flexible Cord Ribbon Cable (I have something in mind)
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
The 3008 controller and 128 GB RAM look like overkill where a 2308 and 32-64 GB would have done, but it seems that the project is now unstoppable. Practicality and economic rationality were out from the beginning anyway.
There are many cheap extender cables on eBay from China. The quality (or even luxury) version from Taiwan would be an ADT-Link R83SR/SL/SF (or an R88), depending on how you mount it.

We want pictures! :cool:
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
o_O STORAGE PORN! WOOT! o_O
 

Allan_M

Explorer
Joined
Jan 26, 2016
Messages
76
There isn't a lot to take pictures of yet. Perhaps the naked wall or the harem of mature but still-up-for-it SSDs? :tongue:
 

Allan_M

Explorer
Joined
Jan 26, 2016
Messages
76
Another update: the following items are being shipped.
  • Riser cable
  • X10SDV-4C-TLN2F
  • Backplanes
  • SAS cables
  • HBA
  • RAM
Soo... yeah. I already received notifications from FedEx. Fingers crossed :cool:
 