8 storage chassis reuse project

Status
Not open for further replies.

koendb

Cadet
Joined
Jan 15, 2016
Messages
7
Hi all,

New here, and looking to implement FreeNAS in our company.

I have a nice project to re-use 8 storage enclosures, each with 16 disk slots.
These machines came from a rendering and storage setup.
They had 10GBASE-CX4 network cards connected to two HP 6400CL switches.
We ripped out everything, except for the power supply, SATA backplane and fans.

I have built one of these as a test with one Xeon E5-2620 v3 processor on an ASUS Z10PE-D16 motherboard.
The drives used are 4x 6TB WD Red drives.

The motherboard has 2 CPU sockets and can hold up to 1TB DDR4 RAM.
It also has 10 x SATA3 6Gb/s ports and 1 M.2 connector.
Using the M.2 connector, you do lose 1 SATA port though.

The plan is to start with this setup:

  • 1x CPU with 32GB ECC RAM
  • 4x 6TB drives connected to the SATA ports, using an SFF-8470 to SATA breakout cable
  • Re-using the 10GBASE-CX4 cards and HP switches to provide 10GbE between the storage and our backup server
  • An M.2 32GB SATA SSD as the system drive (or as a log device, with a USB stick as the boot/system disk)
  • 2x 4-port PCIe SATA expansion cards

We could link the HP switches to a CX4-to-SFP+ media converter, and run 10Gb fibre from the converter to a 10Gb port on one of our LAN switches.
The idea is to let specific workstations connect to the storage via the 10GbE network, while normal office desktops connect to the gigabit interfaces of the storage machines.
This way, I hope we can provide more bandwidth to multiple clients simultaneously.

This setup looks very flexible to me, as we can increase memory, CPU, disks, etc. as we go.

Does this look like a good setup to start with?
Reading up on RAID/vdev layouts, I do realise I might need more drives initially to allow for an optimal RAID/vdev configuration and the ability to extend the disk capacity afterwards.

I am still trying to figure out what the best RAIDZ/vdev configuration would be for a balanced setup (reliability/performance); a rough capacity sketch is below.
But maybe I need to ask that question in a separate topic :)
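
To get a feel for the options, here is the rough capacity sketch I've been playing with (plain Python, nothing FreeNAS-specific). It ignores ZFS overhead and the usual advice to keep pools well below full, and the candidate layouts are just my own guesses for a fully populated 16-bay chassis with 6TB drives:

Code:
DRIVE_TB = 6
BAYS = 16

# name: (vdev_count, disks_per_vdev, redundancy_disks_per_vdev)
layouts = {
    "8 x 2-way mirrors": (8, 2, 1),
    "4 x 4-disk RAIDZ2": (4, 4, 2),
    "2 x 8-disk RAIDZ2": (2, 8, 2),
}

for name, (vdevs, width, redundancy) in layouts.items():
    assert vdevs * width <= BAYS
    raw = vdevs * width * DRIVE_TB
    usable = vdevs * (width - redundancy) * DRIVE_TB
    print(f"{name:20s} raw {raw:3d} TB  usable ~{usable:3d} TB  ({usable / raw:.0%})")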
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
What you've added is kind of a poor design.

NAS typically doesn't benefit much from dual CPU setups; there are limited exceptions to that statement.

You've got a minimal amount of memory for a business system.

The M.2 is a pointless waste. It's an expensive choice for a boot device, and you can't meaningfully use it as a log device: a SLOG needs specific characteristics (power-loss protection and very low write latency), so you can't just pick something at random.

Don't use "PCIe SATA expansion" cards. Use an HBA.

Let me suggest an alternative build for you.

Supermicro X10SRL-F - $240
Intel Xeon E5-1650 v3 - $550
2 x M393A4K40BB0-CPB - $400
2 x LSI 9211-8i HBA - $400
4 x SFF-8470 to SFF-8087 cables - $50
2 x Supermicro SSD-DM032-PHI - $100

Reuse your existing chassis and network. Less than $2000 per machine to retrofit into a first class NAS.
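
If you want to sanity-check that number, here's the arithmetic. I'm reading each line above as the price for the stated quantity, which is how I priced it; drives, chassis, PSU, backplane and the CX4 network gear are reused, so they aren't counted:

Code:
# Per-machine retrofit cost, using the list prices quoted above (USD).
parts = {
    "Supermicro X10SRL-F":               240,
    "Intel Xeon E5-1650 v3":             550,
    "2 x M393A4K40BB0-CPB 32GB RDIMM":   400,
    "2 x LSI 9211-8i HBA":               400,
    "4 x SFF-8470 to SFF-8087 cables":    50,
    "2 x Supermicro SSD-DM032-PHI":      100,
}
print(f"Total: ${sum(parts.values())}")   # -> Total: $1740, under the $2000 budget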
 

koendb

Cadet
Joined
Jan 15, 2016
Messages
7
Ah thanks jgreco,
I find your alternative build interesting as it is in the same price range as the setup I have now.
So I will definitely redo my homework based on your comments.
I just want to know some of your reasoning in order to learn from it, as I am no storage specialist, nor a hardware specialist for that matter :)

While I know that dual CPU does not add much for storage, it was just the board we had lying around for our test machine.
And it added the possibility of re-using one of these machines as an ESXi test environment instead of storage.
But I can just keep that specific board for the ESXi setup and do the rest with an alternative, which is a lot cheaper btw.
Any reason to pick the Supermicro board instead of the ASUS one, besides the price?

The amount of RAM is the bare minimum, I realise, but it is initially only meant for 24TB of storage.
However, I can easily add more RAM from the start.
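
For my own sanity check, here is the quick sizing sketch I used. It is based on the rule of thumb I've seen quoted around here of roughly 1GB of RAM per TB of raw storage; the 8GB baseline is my own assumption, and this is a guideline rather than a hard rule:

Code:
# Rule-of-thumb RAM sizing: a small baseline plus ~1 GB per TB of raw storage.
BASELINE_GB = 8      # assumed minimum for the OS itself
GB_PER_TB_RAW = 1    # rule-of-thumb scaling factor

def suggested_ram_gb(raw_tb):
    return BASELINE_GB + GB_PER_TB_RAW * raw_tb

for raw_tb in (24, 48, 96):   # 4, 8 and 16 of the 6 TB drives
    print(f"{raw_tb:3d} TB raw -> ~{suggested_ram_gb(raw_tb)} GB RAM suggested")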

OK, so no M.2. Thanks for pointing that out, I wasn't aware.

What is the reasoning behind using HBAs instead of the onboard SATA ports plus an extra PCIe card for more SATA ports?
We have a couple of HPE H240 HBAs I could use too; any advice against them?
(EDIT: ah, never mind, they don't allow JBOD I think)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I just want to know some of your reasoning in order to learn from it, as I am no storage specialist, nor a hardware specialist for that matter :)

Which is why I sit here and grinch.

While I know that dual CPU does not add much for storage, it was just the board we had lying around for our test machine.

Happens.

And it added the possibility of re-using one of these machines as an ESXi test environment instead of storage.
But I can just keep that specific board for the ESXi setup and do the rest with an alternative, which is a lot cheaper btw.
Any reason to pick the Supermicro board instead of the ASUS one, besides the price?

ASUS specializes in midrange to high end consumer grade boards. Desktops, gamers, workstations. There's no doubt that they make good stuff and (at least) hundreds of their boards have been through the shop here over the years. They've branched out and dabbled in areas such as servers. There's no specific reason to think that their offerings are awful, and in fact they did make a nice Xeon E3 board in a 1U server that was reported to be a good small FreeNAS box. Their web page for that board reads like they're trying to sell it to some gaming enthusiast; what I want are the damn specs in a format where I can see in a moment the things I need to know. Actually you can often get what you need out of the Supermicro part number. There is probably nothing wrong with the ASUS board, but there's also nothing to write home about, IMO. It's a tepid entry that looks like they're not sure if it's a server or a workstation.

We tend to favor the Supermicro stuff around here. Supermicro is a company that is extremely specialized and focuses nearly entirely on server and infrastructure gear. Their focus on that means that they have a much better idea of what the data center world is calling for. You want a small NAS? Get an X10SLM-F. A small NAS with more net? X10SLM+-LN4F. A NAS with lots of disks? X10SL7-F. More memory? X10SRL-F. etc. There's a product that closely hits most needs. It's made to be a server. Their support doesn't have their heads stuck up their butt when you contact them about a problem with your "server" board. They don't freak when you say you're running FreeBSD.

There's nothing particularly magic about it other than someone designed it from the ground up for server-type applications. And it's heavily used. I walk down the aisle at Equinix, I can't help but see lots of Supermicro gear. It's prevalent. It's right up there with HP and Dell. ASUS? I'm pretty sure I've seen some, but they're few and far between.

The amount of RAM is the bare minimum, I realise, but it is initially only meant for 24TB of storage.
However, I can easily add more RAM from the start.

The main reason to start at 64GB is so that you buy two 32GB DIMMs, which is a good configuration. It may or may not be necessary for your application. I was just rolling with it.

OK, so no M.2. Thanks for pointing that out, I wasn't aware.

What is the reasoning behind using HBAs instead of the onboard SATA ports plus an extra PCIe card for more SATA ports?

Because the success rate of using random crappy "add-on" SATA ports is not that high, and we like to suggest things that are Going To Work. Also the cabling is so much better the way I specced it.
 

koendb

Cadet
Joined
Jan 15, 2016
Messages
7
First of all thank you for taking the time to answer my questions.

I am going to go with your recommendations and order these.
 

koendb

Cadet
Joined
Jan 15, 2016
Messages
7
In the meantime, I received all the bits and pieces and built the first machine with the exact components jgreco proposed.
Got it up and running...
I had some issues with the SFF-8470 connectors, though. I was unable to source SFF-8470 latch-type to SFF-8087 cables.
So we bought screw-type ones, and after cutting away some of the rubber inside, I was able to replace the connector housings with the ones I took from the original cables.
A bit of a DIY job, but it works just fine.

If someone can give me a link to the correct cables, please let me know.

So now for the testing...
I've installed FreeNAS 9.3.1 on the first SSD and, after the install, added the second SSD to the system pool.

For now, I've kept the old disks in; I think they are at least 5 years old.
They haven't been powered up for 3 years, so some disks will probably fail in the next few weeks.
But that's the point, I hope they do.
Before that, though, I will run memtests for a week or so.

I've got some RAID config questions though.
I was thinking of either going for 4-disk RAIDZ2 vdevs or just 2-disk mirror vdevs.
This would give me the same protection and same disk capacity, but the mirrored disks would be beneficial for performance and easier to extend.
I am starting out with only 4 disks, so that would be 1 RAIDZ2 vdev or 2 mirror vdevs.

I guess I should also spread my disks over both HBAs, so that if one fails, I still have half of my disks.

Are these assumptions correct?
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
I was thinking of either going for 4-disk RAIDZ2 vdevs or just 2-disk mirror vdevs.
This would give me the same protection and same disk capacity
Not correct. For a pool to remain healthy, all constituent vdevs must remain healthy. With 2 mirrors you can only lose 1 disk from each, but with RAIDZ2 you can lose any 2 disks.
mirrored disks would be beneficial for performance and easier to extend.
Correct.
I guess I should also spread my disks over both HBAs, so that if one fails, I still have half of my disks.
True, if you split them correctly (depends on your vdev layout).
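
If it helps to see it concretely, here is a tiny sketch that enumerates every possible 2-disk failure for the two 4-disk layouts you mentioned. The device names are placeholders and the parity bookkeeping is simplified, but the counts are the point:

Code:
from itertools import combinations

disks = ["da0", "da1", "da2", "da3"]   # hypothetical device names

# Each layout is a list of (vdev_members, failures_that_vdev_tolerates).
layouts = {
    "1 x 4-disk RAIDZ2": [({"da0", "da1", "da2", "da3"}, 2)],
    "2 x 2-way mirrors": [({"da0", "da1"}, 1), ({"da2", "da3"}, 1)],
}

def pool_survives(vdevs, failed):
    # A pool survives only if every one of its vdevs survives.
    return all(len(members & failed) <= tolerates for members, tolerates in vdevs)

for name, vdevs in layouts.items():
    survived = sum(pool_survives(vdevs, set(pair)) for pair in combinations(disks, 2))
    print(f"{name}: survives {survived} of 6 possible 2-disk failures")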
 