BUILD: Thoughts on a build for performance storage

Status
Not open for further replies.

Tino Zidore

Dabbler
Joined
Nov 23, 2015
Messages
30
Hi

I have been playing around with FreeNAS for a couple of months on small SMB storage servers, and haven't had any issues so far.
Now I will try to make a serious build. I have planned a 3-4 month testing period.

It is going to be connected with either 2 x 10GBase-T or QSFP+ with 4 x SFP+ breakout cable.

Chassis:
Supermicro CSE-216BE1C-R920LPB - with Redundant 920Watt Hot Swap Power Supplies
Motherboard:
Supermicro X10DRi-T
CPU:
2 x E5-2640 v3 Intel 8 Core Xeon 2.60GHz 20Mb Cache 90 Watts
RAM:
4 x 32GB Samsung 2133MHz DDR4 ECC Registered LRDIMM Module
System mirrored disks:
2 x Intel 120GB SATA SSD S3500 (system)
2 x Intel 400GB P3700(ZIL)
Internal Storage Controller:
1 x LSI 9300-8i Host Bus Adaptor (Non RAID) - 12Gb/s SAS 3.0
External Storage Controller:
1 x LSI 9207-8e Host Bus Adaptor (Non RAID) - 6Gb/s SAS 2.0
NIC:
1 x Intel Ethernet Converged Network Adapter XL710-QDA2 - Dual Port QSFP+

Direct Attached Storage:
1 x Supermicro SC847E16-RJBOD1 attached to the LSI 9207-8e described above.
45 x 4 TB Seagate Enterprise NAS HDD

I won't use the 24 x 2.5" bays connected to the LSI 9300-8i Host Bus Adaptor until I can afford the proper SSDs; I have thought of using Crucial M550s.

Q1.
What do you think? Could this work?

I have posted this build with another title but didn't get any replies, so now I'm trying with a new title;-)
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
I am not the "SuperMicro" guy, but from my view things look pretty good. Would recommend seeing what others say.

Only things I would question are:
  1. The Seagate Hard Drives.
    • Not the most recommended drive manufacturer around these parts
    • WD and HGST are usually the preferred ones
  2. 45 x 4 TB
    • Any particular reason to go with such a high count of lower capacity drives?
    • If you went with something like 8TB drives you could probably get it all housed in one unit; of course the case would then need to be one that takes 3.5" drives
    • Would simplify things a lot
 

Tino Zidore

Dabbler
Joined
Nov 23, 2015
Messages
30
  1. The Seagate Hard Drives
    Noted
  2. Any particular reason to go with such a high count of lower capacity drives?
    This is what I already got, which means I don't have to go purchase any additional drives;-)
  3. I am making the internal HBA ready for SSDs for some seriously fast storage; I don't have all the drives at the moment. But yes, it would simplify things.
Thank you for replying and giving your thoughts.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
This is what I already got, which means I don't have to go purchase any additional drives;-)
Ahh, well if you already have them then no use in wasting them; even if they are Seagates.. ;)

Would it also be safe to assume that you have all the other parts, or are there any that you are currently just considering?

Seems to me like you have a pretty sound configuration. Just be sure you test/burn-in those drives and plan on a design that incorporates redundancy. :)

Only other advice I can think of is that once you do get the SSDs; I personally would have two pools (or even two entirely different systems). One for the SSDs and one for the external drives; that way you help negate the possibility of losing a single pool that consists of drives internally and externally should one of the boxes/controllers go down.

Of course; I highly doubt you would mix SSDs and SATA drives in your design anyways...
 

Tino Zidore

Dabbler
Joined
Nov 23, 2015
Messages
30
Would it also be safe to assume that you have all the other parts, or are there any that you are currently just considering?
Some are ordered and some are already in stock. I am still considering whether or not I need L2ARC.

Just be sure you test/burn-in those drives and plan on a design that incorporates redundancy.
The plan is Raidz2 for the 45 external drives in groups of 4 with 1 hotspare.

Only other advice I can think of is that once you do get the SSDs; I personally would have two pools (or even two entirely different systems). One for the SSDs and one for the external drives; that way you help negate the possibility of losing a single pool that consists of drives internally and externally should one of the boxes/controllers go down.
This is exactly what I had in mind, and no I am not going to mix ssd and sata drives;-)
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,972
The plan is Raidz2 for the 45 external drives in groups of 4 with 1 hotspare.
Unless there have been some serious improvements with hot spares working the way everyone would like them to work, I'd recommend you re-evaluate your pool design. A hot spare will not automatically join the pool when a failure is detected. The only benefit I can think of is that you do not have to physically install a new drive and can remotely replace the failed one with the spare. If you are good with that, I guess all is okay for you.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
The plan is Raidz2 for the 45 external drives in groups of 4 with 1 hotspare.
Just noting that this design is not really going to get you too much space. While it is very redundant; you are basically giving up two drives to each vdev. Since you are using vdevs of 4 drives; that is half allocated to redundancy.

Roughly speaking (based on numbers from @Bidule0hm's "ZFS RAID size and reliability calculator"), to help explain it better:

Your current design (what I believe it to be)
  • 11 RaidZ2 vDevs of 4 x 4TB drives and one drive as a hot-spare for the pool
  • All 45 disks allocated/used
  • Roughly 6.3 TB of Usable Space x 11 vDevs = ~69.3 TB Usable Space
My suggestion
  • 5 RaidZ2 vDevs of 8 x 4TB drives and no hot-spare
  • 40 of the 45 disks allocated/used (5 left over)
  • Roughly 18.9 TB of Usable Space x 5 vDevs = ~94.5 TB Usable Space
* I did this kind of quickly, so please feel free to check my calculations; a rough sketch of the math is below...
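For anyone who wants to sanity-check the arithmetic, here is a minimal Python sketch. The ~3.15 TB usable per data drive is just reverse-engineered from the calculator's figures above, not an exact reproduction of its overhead model, so treat the results as approximations:

```python
# Back-of-the-envelope check of the two layouts above.
# Assumption: ~3.15 TB usable per data drive, reverse-engineered from the
# calculator's 6.3 TB / 18.9 TB per-vdev figures; real overhead depends on
# ashift, record size and reserved space.
USABLE_PER_DATA_DRIVE_TB = 3.15

def raidz_pool_usable_tb(vdevs: int, drives_per_vdev: int, parity: int) -> float:
    """Rough usable TB for a pool of identical RAIDZ vdevs (parity=2 for Z2, 3 for Z3)."""
    data_drives = drives_per_vdev - parity
    return vdevs * data_drives * USABLE_PER_DATA_DRIVE_TB

print(raidz_pool_usable_tb(11, 4, 2))  # ~69.3 TB: 11 x 4-disk Z2 (+1 hot-spare)
print(raidz_pool_usable_tb(5, 8, 2))   # ~94.5 TB: 5 x 8-disk Z2 (40 of the 45 disks)
```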
 

Tino Zidore

Dabbler
Joined
Nov 23, 2015
Messages
30
I was going for a fast and safe configuration, easy to upgrade. Your suggestion is taken into consideration, especially if the hotspare solution doesn't work.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
I was going for a fast and safe configuration, easy to upgrade. Your suggestion is taken into consideration, especially if the hotspare solution doesn't work.
Understood, but chances are with the design you were thinking about you will have to upgrade the drives a lot sooner. The 5 vdevs design I mentioned would give you ~36% more space from the start while still providing RaidZ2 redundancy.

As far as the hot-spare: if the system is "off-site" and not easy to get to, then it may make sense. Myself, I consider hot-spares a waste (since the drive is not being used). Unless there are compelling reasons to have one, I would say forget about it.

If there is adequate reason to have it AND there are known issues with it actually working as designed (not sure, because I don't use them), then simply have the drive still in the system but un-allocated. Worst case, you can always remotely connect and use the GUI to manually add/replace a drive. Of course, this does mean you will need proper e-mail notifications and maintenance tasks (SMART long, short, scrubs, etc.) scheduled. ;)

BTW, here is another other pool design for consideration:
  • 4 RaidZ3 vDevs of 11 x 4TB drives and one drive as a hot-spare, or simply there for manual replacement
  • All 45 disks allocated/used
  • Each vDev now has 3 drive failure redundancy
  • Roughly 25 TB of Usable Space x 4 vDevs = ~100 TB Usable Space
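Applying the same rough per-data-drive approximation as in the sketch above (a ballpark only, not the calculator's exact model):

```python
# Same rough approximation applied to the RaidZ3 layout above.
USABLE_PER_DATA_DRIVE_TB = 3.15

z3_usable_tb = 4 * (11 - 3) * USABLE_PER_DATA_DRIVE_TB  # 4 vdevs x 8 data drives
print(z3_usable_tb)  # ~100.8 TB, in line with the ~100 TB figure above
```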
Ultimately the "best design" is entirely up to you and your comfort zone.
Play around with the calculator and see what works for you (it is kind of addictive.. ;) ).
 

Tino Zidore

Dabbler
Joined
Nov 23, 2015
Messages
30
4 RaidZ3 vDevs of 11 x 4TB drives and one drive as a hot-spare, or simply there for manual replacement
The system is on-site, so a hotspare isn't a necessity. I'll take a good look at the calculator when I get the build up and running and need to do my tests;-)

What about L2ARC, do you see any reason for that?

The system is going to be used for raw video files and image sequences, and clients will be connecting from Mac OS X 10.9.5-10.10.5 and Windows 7-10.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,972
What about L2ARC, do you see any reason for that?
Since you already have a large amount of RAM defined (128GB), I'd implement your build as-is and see if you need to add more RAM, which is the first thing you should do before adding an L2ARC. You will get more performance in that respect. And since you are looking for performance, this will affect the way you create your storage pool as well.

What exactly is your target for storage capacity since that wasn't really addressed? Did you want to have separate pools or one large pool with lots of redundancy? And I can already guess that speed of data is a priority but will the expectation be that the video data will be manipulated directly on the NAS or be copied to a local machine and then edited?
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,972
You should do mirrors if that's the case. You'll have the same usable space as the Z2 but the mirrors pool will be significantly faster.
That was my thought too, but knowing the actual usage and how much space is required would be helpful. If the OP needs 50TB of storage then of course a different structure will need to be designed. We should not assume what is enough storage, even if what the OP thinks he is doing is correct.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
While it is very redundant; you are basically giving up two drives to each vdev. Since you are using vdevs of 4 drives; that is half allocated to redundancy.
I read this as the opposite. Meaning 4 vdevs of 11 drive RAIDZ2.

OP: What is your expected workload?

Do you already own the Intel NIC? If not, the Chelsio tends to perform better.

And with only 4 vdevs, there is no way you are going to be able to saturate 4 LAGG'd 10GbE links. Because that would mean at least 4 different client connections (one per link) all trying to do sequential IO to your pool. And forget about random IO. Hence the workload question.
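A rough illustration of the link-aggregation side of that point (just line-rate arithmetic; real throughput depends on the pool layout, protocol and clients):

```python
# Line-rate arithmetic only: with LAGG, each client flow is hashed onto a single
# member link, so one client tops out at ~10 Gbit/s no matter how many links
# are aggregated.
LINK_GBITS = 10
MEMBER_LINKS = 4

per_flow_cap_mb_s = LINK_GBITS * 1000 / 8      # ~1250 MB/s per client flow
aggregate_mb_s = per_flow_cap_mb_s * MEMBER_LINKS

print(f"{per_flow_cap_mb_s:.0f} MB/s per client; at least {MEMBER_LINKS} "
      f"concurrent clients needed to reach {aggregate_mb_s:.0f} MB/s")
```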
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
I read this as the opposite. Meaning 4 vdevs of 11 drive RAIDZ2.
You may be correct in this. Agree with everyone else on what the use-case would be to assist us in providing better layout suggestions.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,972
Even though I'm not an expert on pool design, at least I'm good for something, sometimes, like not assuming the OP really posted his true intentions and workload. When I see video editing comments, I perk up, as that is typically a very demanding workload. Just don't ask my wife if I'm good for anything; you might not get a favorable response, unless I just gave her an airline ticket to go see one of our sons, then I'm in good standing.
 

Tino Zidore

Dabbler
Joined
Nov 23, 2015
Messages
30
Since you already have a large amount of RAM defined (128GB), I'd implement your build as-is and see if you need to add more RAM, which is the first thing you should do before adding an L2ARC. You will get more performance in that respect. And since you are looking for performance, this will affect the way you create your storage pool as well.

Makes sense, I'll do that.

What exactly is your target for storage capacity since that wasn't really addressed?

I would like to test what is possible and then worry about storage capacity afterwards, but I'm aiming for 50-75 TB on the 45 SATA disks.

Did you want to have separate pools or one large pool with lots of redundancy?

When I can afford it I would like 2 pools:
Pool 1: 45 x SATA disks (maybe more later)
Pool 2: up to 22 SSDs (striped)

And I can already guess that speed of data is a priority but will the expectation be that the video data will be manipulated directly on the NAS or be copied to a local machine and then edited?

The material will be manipulated on the NAS directly.

That was my thought too, but knowing the actual usage and how much space is required would be helpful. If the OP needs 50TB of storage then of course a different structure will need to be designed. We should not assume what is enough storage, even if what the OP thinks he is doing is correct.

Capacity is not important at the moment; I am more concerned with speed. I plan on having a 45-disk SATA pool and, when I can afford it, a striped SSD pool for extended performance.

What is your expected workload?

Ideally I would like to be able to run 2 x 14.4 Gbit/s through the NIC plus 2-3 x 1 Gbit/s through the onboard 10GBase-T interface.
The 2 streams could run over direct attached cables or through a Mellanox SX1012 40GbE switch.

Do you already own the Intel NIC?

Yes I own it already.

And with only 4 vdevs, there is no way you are going to be able to saturate 4 LAGG'd 10GbE links. Because that would mean at least 4 different client connections (one per link) all trying to do sequential IO to your pool. And forget about random IO. Hence the workload question

Nice to know. I thought I might be able to do it with direct attached cables from FreeNAS to 2 clients. I think I'll try striping the 45 disks at first, do some tests, and then try RAIDZ2 afterwards. Just to see what is possible :)

Even though I'm not an expert on pool design, at least I'm good for something, sometimes, like not assuming the OP really posted his true intentions and workload. When I see video editing comments, I perk up, as that is typically a very demanding workload. Just don't ask my wife if I'm good for anything; you might not get a favorable response, unless I just gave her an airline ticket to go see one of our sons, then I'm in good standing.

You are right. The intention is not to bring this setup into production right away. Since I am a rather new user of FreeNAS, this is my chance to see what is possible and what is not with this setup.
Ideally we would like to use the server for postproduction in the film/TV industry, but since there are a lot of different use-cases in my line of business, I thought I would see how far I can push the setup, with your input.
Our visual FX department makes exports for color grading in OpenEXR, which can be around 75 MB per file at 24 files per second, making the workload pretty high for the color-grading artist (1800 MB/s).
We could put everything on a locally connected RAID, but it would take a lot of time copying the material back and forth. Another reason is that more than one employee needs access to the material at the same time.
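For reference, the arithmetic behind that figure (numbers taken from the workload described above):

```python
# One OpenEXR color-grading stream as described above: ~75 MB per frame at 24 fps.
FRAME_MB = 75
FPS = 24

stream_mb_s = FRAME_MB * FPS            # 1800 MB/s per grading artist
stream_gbit_s = stream_mb_s * 8 / 1000  # 14.4 Gbit/s, i.e. more than one 10GbE link

print(stream_mb_s, stream_gbit_s)
```

Two concurrent grading streams are presumably where the 2 x 14.4 Gbit/s target mentioned earlier comes from.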

Thank you very much for sharing your thoughts.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
I think I'll try striping the 45 disks at first, do some tests, and then try RAIDZ2 afterwards. Just to see what is possible :)
If your data is anything other than ephemeral you do not want to run a striped pool. If you are testing different pool designs then you should also test mirrors. 22 mirror vdevs will blow the doors off the Z2 pool for the workload you've indicated.
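To put rough numbers on that, using the same ~3.15 TB usable per data drive approximation from earlier in the thread (a ballpark only):

```python
# Capacity parity, performance difference: 11 x 4-disk RAIDZ2 vs 22 x 2-way mirrors
# (44 of the 45 disks used either way). Random IOPS scale roughly with vdev count,
# so the mirror pool, with twice the vdevs, is the faster layout for the same space.
USABLE_PER_DATA_DRIVE_TB = 3.15

z2_usable_tb = 11 * (4 - 2) * USABLE_PER_DATA_DRIVE_TB  # ~69.3 TB across 11 vdevs
mirror_usable_tb = 22 * 1 * USABLE_PER_DATA_DRIVE_TB    # ~69.3 TB across 22 vdevs

print(z2_usable_tb, mirror_usable_tb)
```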
 

Tino Zidore

Dabbler
Joined
Nov 23, 2015
Messages
30
For the test, the data is backed up elsewhere. I'll post later with test results.

If there is anything else anyone sees as potential issues, please let me know :)


Sent from my iPhone using Tapatalk
 