Seeking advice for 10gbe build, before I newegg it

Status
Not open for further replies.

JRI001

Dabbler
Joined
Mar 12, 2015
Messages
13
Here is a parts list that I am thinking about (taking some cues from cyberjock). Can I potentially get Thunderbolt-DAS-type speeds to my workstation from this build?

-Fractal Design Define R4 Blackout Silent ATX Mid Tower Computer Case $100
-SUPERMICRO MBD-X9SCL-B Micro ATX Intel Motherboard LGA 1155 Intel $149
-SeaSonic SS-400ET Bronze 400W 80 PLUS BRONZE Certified Active PFC Power $47 (would a 500 W unit be better?)
-Xeon E3-1230 V2 $229
-Kingston 32GB (4 x 8GB) 240-Pin DDR3 SDRAM ECC Unbuffered KVR16E11K4/32I $329
-(6) WD Red 6TB $1679.94
-Intel Dual Port X540-BT2 10GbE $468

Build Total: $2,967.69

Here's some background on what I'm trying to build:

I am looking for a fast NAS solution. I will almost be using it as a DAS, but would like to see if I can build something that also gives me the benefits of storage accessible across the network. This would be for my home office. I do 3D and compositing (dealing with 32-bit multipass EXR files), some editing, and DaVinci color work, so speed is a major factor. I have probably 10TB of data from a mixture of old projects, video footage, etc. I could potentially get into 4K video in the future, so I think 24TB usable would be a good place to start. I also really like having a central server for my HTPC, MacBook backups, iTunes sharing, etc.

I've also considered something like a Promise Pegasus2 R8 Thunderbolt DAS, or a QNAP device with 10GbE. If someone thinks either of these would be a better option, please let me know. I like the idea of saving some money and building something with nice hardware that would cost a premium prebuilt. I've built lots of computers before, ran a Hackintosh render farm, and used FreeNAS 7 with UFS, so building something doesn't scare me too much. However, I don't want to drop a ton of cash and then realize that I won't get the kind of performance I'm hoping for because I did something stupid or overlooked something. I like learning and have some time to invest in the FreeNAS learning curve, but I don't have a huge background in IT or networking. I can do some troubleshooting, but essentially I need a setup that will offer a reliable, stable, and fast solution. After I set it up, FreeNAS 7 ran headache-free for two years, until I sold everything. From the last week of research I'm thinking this proposed build could do the trick, but I would like those wiser than myself to weigh in, to see if I'm on the right track or missing an obvious gotcha.


My current workstation is a 2013 Mac Pro garbage can (considering going Windows when I save up some more), so I will be using a Promise SANLink2 Thunderbolt-to-10GbE adapter. Is it possible to run the X540-BT2 straight into the SANLink2 over a Cat 6a/7 cable (I believe you can on Windows 8.1 or Windows Server 2012), or do I need to get a 10GbE switch along with the rest of the truckload of cash this will be causing me to offload? :)

Thanks in advance for any advice or stern reprimands.
 

Oko

Contributor
Joined
Nov 30, 2013
Messages
132
As you know, your network is only as fast as its slowest part. I am guessing all machines on your network have 10 Gigabit network cards, you have a 10 Gigabit switch, and you are using Cat 6 cables or better.
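For a back-of-the-envelope sense of those link ceilings, here is a rough sketch. These are raw line rates only; real-world SMB/AFP throughput lands noticeably lower after protocol overhead.

```python
# Rough per-link throughput ceilings, before protocol overhead.
# Real file-sharing throughput will be noticeably lower than these.

def line_rate_MBps(gigabits_per_second):
    """Convert a link's line rate in Gb/s to MB/s (decimal units)."""
    return gigabits_per_second * 1000 / 8

links = {
    "Gigabit Ethernet": 1,
    "10GbE": 10,
    "Thunderbolt 2 (for comparison)": 20,
}

for name, gbps in links.items():
    print(f"{name}: ~{line_rate_MBps(gbps):.0f} MB/s line rate")
```

The point being: a single gigabit link caps out around 125 MB/s, so any 500 MB/s goal requires 10GbE end to end.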
 

JRI001

Dabbler
Joined
Mar 12, 2015
Messages
13
As you know, your network is only as fast as its slowest part. I am guessing all machines on your network have 10 Gigabit network cards, you have a 10 Gigabit switch, and you are using Cat 6 cables or better.

At first the 10GbE would only be utilized by one Mac via a SANLink2, so granted it feels a little silly; I know most people would probably just go for a Thunderbolt DAS. But I need a server for the rest of the gear anyway (HTPC on gigabit, and 2 MacBooks over WiFi), and I also need fast access to a large amount of data (currently 10TB). Over the next few years I could see growing things out with an additional PC workstation and some render nodes, but realistically only 1 or 2 workstations would ever utilize the 10GbE speeds for daily use.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Regarding storage capacity, and assuming you'll be configuring your pool as RAIDZ2, 6 x 6TB is going to yield less than 22TB. With 80% being the recommended maximum utilization, you'll be below 18TB usable. So, you need more disks to get to your goal of 24TB usable.
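That arithmetic can be sketched roughly as follows. The 6TB figure is decimal (as marketed), ZFS reports binary TiB, and real pools lose a bit more to metadata and padding, so treat this as an upper bound.

```python
# Sketch of usable capacity for 6x 6TB drives in RAIDZ2 (approximate;
# ignores ZFS metadata and allocation overhead, which cost a bit more).

TIB = 2**40  # drives are sold in decimal TB; ZFS reports binary TiB

def raidz2_usable_tib(n_disks, disk_tb, max_fill=0.8):
    data_disks = n_disks - 2                   # RAIDZ2: two parity disks
    raw_bytes = data_disks * disk_tb * 10**12  # decimal TB per disk
    raw_tib = raw_bytes / TIB
    return raw_tib, raw_tib * max_fill         # (capacity, 80%-rule target)

cap, usable = raidz2_usable_tib(6, 6)
print(f"raw data capacity: ~{cap:.1f} TiB, at 80% fill: ~{usable:.1f} TiB")
```

This reproduces the numbers above: just under 22 TiB of data capacity, and under 18 TiB once the 80% rule is respected.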
 

JRI001

Dabbler
Joined
Mar 12, 2015
Messages
13
Regarding storage capacity, and assuming you'll be configuring your pool as RAIDZ2, 6 x 6TB is going to yield less than 22TB. With 80% being the recommended maximum utilization, you'll be below 18TB usable. So, you need more disks to get to your goal of 24TB usable.

Thanks Robert. Yes, I was thinking RAIDZ2. Admittedly I haven't learned as much as I need to about setting up the RAID config. From what I've gathered so far, the optimal width for a RAIDZ2 is 6 disks? With two redundant disks I was guessing at the 24TB figure, but I should have thought about the overhead and the 80% rule. What would adding another two 6TB disks do for capacity and redundancy? Would it increase the speed of the pool? This motherboard has 6 SATA ports, so I would need an M1015 or a different board for more than 6 drives.

From what I gathered from cyberjock's guide, if I set up a 6-disk RAIDZ2 vdev I won't be able to just add more disks to that vdev. In that case, my understanding is that the best approach would be to wait until I had another 6 disks, set them up as another vdev, and then add that to the zpool?

I think I could live with 18TB for a while, until I need to add more in the future.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
From what I gathered from cyberjock's guide, if I set up a 6-disk RAIDZ2 vdev I won't be able to just add more disks to that vdev. In that case, my understanding is that the best approach would be to wait until I had another 6 disks, set them up as another vdev, and then add that to the zpool?

Yes, it's exactly that ;)
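A rough sketch of how that expansion works out, with the same simplifications as the capacity math above (parity only, no metadata overhead):

```python
# A RAIDZ2 vdev cannot be widened after creation, but the pool grows
# by whole vdevs: a new vdev is striped into the existing pool.

def raidz2_vdev_data_tb(n_disks, disk_tb):
    return (n_disks - 2) * disk_tb  # two disks' worth of parity per vdev

pool = [raidz2_vdev_data_tb(6, 6)]      # start: one 6-disk RAIDZ2 vdev
print("one vdev:", sum(pool), "TB raw data capacity")    # 24 TB

pool.append(raidz2_vdev_data_tb(6, 6))  # later: add a second 6-disk vdev
print("two vdevs:", sum(pool), "TB raw data capacity")   # 48 TB
```

As a bonus, the second vdev also roughly doubles the pool's random I/O capability, since ZFS stripes writes across vdevs.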
 

JRI001

Dabbler
Joined
Mar 12, 2015
Messages
13
I know there are lots of variables, but... any idea if I could expect, say, 500 MB/s read/write from this setup?
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
For which setup?

6 disks RAID-Z2?
8 disks RAID-Z2?
2x 6 disks RAID-Z2?

500 MB/s would not be bad at all, especially for the 6-disk RAID-Z2. However, you should be able to do better with the other two setups, I think; look at the thread with the dd benchmarks to see more precisely what you can expect ;)
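For very rough expectations before consulting the benchmark thread: best-case sequential throughput scales with the number of data disks. A sketch, assuming ~150 MB/s sustained per 6TB WD Red (a ballpark assumption, not a measured figure):

```python
# Best-case sequential estimate: data disks x per-disk streaming rate.
# Real dd numbers land below this; random I/O is limited by vdev IOPS.

PER_DISK_MBPS = 150  # assumed sustained rate for a 6TB WD Red

def seq_estimate(vdevs):
    """vdevs: list of (disks_in_vdev, parity_disks_in_vdev) tuples."""
    data_disks = sum(n - p for n, p in vdevs)
    return data_disks * PER_DISK_MBPS

print("6-disk RAID-Z2:    ~", seq_estimate([(6, 2)]), "MB/s")
print("8-disk RAID-Z2:    ~", seq_estimate([(8, 2)]), "MB/s")
print("2x 6-disk RAID-Z2: ~", seq_estimate([(6, 2), (6, 2)]), "MB/s")
```

Even the 6-disk layout clears 500 MB/s on paper for pure streaming; the gap between this ceiling and measured numbers is exactly what the dd benchmark thread illustrates.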
 


marbus90

Guru
Joined
Aug 2, 2014
Messages
818
Supermicro X10SL7-F
Xeon E3-1241 v3
4x8GB DIMMs, preferably CT2KIT102472BD160B

If you want to achieve 500+ MB/s, you'd rather want a 10-disk RAID-Z2 or 2x 6-disk RAID-Z2 vdevs. Since that board already has 14 SATA ports, you could go balls-out with 3-4TB HDDs - but 6TB would still work out, considering the overhead for a performant ZFS system.

For anything bigger/faster you'd need more HDDs, more RAM, and more bandwidth in between. If you really want to plan for the future, get the Supermicro 5048R-E1CR36L 36-bay barebone ($2600), a Xeon E5-1620 v3 or E5-1650 v3, and 4 or 8x 16GB DDR4 DIMMs with that.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The biggest limit to speed will be the network (10GbE "solves" that) and the pool configuration. Unfortunately, with ZFS, RAIDZ speeds limit a vdev's performance to approximately that of a single hard drive, so getting more speed turns into an exercise of having multiple vdevs.

Mirror vdevs are a better choice for speed, but, of course, are more costly per byte than RAIDZ1 or RAIDZ2. You will get better speeds from eight drives configured as four mirror vdevs than you will from eight drives configured as two RAIDZ2 vdevs.
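That eight-drive tradeoff can be sketched like this (simplified model: random performance scales with vdev count, capacity with data disks; the helper name is made up for illustration):

```python
# Eight drives, two layouts: random IOPS scale with the number of
# vdevs, while usable data-disk count is the same in both cases.

def layout(n_vdevs, disks_per_vdev, parity_per_vdev):
    data = n_vdevs * (disks_per_vdev - parity_per_vdev)
    return {"vdevs": n_vdevs, "data_disks": data}

mirrors = layout(4, 2, 1)  # 4x 2-way mirrors
raidz2  = layout(2, 4, 2)  # 2x 4-disk RAIDZ2

print("mirrors:", mirrors)  # 4 vdevs, 4 data disks
print("raidz2: ", raidz2)   # 2 vdevs, 4 data disks -> ~half the IOPS
```

Same raw data capacity either way; the mirror layout simply buys twice as many independent vdevs, hence roughly twice the random I/O.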

A larger system (CPU/memory) is, of course, also helpful, mostly because of memory. Unfortunately that comes with the cost differential of going from the affordable E3 platform up to the pricey E5 stuff. As marbus90 noted, the sweet CPUs are the E5-1620v3 and E5-1650v3, but the cost of the DDR4 RAM for these is astronomical. The two 1650s we've built here have been limited to 64GB RAM each because even just 64GB cost ~$1000. It is probably more economical to look at the E5-16xx v2s with DDR3, but I haven't priced that out lately.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
From what I've gathered so far, the optimal width for a RAIDZ2 is 6 disks?
For a long time the rule of thumb was that 'power of 2 plus parity' drive count was optimal. That rule of thumb is no longer useful in anything but very specialized applications (databases with fixed record sizes) and even then, when compression is enabled, which it should be, the rule doesn't apply. See this article, or just skip down to the part where he says, "To summarize: Use RAID-Z. Not too wide. Enable compression."
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
but the cost of the DDR4 RAM for these is astronomical.
Prices went down to around DDR3ish levels. Registered DIMMs always were more expensive than unbuffered DIMMs. Crucials can be had for ~185USD which is about-ish the DDR3 range.

To Robert's statement: Use raidz2 or z3. ;)
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Use raidz2 or z3. ;)
Yes, of course. While RAIDZ is a synonym for RAIDZ1 in the zfs command-line, I'm confident the author meant "an appropriate RAIDZ level", and hopefully the reader will think that's obvious too.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Prices went down to around DDR3ish levels. Registered DIMMs always were more expensive than unbuffered DIMMs. Crucials can be had for ~185USD which is about-ish the DDR3 range.

To Robert's statement: Use raidz2 or z3. ;)

Ah, well, looks like DDR3 prices went up. We had been paying about $500 per 64GB and I wasn't too pleased to sign off on the ~$1000 per 64GB for DDR4.

Looks like DDR3 (M393B2G70BH0-CK0) is around $160 whereas we got it for $125 back in 2012.

The DDR4 part we've been using is Crucial CT16G4RFD4213 which seems to be around $175, but ran for almost $250 back at the end of 2014.

I guess the moral of the story is invent a time machine and go back to 2012.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
"RAIDZ speeds limit a vdev's performance to approximately that of a single hard drive," Uh? The IOPS are limited to one drive's IOPS, but the speed can be far higher than the speed of one drive, no?
 

JRI001

Dabbler
Joined
Mar 12, 2015
Messages
13
Supermicro X10SL7-F
Xeon E3-1241 v3
4x8GB DIMMs, preferably CT2KIT102472BD160B

Thanks for the hardware recommendation. A Supermicro 5048R-E1CR36L 36-bay barebone ($2600), with the added cost of CPU, memory, and drives, goes way too far beyond my budget. Bottom line: I am a single user at home who is basically trying to build a DAS that can also act as a server.

"RAIDZ speeds limit a vdev's performance to approximately that of a single hard drive," Uh? The IOPS are limited to one drive's IOPS, but the speed can be far higher than the speed of one drive, no?
This kinda confused me also. From looking at the dd benchmarks ( https://forums.freenas.org/index.php?threads/notes-on-performance-benchmarks-and-cache.981/ ), it seems that a vdev can go beyond single-disk speed, no?


You will get better speeds from eight drives configured as four mirror vdevs than you will from eight drives configured as two RAIDZ2 vdevs.

I need to do some more research, because currently I'm confused about what this setup would yield for redundancy and pool size. If both disks in a mirror vdev go down, you lose the pool, right? Can the vdevs themselves be set up in a RAID 5 or RAID 6 type config, or does mirroring 4 vdevs mean that you get 4 times the speed but only the size of a single vdev?

Good thing it's the weekend, so I can spend some time looking into this stuff.

One other option occurred to me: maybe I could have a smaller RAID setup of SSDs to act as a working location for active projects, and another RAIDZ2 setup for backup, archiving projects, etc.

Thanks all for feedback.
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
Well, it would have 36 bays instead of 12. Data growth can be quite immense with your upcoming projects, so buying new hardware each time you outgrow the 12-bay, then 24-bay limit might not always be an option ;) Also, that barebone can be had as a complete server with a 3-year NBD warranty from Thinkmate or iXsystems. If you want to earn money with this machine, in my eyes there's no way around such a warranty.

For a rackmount chassis for the X10SL7-F you could take a look at eBay; I've seen a Supermicro 826A for $225 with redundant PSUs.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
"RAIDZ speeds limit a vdev's performance to approximately that of a single hard drive," Uh? The IOPS are limited to one drive's IOPS, but the speed can be far higher than the speed of one drive, no?

The fact that you *could* go faster is not a guarantee that you *will*, just as the possibility that data *could* be compressed is not a guarantee that it *will* be. If the pool were used exclusively for large-file storage, that would shift the balance a bit and make RAIDZ much more palatable, because there you do tend to benefit from higher sequential access speeds; but for general random workloads, mirror vdevs are substantially faster.
 