Ideal disk usage scenario - new build

Status
Not open for further replies.

Mr P

Dabbler
Joined
May 25, 2016
Messages
15
Hi All,

I'm looking at building a FreeNAS box and have the following HDDs available to me:

5 x 6TB Seagate Enterprise SATA drives
8 x 600GB 15k SAS drives
10 x 2TB 7.2k SATA drives

I also have available PERC H700 and H200 cards.

I intend on purchasing an Asrock C2750D4I after reading on here and various reviews.

Any suggestions on the best disk configs? I will definitely be using the 6TB drives and would like to create a separate pool to act as a backup pool for them. It's not essential that I use all of the drives!

This box will primarily act as a storage node for 2 Intel NUCs running ESXi (only have 1 at present).

I'm reading mixed reviews on whether to use the H700 card, as it can't do JBOD.

Thoughts appreciated, I'm ready to answer any queries.

Regards
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
I also have available PERC H700 and H200 cards.
Ditch the H700; it will do you no good. Make sure you flash the H200 to IT mode (if not done already) with the P20 firmware.

I'm reading mixed reviews on whether to use the H700 card, as it can't do JBOD.
Can't be done, AFAIK. I have several of them and have tried pretty much every way I could find to cross-flash it. If there were a definitive and tested guide, I would be more than happy to try it. I don't even mind bricking one if need be.

Any suggestions on the best disk configs? I will definitely be using the 6TB drives and would like to create a separate pool to act as a backup pool for them. It's not essential that I use all of the drives!

This box will primarily act as a storage node for 2 Intel NUCs running ESXi (only have 1 at present).
I would think that Mirror vDevs would be the best route; maybe along the lines of 3-Way Mirrors (since I am the paranoid type). However, I would gladly defer to those with more experience.
 

brando56894

Wizard
Joined
Feb 15, 2014
Messages
1,537
What are you planning on using your pools for? That will help determine the best configuration for each pool. The C2750 is nice, but decide right now whether you want 32 GB of RAM in it or 64 GB: 64 GB will run you about a grand, since only one manufacturer makes 16 GB DDR3 ECC UDIMMs that work with it, and they're about $350 apiece! I just got a Xeon D board to replace my dead C2750; it supports 128 GB, and a 32 GB DDR4 ECC RDIMM was only $250. Granted, the board cost about $350 more than the C2750.

Sent from my Pixel C using Tapatalk
 

brando56894

Wizard
Joined
Feb 15, 2014
Messages
1,537
Holy guacamole! :eek:

Yeah, I really wish I had known that before I got it. They're also next to impossible to find; I just tried to search for one to show you but couldn't find it after a few minutes, haha. Here's the QVL: http://www.asrockrack.com/general/productdetail.asp?Model=C2750D4I#Memory

On a side note: I'd also recommend against getting the Silverstone DS380. The airflow in it is terrible and will eventually cook your drives (avg 50-60°C), and it's absolutely miserable to work on. On top of that, I think Silverstone's own SFX PSU killed my backplane, then a week later my motherboard/CPU!
 

Mr P

Dabbler
Joined
May 25, 2016
Messages
15
What are you planning on using your pools for? That will help determine the best configuration for each pool. The C2750 is nice, but decide right now whether you want 32 GB of RAM in it or 64 GB: 64 GB will run you about a grand, since only one manufacturer makes 16 GB DDR3 ECC UDIMMs that work with it, and they're about $350 apiece! I just got a Xeon D board to replace my dead C2750; it supports 128 GB, and a 32 GB DDR4 ECC RDIMM was only $250. Granted, the board cost about $350 more than the C2750.

Sent from my Pixel C using Tapatalk


Brando,

I intend to use the pools for: 1) iSCSI storage for ESXi VMs, and 2) a separate pool for backups. I intend on having 2 NUCs running ESXi in the hypervisor role so I will be able to utilise vMotion etc.

I've had a quick look at the Xeon D-1540 and it seems to be a fair bit more power hungry than the C2750. I'm also thinking 32 GB would more than suit my needs. I'm after a low-power solution as it will be left on 24x7.
 

Mr P

Dabbler
Joined
May 25, 2016
Messages
15
Ditch the H700; it will do you no good. Make sure you flash the H200 to IT mode (if not done already) with the P20 firmware.

Mirfster,

Thanks for your thoughts. I had heard/read previously that H200s were flashable to IT mode and that the H700 was a bit of a no-go with FreeNAS. I'm just wondering whether it's worth installing the H200 card when I could just put some of the 2TB SATA drives on the SATA connectors on the C2750.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Thanks for your thoughts. I had heard/read previously that H200s were flashable to IT mode and that the H700 was a bit of a no-go with FreeNAS. I'm just wondering whether it's worth installing the H200 card when I could just put some of the 2TB SATA drives on the SATA connectors on the C2750.
If you have the available SATA ports then an HBA is not needed. Usually they are used when there is a backplane involved or additional ports are required. As long as your SATA ports support 6Gb/s (which they should), you are fine without it.
I intend to use the pools for: 1) iSCSI storage for ESXi VMs, and 2) a separate pool for backups. I intend on having 2 NUCs running ESXi in the hypervisor role so I will be able to utilise vMotion etc.

I've had a quick look at the Xeon D-1540 and it seems to be a fair bit more power hungry than the C2750. I'm also thinking 32 GB would more than suit my needs. I'm after a low-power solution as it will be left on 24x7.
Might want to look this thread over first when considering iSCSI: "Why iSCSI often requires more resources for the same result"

Excerpt:
What people want -> old repurposed 486 with 32MB RAM and a dozen cheap SATA disks in RAIDZ2

What people need -> E5-1637v3 with 128GB RAM and a dozen decent SATA disks, mirrored
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680

I should point out that that's more along the way of resetting expectations than being totally serious. A nice E3 Skylake system with 64GB of RAM (a relatively recent option) is actually a fairly hefty system for a home lab, and I'd *bet* that a C2750 with 32GB would be fine if you don't have a crushing load of VM storage, which I can't imagine you would on a few NUC's.

I've had a quick look at the Xeon D-1540 and it seems to be a fair bit more power hungry than the C2750. I'm also thinking 32 GB would more than suit my needs. I'm after a low-power solution as it will be left on 24x7.

I wonder how you came to that conclusion. Are you mistaking the TDP for power-guaranteed-to-be-used-at-all-times? Because it doesn't work like that at all, and in general, you should want work that needs to be done to get done as quickly as possible in order to get back to idle.

If you have CPU 1, which has a 20 watt TDP, and CPU 2, which has a 45 watt TDP, and we *assume* that each CPU eats the full TDP when at full tilt...

If CPU 1 is only 1/3rd the speed of CPU 2, and you give CPU 1 three hours of work to do, you will use 60 watt-hours of power. Giving that same task to CPU 2 will complete in 1 hour, using 45 watt-hours of power plus whatever is consumed to idle for two more hours.

But the point is that the faster part uses less power to get the work done, plus there's no guarantee that idle power won't be similar to the slower part.
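jgreco's arithmetic can be sanity-checked in a few lines. This sketch just restates his worked example; the assumption (taken from the post) is that each CPU draws its full TDP while running at full tilt:

```python
# Worked version of the TDP/energy example above.
# Assumption (from the post): each CPU draws its full TDP under load,
# and idle power is ignored for the comparison.

def task_energy_wh(tdp_watts, hours_at_full_load):
    """Energy consumed while the CPU runs the task flat out."""
    return tdp_watts * hours_at_full_load

# CPU 1: 20 W TDP, takes 3 hours to finish the task
slow_cpu_wh = task_energy_wh(20, 3)

# CPU 2: 45 W TDP, 3x faster, so the same task takes 1 hour
fast_cpu_wh = task_energy_wh(45, 1)

print(slow_cpu_wh, fast_cpu_wh)  # 60 45
```

So the "hungrier" 45 W part finishes the job on 45 Wh versus 60 Wh for the slower part, before even counting the two extra hours the slow CPU can't spend idling.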
 

brando56894

Wizard
Joined
Feb 15, 2014
Messages
1,537
I was running iSCSI on my C2750 and was getting decent speeds, but I was bottlenecked by my 1Gb NIC.

Brando,

I intend to use the pools for: 1) iSCSI storage for ESXi VMs, and 2) a separate pool for backups. I intend on having 2 NUCs running ESXi in the hypervisor role so I will be able to utilise vMotion etc.

I've had a quick look at the Xeon D-1540 and it seems to be a fair bit more power hungry than the C2750. I'm also thinking 32 GB would more than suit my needs. I'm after a low-power solution as it will be left on 24x7.

So you'll probably want a pool of mirrored VDEVs for iSCSI storage and probably RAIDZ2 for the backups. As for power usage, my electric bill has just about stayed the same between the C2750 and the Xeon board.
 

Mr P

Dabbler
Joined
May 25, 2016
Messages
15
I should point out that that's more along the way of resetting expectations than being totally serious. A nice E3 Skylake system with 64GB of RAM (a relatively recent option) is actually a fairly hefty system for a home lab, and I'd *bet* that a C2750 with 32GB would be fine if you don't have a crushing load of VM storage, which I can't imagine you would on a few NUC's.



I wonder how you came to that conclusion. Are you mistaking the TDP for power-guaranteed-to-be-used-at-all-times? Because it doesn't work like that at all, and in general, you should want work that needs to be done to get done as quickly as possible in order to get back to idle.

If you have CPU 1, which has a 20 watt TDP, and CPU 2, which has a 45 watt TDP, and we *assume* that each CPU eats the full TDP when at full tilt...

If CPU 1 is only 1/3rd the speed of CPU 2, and you give CPU 1 three hours of work to do, you will use 60 watt-hours of power. Giving that same task to CPU 2 will complete in 1 hour, using 45 watt-hours of power plus whatever is consumed to idle for two more hours.

But the point is that the faster part uses less power to get the work done, plus there's no guarantee that idle power won't be similar to the slower part.

jgreco,

I read a few reviews which compared power usage (using a standard power-meter plug); the first said around 55 W idle for the Xeon D as opposed to around 30 W idle for the C2750. I hadn't gone as far as breaking it down by CPU load because, to be fair, my system will be idle 90-95% of the time, so idle power is what matters to me. The Xeon D-1540 looks a great bit of kit but I can't really justify the extra expense, plus I don't seem to be able to find any for sale in the UK, where I'm based! I'll be going for the C2750, which will more than handle what I need; this is only for a home setup after all!

Thank you for your input, all of this is helping me greatly.
 

Mr P

Dabbler
Joined
May 25, 2016
Messages
15
I was running iSCSI on my C2750 and was getting decent speeds, but I was bottlenecked by my 1Gb NIC.



So you'll probably want a pool of mirrored VDEVs for iSCSI storage and probably RAIDZ2 for the backups. As for power usage, my electric bill has just about stayed the same between the C2750 and the Xeon board.

brando,

I've been reading about mirrored vdevs - they look and sound great. I was hoping to use the 5 x 6TB drives for the main storage array but I'm not sure how best to utilise them in a mirrored format - is it possible to have a "hot spare" assigned to the pool? If so I could have a 2 or 4 way mirror and a hot spare..

I'll most likely go for the 2TB drives to act as the backup pool in a Z2 format, as you suggest.
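For the layouts being weighed up here, a back-of-the-envelope capacity calculation helps. This is a generic sketch using the drive sizes from the thread, not FreeNAS output; real ZFS usable space will be lower (metadata, the TB/TiB difference, and the usual advice to keep iSCSI pools well under full):

```python
# Rough raw-capacity sketch for the pool layouts discussed in this thread.
# Figures are raw TB; actual ZFS usable space will be noticeably lower.

def mirror_capacity(drive_tb, n_drives, mirror_width):
    """Raw TB of a pool striped across N-way mirror vdevs.
    Drives that don't fill a complete mirror are left over
    (e.g. kept as hot spares)."""
    n_vdevs = n_drives // mirror_width
    return n_vdevs * drive_tb

def raidz2_capacity(drive_tb, n_drives):
    """Raw TB of a single RAIDZ2 vdev (two drives' worth of parity)."""
    return (n_drives - 2) * drive_tb

# 5 x 6 TB: two 2-way mirrors, with the 5th drive as a hot spare
print(mirror_capacity(6, 4, 2))   # 12 (TB raw)

# 10 x 2 TB as a single RAIDZ2 backup pool
print(raidz2_capacity(2, 10))     # 16 (TB raw)
```

With those numbers the 2-mirrors-plus-spare layout comfortably fits inside the 16 TB RAIDZ2 backup pool, which is what makes the backup plan workable.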
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
Did you really mean to say a 4-way mirror? That would mean that of the 4 disks, 3 would be redundant copies.


Sent from my iPhone using Tapatalk
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
jgreco,

I read a few reviews which compared power usage (using a standard power-meter plug); the first said around 55 W idle for the Xeon D as opposed to around 30 W idle for the C2750. I hadn't gone as far as breaking it down by CPU load because, to be fair, my system will be idle 90-95% of the time, so idle power is what matters to me. The Xeon D-1540 looks a great bit of kit but I can't really justify the extra expense, plus I don't seem to be able to find any for sale in the UK, where I'm based! I'll be going for the C2750, which will more than handle what I need; this is only for a home setup after all!

Thank you for your input, all of this is helping me greatly.

That seems way off. No idea how a CPU with a 45W TDP and a known power delta (idle vs loaded) of about 40 watts would end up like that. The power delta implies that the idle power of the CPU alone is around 5 watts, and a system board that consumes an additional 50 watts idle must have some Lego guys doing arc welding in the back room or something.

I mean, I can easily dream up ways to make that happen to a complete system, add lots of stuff to it. For example, I've got an X10SDV-7TP4F on order here. Since it has an LSI2116 on it, I can see it eating some additional watts. The LSI 9201-8i and its 16W TDP is probably the best guess-data-source for that. But I don't really know. If you look at the board, there's not much else on it. If we assume 5 watts for the BMC, then some memory slots imply a few watts for memory, some SFP+ cages imply two more watts for SFP's, the passive heatsink implies some forced air cooling will be mandatory, and two PCIe slots. But without filling the PCIe slots, it seems like it'd be really rough to get up to 55 watts idle, AND I had to include a board with an HBA on it to get that high.

Maybe I'll be able to produce some apples-and-apples numbers later since I think we still have a C2750 hanging around here somewhere too.
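jgreco's idle budget can be tallied explicitly. Every per-component figure below is his rough guess from the post (BMC, HBA, RAM, SFP+), not a measurement:

```python
# Tallying jgreco's guessed idle budget for an X10SDV-7TP4F-class board.
# All values are the rough per-component estimates from the post above,
# not measured numbers.
idle_budget_watts = {
    "CPU (inferred: ~40 W load/idle delta vs 45 W TDP)": 5,
    "BMC": 5,
    "LSI2116 HBA (LSI 9201-8i 16 W TDP as best guess)": 16,
    "RAM (a few watts across the slots)": 3,
    "SFP+ modules (~1 W each)": 2,
}

total = sum(idle_budget_watts.values())
print(total)  # 31 -- well short of the 55 W idle the review claimed
```

Even with a guessed-generous HBA figure included, the tally lands around 31 W, which is the point: 55 W idle is hard to explain for a bare board.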
 

Mr P

Dabbler
Joined
May 25, 2016
Messages
15
Did you really mean to say a 4-way mirror? That would mean that of the 4 disks, 3 would be redundant copies.


Sent from my iPhone using Tapatalk
gpsguy,

You're quite right, I meant a 2 way mirror with the possibility of adding a hot-spare.
 

Mr P

Dabbler
Joined
May 25, 2016
Messages
15
That seems way off. No idea how a CPU with a 45W TDP and a known power delta (idle vs loaded) of about 40 watts would end up like that. The power delta implies that the idle power of the CPU alone is around 5 watts, and a system board that consumes an additional 50 watts idle must have some Lego guys doing arc welding in the back room or something.

I mean, I can easily dream up ways to make that happen to a complete system, add lots of stuff to it. For example, I've got an X10SDV-7TP4F on order here. Since it has an LSI2116 on it, I can see it eating some additional watts. The LSI 9201-8i and its 16W TDP is probably the best guess-data-source for that. But I don't really know. If you look at the board, there's not much else on it. If we assume 5 watts for the BMC, then some memory slots imply a few watts for memory, some SFP+ cages imply two more watts for SFP's, the passive heatsink implies some forced air cooling will be mandatory, and two PCIe slots. But without filling the PCIe slots, it seems like it'd be really rough to get up to 55 watts idle, AND I had to include a board with an HBA on it to get that high.

Maybe I'll be able to produce some apples-and-apples numbers later since I think we still have a C2750 hanging around here somewhere too.

If you could provide some real world data that would be great!

One example of power consumption figures for the D-1540 and the C2750 are here: http://www.anandtech.com/show/9185/intel-xeon-d-review-performance-per-watt-server-soc-champion/16

Unlike yourself, I don't at present have access to the actual items so can't perform my own tests/measurements - I therefore need to rely on what I can find on the web.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yeah, but if you look at what it says on that page, it uses the word "entangled" to indicate that the previous results bear no relationship to each other; they're presented in order to establish the power delta, and they come up with 41, which is really close to the ~40 I've heard in the past and the 45W TDP of the package.

See, if I have a platform where there's an arc welder that takes 2000 watts on it, and I want to see how much CPU idle vs fully loaded is used, I take measurements at idle and get 2105 watts, and then I run whatever your fave CPU stressing test is, and measure 2145 watts. I subtract and see a 40 watt delta. This doesn't mean that the CPU takes 2105 watts at idle or that other systems would be expected to consume that much current, it just means that *that particular* platform happens to. The 21xx numbers are useless. It's the delta that's useful information, and from which we can infer additional information.

And the delta and inferred numbers are numbers that you can indeed compare.
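The arc-welder point reduces to a one-line subtraction: the absolute wall readings are platform-specific noise, while the delta transfers between platforms. The second pair of readings below is hypothetical, just to show the delta matching:

```python
# The "arc welder" argument above in code: absolute wall-power readings
# from one platform don't transfer to another, but the idle/loaded
# delta for the same CPU does.

def cpu_power_delta(idle_watts, loaded_watts):
    """CPU power delta from two wall readings on the same platform."""
    return loaded_watts - idle_watts

# Platform with a 2000 W arc welder attached (readings from the post):
print(cpu_power_delta(2105, 2145))  # 40

# Same CPU on a hypothetical lean board (made-up readings):
print(cpu_power_delta(35, 75))      # 40 -- same delta, wildly
                                    # different absolute numbers
```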

I can't really look at the rest of that article right now because it took over a minute for that to come up as it was, and the rest of the pages are failing to load entirely with a server fail message.

If you want to do a direct head-to-head comparison of boards, the only fair way to do that is to take a single high efficiency PSU, put it on a bench wattmeter, and then plug in your test targets with a bare minimum of accessories (minimal RAM, etc), and proceed to establish actual apple-to-apple numbers by booting into an OS that will actually idle the CPU in a C-state (not just sit at the BIOS "no boot device" prompt, which is often a loop) and then also a full-out CPU stress test. You seem to have mistaken the Anandtech results for a head-to-head comparison, which it clearly is not.
 

Mr P

Dabbler
Joined
May 25, 2016
Messages
15
What you say makes sense jgreco. I am appreciative of the time and effort you make to answer the queries (and I get the feeling you like arc welders!!!). I'm off to hunt around to see if I can locate a source for a Xeon D-1540 in the UK!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
What you say makes sense jgreco. I am appreciative of the time and effort you make to answer the queries (and I get the feeling you like arc welders!!!). I'm off to hunt around to see if I can locate a source for a Xeon D-1540 in the UK!

Feel free to wait a little while and see if I can produce some more real numbers in the next week or so. I'm definitely interested in saving power; typically I don't mind spending on capex to save on opex. Back in 2011 we had migrated to a really nice cluster of Sandy/Ivy hypervisors in the office, but we rapidly hit the pain level for being limited by CPU and RAM. The E5 stuff is *great* for hypervisors and performance, but not so great on power consumption. The Xeon D stuff looks really weak on the per-core CPU basis but that might still be very workable if I can get a ton of memory.
 

brando56894

Wizard
Joined
Feb 15, 2014
Messages
1,537
brando,

I've been reading about mirrored vdevs - they look and sound great. I was hoping to use the 5 x 6TB drives for the main storage array but I'm not sure how best to utilise them in a mirrored format - is it possible to have a "hot spare" assigned to the pool? If so I could have a 2 or 4 way mirror and a hot spare..

I'll most likely go for the 2TB drives to act as the backup pool in a Z2 format, as you suggest.
I'm sure you could. I have striped mirror vdevs but I don't have a hot spare in place. You shouldn't really need one if the box is in an accessible area; also, resilvering time for mirrors is a lot lower than for RAIDZ, so the risk of losing another disk during a rebuild is a lot lower.

Sent from my Nexus 6 using Tapatalk
 