New build, looking for constructive criticism

Status
Not open for further replies.

Visseroth

Guru
Joined
Nov 4, 2011
Messages
546
OK, so I have my new server built. I'm still putting thought into the configuration and looking for constructive criticism on how to improve it, the goal being a balanced server that provides capacity, redundancy, and good speed with decent IOPS.
The server will primarily be used for storage, but I am considering storing VMware VMs on my NAS as opposed to having a bunch of drives in the VMware server itself. That isn't a requirement, though.
Most of the time the server handles local network file access for no more than about 6 users at any one time, and it also serves as an offsite backup target, a local backup server, and a file vault, with the occasional GIANT file transfer when backing up client servers and such.
My preference is maximum-speed file transfers, and I'm planning to put a 10Gb backbone between the server and the switch so that my giant 1Gbps file transfers don't interrupt anything else going on.

Current build is as follows...
SuperMicro SC847A chassis with an X8DTH-iF motherboard
48GB of RAM
Intel Xeon X5690 6-core processor, 3.46GHz, 12MB Smart Cache
5 LSI 9211-8i in IT mode
Currently 12 HGST 4TB 7200RPM Deskstar NAS drives, which I'm planning to configure as two 8-drive RAIDZ2 pools once I get the rest of the drives
2 Intel SSD Pro 2500 240GB drives for L2ARC
1 Corsair 60GB SSD for the ZIL (since this is an MLC drive, I'm considering swapping it for an SLC one)
1 Supermicro 32GB SATA DOM internal solid state drive on order (USB boots way too slow!)

I'm also considering putting in a SLOG SSD, though I don't know if I actually need it, whether it would even be beneficial, or whether it would just put my pool at risk due to the fact that the SLOG would be standing on one SSD leg, so to speak.

I'll also be running a Plex Server and hopefully at some point CrashPlan and OwnCloud as Plugins.

Thoughts? Ideas? Input?

(Edit: Forgot to add a few things.)
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
If you really do plan to do this...
I am considering storing VMware VMs on my NAS
then you might need this...
I'm considering putting in a SLOG SSD, though I don't know if I actually need it

put my pool at risk due to the fact that the SLOG would be standing on one SSD leg
If you need a dedicated SLOG device, you also need the right type, i.e. one that guarantees to complete pending writes if the power fails.
Do you know the difference (and similarity) between ZIL and SLOG?

Seems like you need to do some more reading.
 

Visseroth

Guru
Joined
Nov 4, 2011
Messages
546
I have done quite a bit of reading, though admittedly I have much more to learn. I'm just learning about ZILs and SLOGs, which is why I brought it up. My only concern is that if the SLOG were to fail, would it destroy my pools and thereby destroy the data stored on them?
I do realize these are MLC drives and that I should replace them with SLC drives for long-term reliability; that's something I must have missed months back when I put them in. One of the drives I just had lying around, so I thought, "I'll just use it as a ZIL," though that's doubtfully a good idea for reliability's sake.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
I'm just learning about ZILs and SLOGs, which is why I brought it up.
The basic concepts you need to grasp are that every pool has a ZIL, which stands for ZFS Intent Log, and that for certain workloads, it makes sense to get the ZIL off the main storage and onto a dedicated device, i.e. an SLOG.
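As a rough sketch of what that looks like in practice (the pool name "tank" and the device label gpt/slog0 are placeholders, not your actual setup), attaching a dedicated log device from the shell is a one-liner:

zpool add tank log gpt/slog0   # move the ZIL onto a dedicated SLOG device
zpool status tank              # the SSD now shows up under its own "logs" section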
My only concern is that if the SLOG were to fail, would it destroy my pools and thereby destroy the data stored on them?
This would be true for earlier versions of ZFS, but I think now you would only lose the uncommitted transactions, which could be disastrous or not, depending on your application. However, I'm no SLOG expert.
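One reassuring detail, assuming a reasonably current ZFS: a dedicated log device is one of the few vdev types you can also remove again, so a rough recovery sketch (placeholder names again) is simply:

zpool status tank              # a dead SLOG shows up as FAULTED/UNAVAIL under "logs"
zpool remove tank gpt/slog0    # detach it; the ZIL falls back onto the main pool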
I do realize these are MLC drives and that I should replace them with SLC drives for long-term reliability.
There's more to it than MLC vs SLC, as I mentioned above.
 

Visseroth

Guru
Joined
Nov 4, 2011
Messages
546
OK. Well, I did download that guide and will be going through it, so I won't beat a dead horse.
If you were to make any changes to such a system what would you do?
 

Visseroth

Guru
Joined
Nov 4, 2011
Messages
546
I guess ZIL and L2ARC had me somewhat confused, and I'm still wrapping my head around it a little, but I understand for the most part what their functions are.
As far as I can see, it looks like I don't really need them, as most of what I do isn't extremely intense. I could see how they would be useful in a multi-user environment or with multiple very busy VMs.
For now I think I'll finish my stress test of the drives.

So thank you for that.

Anyone else have any suggestions or ideas?
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
You need a SLOG with power protection (like capacitors) built into the SSD itself. (And as Robert mentioned, you said ZIL but meant SLOG.)

You need to at least double your RAM for that much L2ARC.
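(Rough arithmetic, assuming the often-quoted rule of thumb that L2ARC shouldn't be much more than about 5x your RAM: 2 x 240GB = 480GB of L2ARC, and 480GB / 5 = 96GB, i.e. roughly double the 48GB currently installed.)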

Look into the SC847E16 variant instead of the "A". The cabling is much easier, and you would only need 1 HBA instead of 5.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
I guess ZIL and L2ARC had me somewhat confused
I was wondering about that.
As far as I can see it looks like I don't really need them as most of what I do isn't extremely intense
Looks like you're getting the hang of it. There is a lot to learn, and no shortcuts if you want a good outcome, other than buying an off-the-shelf system.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Remove the SSDs and make 2 RAIDZ2 vdevs with 6 disks each.

Parity RAID is anathema to VMFS if you care about performance. In very limited homelab scenarios you can survive, but you don't build a 12-drive hexcore with 48GB of RAM to hamstring it.

@Visseroth - you will want to use six mirror vdevs of 2 drives each, which will cut your storage space in half. There's also a rule of thumb for maximum performance that says "don't use more than 50% of your pool" which would reduce it again. That would give you roughly 12TB before performance starts to dip. Safety won't be compromised, but pool fragmentation starts to take more of a toll as you fill it up more.
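To make the two layouts concrete, here's a rough sketch of what each would look like at pool-creation time from the shell (the pool name "tank" and the daX device names are placeholders; FreeNAS normally does this through the GUI using gptid labels):

# six 2-way mirror vdevs - better IOPS for VM storage, half the raw space
zpool create tank \
  mirror da0 da1  mirror da2 da3  mirror da4 da5 \
  mirror da6 da7  mirror da8 da9  mirror da10 da11

# two 6-disk RAIDZ2 vdevs - more usable space, fewer IOPS
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 \
  raidz2 da6 da7 da8 da9 da10 da11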

SLOG devices are about safety. If you are only making VMs "for play" and are willing to run the risk of losing your data/corrupting your VMs, and you have a good backup strategy that can mitigate the damage, you can run without an SLOG in async write mode. However, consider the cost of your lost time.
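As a concrete (and hedged) sketch of that trade-off, sync behavior is a per-dataset property; assuming a placeholder dataset named tank/vms:

zfs set sync=disabled tank/vms   # async writes: faster, but a power loss can drop the last few seconds of writes
zfs set sync=standard tank/vms   # default: honors sync requests, which is where a proper SLOG earns its keep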

Proper SLOG devices, as @depasseg mentioned, will still commit writes even if power is cut. Good options include the Intel DC series SSDs; despite being MLC, they are still quite capable. The NVMe models are best if you can spare a PCIe slot, but the SATA models are fine as well.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
a rule of thumb for maximum performance that says "don't use more than 50% of your pool"
IIRC, the 50% thing only applies if sharing via iSCSI.
 

Visseroth

Guru
Joined
Nov 4, 2011
Messages
546
Yea, and this is exactly why I posted my system, specs, and plans: I wanted some criticism to make sure I didn't have any wrong ideas, and you guys pointed out some wrong ideas! Thanks! You are appreciated!

@depasseg Unfortunately or fortunately, however you want to look at it, I already have the SC847A. I'm kind of old school, I guess, in that I like a direct connection to each of the drives. One less layer of "what the heck" to figure out if things don't work as expected :)
So yea, everything there I already have, but that doesn't mean I can't sell something to replace it with something else, or re-purpose it. For instance, I'm thinking I may sell the SSDs and get one of these Intel DC or capacitor-backed SSDs for the SLOG, though when I was looking around in the web GUI I didn't see an option to build a SLOG (maybe someone can point out how to build one of those), or maybe I overlooked it.
Granted, I'm not really concerned about power outages, as I have 2 battery backups on my servers, and one of them is secondary to the smaller one, so that if the first battery backup fails the second will hold long enough for everything to shut down, and likely then some. I modded the second battery backup and put 2 deep-cycle 160Ah batteries on it (it was used and given to me), granted in series at 24V, but it will still hold longer than the 48V 60Ah unit (don't quote me on the Ah, I haven't looked at the batteries in a while) that is the primary.
And I'd double my RAM, but 6x16GB sticks don't come nearly as cheap as the 6x8GB sticks I currently have; 16GB sticks have to be registered/buffered, and my 8GB sticks are just unbuffered ECC. Otherwise I'd have to get another matching CPU and 6 more 8GB sticks, which would cost almost as much as just replacing what I have.

I'm just trying to get the most bang for my buck without compromising on everything. I mean, heck, it's a SuperMicro 36-bay case with a 6-core 3.46GHz CPU and 48GB of RAM. Pretty decent, I'd say; I just don't want to )(*& it up, if you know what I mean.

But yea, I have been giving the pool configuration further thought. I thought about going 8 drives x 2 vdevs with room to grow by 2 more vdevs, which leaves 4 slots open for SSDs (or I could put the SSDs inside the case), or I could go 6 drives x 2 vdevs with what I have, which gives me room to grow by 4 more vdevs. Heck, I even thought about doing 4 drives in 3 vdevs, which leaves me room for 6 more vdevs, but man, I would really be losing some space. Granted, my redundancy would be really redundant. Mind you, all of these are RAIDZ2 configurations.

I'm liking the 6x6 idea, as it gives a balance between growth and storage capacity while still maintaining speed. Really, if speed were my complete end goal, I would have gone all SSD :) Not to say I can't make a pool for that later if I need to anyhow :)

@HoneyBadger Yea, my VMs are mostly for experimenting, play, etc. Some of them do some work, but nothing seriously IO-intense, except maybe on the CPU and RAM at times. I'm thinking it may be a better idea to keep any IO-intense or critical VMs on the VM machine anyhow; it simplifies things a bit.
And I like those NVMe models. Hopefully they make them for a 2U chassis, and I do have 1 PCI Express slot left. I had a CoRAID card there. I think it's a wired 10Gb card with a 4GB buffer and a battery backup, but I'm not completely sure, as I was unable to find any other identifying marker on the card. The only thing that made me think it might be a 10Gb card, besides the 4GB stick of RAM, was the fact that it doesn't have a typical LED light set on it; it's blue and white, and turns red when the machine posts.
I don't really know what it is, other than a card with a blue/white/red LED, 2 NICs (which do indicate link status when plugged into a switch), 4GB of RAM, an Intel processor of some sort with a heatsink, and a battery backup.

I'll attach a picture, maybe you guys have a better idea as to what it is.....


Edit: I looked at the Intel NVMe drives. Seriously nice, and they do indeed fit a 2U, so one is on my wish list. This server and other bills have left me kind of strapped ATM, so I'll probably add one in early next year. We shall see :)
 

Attachments

  • 20151109_225917.jpg

Visseroth

Guru
Joined
Nov 4, 2011
Messages
546
Remove the SSDs and make 2 RAIDZ2 vdevs with 6 disks each.

I think I'd have to agree.

BTW, I seriously like that little board you have for your server (listed in your signature). 12V input too? Very nice! How well does it perform?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
IIRC, the 50% thing only applies if sharing via iSCSI.

It's more important with iSCSI than NFS, because of the block vs file level allocation, but still a good idea regardless.

I had a CoRAID card there. I think it's a wired 10Gb card with a 4GB buffer and a battery backup, but I'm not completely sure, as I was unable to find any other identifying marker on the card. The only thing that made me think it might be a 10Gb card, besides the 4GB stick of RAM, was the fact that it doesn't have a typical LED light set on it; it's blue and white, and turns red when the machine posts.

I can see a Marvell GbE chip there, but I don't know what's under the heatsink. I want to say it's an external HBA/RAID card, though, since a socketed RAM-cache upgrade for a network card would be a bit of an oddity. Moot point if you're going to replace it with an NVMe SSD down the road, though.
 

Visseroth

Guru
Joined
Nov 4, 2011
Messages
546
Well, that, and I don't really know what to do with it. And why would a RAID card have 2 NICs on it? I guess I'm not sure how that would connect to an external RAID.
I dunno, and as such I doubt it's really needed or that it would be handy.
I'll do further digging and see what I can find on it.
 

Visseroth

Guru
Joined
Nov 4, 2011
Messages
546
Yea, they're RJ45s, and I plugged one into my gigabit switch and it linked, so I'm not sure either. I asked a buddy if he knew what they were but haven't heard back, and I was going to email CoRAID but they went out of business.
 

Visseroth

Guru
Joined
Nov 4, 2011
Messages
546
OK, so I did some stress testing of the drives. One drive seems a bit off from the rest, and I wanted to get some input. What are your thoughts? Within tolerance, or something I should investigate?

Selected disks: da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11
<ATA HGST HDN724040AL A5E0> at scbus0 target 0 lun 0 (pass0,da0)
<ATA HGST HDN724040AL A5E0> at scbus0 target 1 lun 0 (pass1,da1)
<ATA HGST HDN724040AL A5E0> at scbus0 target 2 lun 0 (pass2,da2)
<ATA HGST HDN724040AL A5E0> at scbus0 target 3 lun 0 (pass3,da3)
<ATA HGST HDN724040AL A5E0> at scbus9 target 0 lun 0 (pass5,da4)
<ATA HGST HDN724040AL A5E0> at scbus9 target 1 lun 0 (pass6,da5)
<ATA HGST HDN724040AL A5E0> at scbus9 target 2 lun 0 (pass7,da6)
<ATA HGST HDN724040AL A5E0> at scbus9 target 3 lun 0 (pass8,da7)
<ATA HGST HDN724040AL A5E0> at scbus9 target 8 lun 0 (pass9,da8)
<ATA HGST HDN724040AL A5E0> at scbus10 target 3 lun 0 (pass10,da9)
<ATA HGST HDN724040AL A5E0> at scbus10 target 4 lun 0 (pass11,da10)
<ATA HGST HDN724040AL A5E0> at scbus10 target 5 lun 0 (pass12,da11)
Is this correct? (y/N): y
Performing initial serial array read (baseline speeds)
Fri Nov 13 02:06:39 PST 2015
Fri Nov 13 02:33:39 PST 2015
Completed: initial serial array read (baseline speeds)

Array's average speed is 159.323 MB/sec per disk

Disk Disk Size MB/sec %ofAvg
------- ---------- ------ ------
da0 3815447MB 160 100
da1 3815447MB 160 100
da2 3815447MB 160 101
da3 3815447MB 160 100
da4 3815447MB 160 100
da5 3815447MB 159 100
da6 3815447MB 160 100
da7 3815447MB 159 100
da8 3815447MB 159 100
da9 3815447MB 160 100
da10 3815447MB 160 100
da11 3815447MB 155 97

Performing initial parallel array read
Fri Nov 13 02:33:39 PST 2015
The disk da0 appears to be 3815447 MB.
Disk is reading at about 160 MB/sec
This suggests that this pass may take around 396 minutes

Serial Parall % of
Disk Disk Size MB/sec MB/sec Serial
------- ---------- ------ ------ ------
da0 3815447MB 160 160 100
da1 3815447MB 160 160 100
da2 3815447MB 160 159 99
da3 3815447MB 160 160 100
da4 3815447MB 160 159 99
da5 3815447MB 159 159 100
da6 3815447MB 160 159 100
da7 3815447MB 159 159 100
da8 3815447MB 159 159 100
da9 3815447MB 160 160 100
da10 3815447MB 160 159 99
da11 3815447MB 155 158 102

Awaiting completion: initial parallel array read
Fri Nov 13 11:05:36 PST 2015
Completed: initial parallel array read

Disk's average time is 30184 seconds per disk

Disk Bytes Transferred Seconds %ofAvg
------- ----------------- ------- ------
da0 4000787030016 30717 102
da1 4000787030016 29923 99
da2 4000787030016 29802 99
da3 4000787030016 30051 100
da4 4000787030016 30428 101
da5 4000787030016 30183 100
da6 4000787030016 30220 100
da7 4000787030016 30450 101
da8 4000787030016 30137 100
da9 4000787030016 30199 100
da10 4000787030016 30028 99
da11 4000787030016 30067 100

Performing initial parallel seek-stress array read
Fri Nov 13 11:05:36 PST 2015
The disk da0 appears to be 3815447 MB.
Disk is reading at about 35 MB/sec
This suggests that this pass may take around 1822 minutes

Serial Parall % of
Disk Disk Size MB/sec MB/sec Serial
------- ---------- ------ ------ ------
da0 3815447MB 160 32 20
da1 3815447MB 160 32 20
da2 3815447MB 160 35 22
da3 3815447MB 160 34 22
da4 3815447MB 160 35 22
da5 3815447MB 159 32 20
da6 3815447MB 160 34 21
da7 3815447MB 159 34 21
da8 3815447MB 159 35 22
da9 3815447MB 160 36 22
da10 3815447MB 160 31 19
da11 3815447MB 155 33 21
 