X11SSH-LN4F build advice.


Markymark

Cadet
Joined
Sep 12, 2016
Messages
1
Hi All,

First post so go easy! :)

After tinkering with FreeNAS as an iSCSI target for my ESXi lab with whatever hardware I found lying around, I'm making the leap and building a purpose-built server. Whilst the intended use is primarily for ESXi (iSCSI), I would like another pool for an archive/backup volume of around 30TB. Whilst it's running a home lab, it does have some live servers on there (game servers, email servers, etc.) that I really don't want to lose, but if I did, my world wouldn't end... There are around 20 VMs that have light use - idle most of the time really... except my Ark Survival servers, which get hammered! ;)

Having read through many threads on these forums, I'm looking at the following kit. Some hardware I already have lying around, so I would like to reuse it where possible.

Case: X-Case RM424 Pro 24 bay (Existing)
Power Supply: Seasonic x1250 (Existing)
SAS controller: 2x Dell H200 flashed to IT mode (existing). Not sure on these, as Cyberjock comments that they don't handle S.M.A.R.T.?
SAS expander: HP (I think) 24-port expander (existing)
Motherboard: X11SSH-LN4F (to buy). It seems people aren't using this board, but I want the quad NICs (1x management, 2x iSCSI and 1x CIFS). I've also read that this board doesn't like add-on SAS cards?
CPU: E3-1225-v5
RAM: 64GB (4x 16GB) Samsung ECC M391A2K43BB1-CPB - I haven't checked the compatibility list for these, so let me know if you know of a better option...

Drives...
iSCSI volume
6x Samsung 850 EVO 500GB in RAIDZ2 (existing) - looking for around 1.5TB usable, but conscious of fragmentation. Is this a major concern on SSDs?
2x Intel DC S3510 120GB (SLOG) - looking at sync=always.

Other iSCSI volume for lab/slow/unimportant stuff.
4x Samsung 830 250GB (existing)
2x Intel 320 60GB (existing) for SLOG - is this a waste of time?

CIFS volume
12x WD Red TB (existing) - I wanna fill this sucker up, but the data is mostly static, so I assume fragmentation won't be so much of an issue?
2x Samsung 750 250GB (SLOG)

And I think that's it - I know there are more disks than bays, but that's what duct tape is for! :D

Questions I have are:

1. Is the above reasonable?
2. Will the H200 cards be OK in this motherboard?
3. The fragmentation issues - are they really a problem given my use case?
4. Do I need a SLOG on the SSD volumes?

Thanks in advance!

Mark
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Hi All,
First post so go easy! :)
Welcome to the forum!
After tinkering with FreeNAS as an iSCSI target for my ESXi lab with whatever hardware I found lying around, I'm making the leap and building a purpose-built server. Whilst the intended use is primarily for ESXi (iSCSI), I would like another pool for an archive/backup volume of around 30TB. Whilst it's running a home lab, it does have some live servers on there (game servers, email servers, etc.) that I really don't want to lose, but if I did, my world wouldn't end... There are around 20 VMs that have light use - idle most of the time really... except my Ark Survival servers, which get hammered! ;)

Having read through many threads on these forums, I'm looking at the following kit. Some hardware I already have lying around, so I would like to reuse it where possible.

Case: X-Case RM424 Pro 24 bay (Existing)
Power Supply: Seasonic x1250 (Existing)
SAS controller: 2x Dell H200 flashed to IT mode (existing). Not sure on these, as Cyberjock comments that they don't handle S.M.A.R.T.?
SAS expander: HP (I think) 24-port expander (existing)
Motherboard: X11SSH-LN4F (to buy). It seems people aren't using this board, but I want the quad NICs (1x management, 2x iSCSI and 1x CIFS). I've also read that this board doesn't like add-on SAS cards?
CPU: E3-1225-v5
RAM: 64GB (4x 16GB) Samsung ECC M391A2K43BB1-CPB - I haven't checked the compatibility list for these, so let me know if you know of a better option...
That memory part number is on the tested memory list at Supermicro's website, so it should be okay. I use an older 4-LAN Supermicro board without problems (see 'my systems' for details). I also own 4 of the Dell PERC H200 cards; they can be flashed to P20 IT mode and are functionally equivalent to an LSI 9211-8i, including all desired SMART features. The X11SSH board has two x8 PCI-E slots, which suits your plan to use a pair of the Dell HBAs very well. I can't speak to whether the X11SSH works well with add-on SAS cards... though it's hard to imagine why it wouldn't.
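If it helps, once the cards are flashed you can sanity-check them and the SMART passthrough from the FreeNAS shell. A rough sketch, assuming the sas2flash utility is present and with device names as placeholders:

# list LSI SAS2008-based controllers and their firmware (should report P20, IT mode)
sas2flash -listall
# confirm the disks behind the HBAs/expander are visible to FreeBSD
camcontrol devlist
# SMART data passes straight through the IT-mode firmware
smartctl -a /dev/da0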
Drives...
iSCSI volume
6x Samsung 850 EVO 500GB in RAIDZ2 (existing) - looking for around 1.5TB usable, but conscious of fragmentation. Is this a major concern on SSDs?
2x Intel DC S3510 120GB (SLOG) - looking at sync=always.
Standard recommendation for iSCSI volumes is to use mirrors and to plan on never using more than 50% of the total space.

Why use mirrors? Because IOPS are important for iSCSI performance, and IOPS scale with vdevs. With 6 drives configured as 3 mirror vdevs you would have 3 times the IOPS of a RAIDZ2 pool made up of the same 6 drives -- at the cost of storage space.
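To make that concrete, here's roughly what the two layouts look like from the command line (a sketch only - pool and device names are placeholders, and the FreeNAS volume manager builds the same thing for you):

# 3x 2-way mirrors: three vdevs, so ~3x the IOPS, ~1.5TB raw
zpool create ssdtank mirror da0 da1 mirror da2 da3 mirror da4 da5

# one 6-wide RAIDZ2 vdev: more raw space (~2TB), but only one vdev's worth of IOPS
zpool create ssdtank raidz2 da0 da1 da2 da3 da4 da5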

Why not use more than 50% of capacity? Because ZFS is a copy-on-write filesystem. So there are fragmentation issues, as you mentioned.

Does any of this matter for SSD-based storage? I believe so... but haven't tested it myself as I don't have the $$$ to set up a large SSD-based iSCSI system. :)

So your RAIDZ2 array, providing ~2TB (less overhead), is the 'wrong' topology and is too small to deliver the usable space you desire...

For a lab situation with low stress on the system? You could probably get by just fine with a RAIDZ2 array - I do in my lab. But I think you'll need more space than 6x 500GB SSDs will provide if you really need 1.5TB of usable space.
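Rough numbers, assuming 500GB per SSD and the 50% occupancy rule of thumb for iSCSI:

3x 2-way mirrors: 3 x 500GB = ~1.5TB raw -> ~750GB at 50% occupancy
6-wide RAIDZ2:    4 x 500GB = ~2.0TB raw -> ~1.0TB at 50% occupancy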

Regarding SLOG... the benefit of a SLOG device is that it's faster than the pool it serves... so a SLOG may not be very useful with an SSD-based pool. The Intel DC S3510 itself is okay as a SLOG device, but it's designed for read performance. A better choice would be the Intel DC S3700/S3710, which is optimized for writes and has higher endurance.
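For reference, sync behavior is just a dataset/zvol property, so it's easy to flip and compare. A sketch with placeholder names:

# force every write through the ZIL (and hence the SLOG, if one is attached)
zfs set sync=always ssdtank/iscsi-extent
zfs get sync,logbias ssdtank/iscsi-extent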
Other iSCSI volume for lab/slow/unimportant stuff.
4x Samsung 830 250GB (existing)
2x Intel 320 60GB (existing) for SLOG - is this a waste of time?
Same considerations apply as above, but if this is just 'disposable' stuff, consider simply turning synchronous writes off and dispensing with a SLOG device altogether. Live on the edge! :)
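Something along these lines (the dataset name is just an example):

# async-only writes: fast, no SLOG needed, but a crash can cost you the last few seconds of writes
zfs set sync=disabled ssdtank2/scratch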
CIFS volume
12x WD Red TB (existing) - I wanna fill this sucker up, but the data is mostly static, so I assume fragmentation won't be so much of an issue?
2x Samsung 750 250GB (SLOG)
Correct, it's okay to use more than 50% of capacity for this type of storage. Still, you never want to fill more than 80-90% of any ZFS pool. For CIFS storage it's doubtful that you'll need a SLOG device at all, especially not consumer drives like the Samsung 750s, which don't have the desired battery/capacitor power-loss protection.

The WD Reds are good drives. Since you have 12 of them, I recommend setting up 2 RAIDZ2 vdevs of 6 drives each.
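Roughly speaking, that pool would look like this (device names are placeholders; the GUI wizard lays it out the same way):

# two 6-disk RAIDZ2 vdevs: 8 disks' worth of space, any 2 disks per vdev can fail,
# and twice the IOPS of a single 12-wide vdev
zpool create cifstank raidz2 da0 da1 da2 da3 da4 da5 raidz2 da6 da7 da8 da9 da10 da11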
And I think that's it - I know there are more disks than bays, but that's what duct tape is for! :D

Questions I have are:

1. Is the above reasonable?
2. Will the H200 cards be OK in this motherboard?
3. The fragmentation issues - are they really a problem given my use case?
4. Do I need a SLOG on the SSD volumes?

Thanks in advance!

Mark
You're welcome, and good luck!
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
SAS controller: 2x Dell H200 flashed to IT mode (existing). Not sure on these, as Cyberjock comments that they don't handle S.M.A.R.T.?
They work fine. That is what I use primarily.

As far as SLOGs go, there was some debate a while ago about using an SSD SLOG with an SSD volume. If I recall correctly, it did actually increase performance. I don't recall all the details, but if I locate the thread I will post the link.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Regarding SLOG... the benefit of a SLOG device is that it's faster than the pool it serves... so a SLOG may not be very useful with an SSD-based pool. The Intel DC S3510 itself is okay as a SLOG device, but it's designed for read performance. A better choice would be the Intel DC S3700/S3710, which is optimized for writes and has higher endurance.
Think bigger: P3700. SATA/AHCI is going to be a bottleneck.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Think bigger: P3700. SATA/AHCI is going to be a bottleneck.
Good point, @Ericloewe. @Markymark, what he means is that a SLOG device with data transfer rates much faster than the SSDs in your pool will indeed boost performance. The Intel P3700 is one such device, and it would plug very nicely into the third (x4) PCIe slot on your motherboard. The disadvantage is the cost... :p
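If you did go that route, on FreeNAS/FreeBSD the card would typically show up as nvd0. A hedged sketch of dedicating a small slice of it to the SLOG (sizes, labels and pool name are just examples):

nvmecontrol devlist                              # confirm the P3700 is detected
gpart create -s gpt nvd0
gpart add -t freebsd-zfs -s 16G -l slog0 nvd0    # a SLOG only needs a few GB
zpool add ssdtank log gpt/slog0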
 

mjt5282

Contributor
Joined
Mar 19, 2013
Messages
139
I use that motherboard with several Intel SSDs (with iSCSI and VMware ESXi). I am an ESXi newbie, but it works. I'm beta testing FreeNAS 10.
 
CookiesLikeWhoa

Joined
Mar 22, 2016
Messages
217
From what I read when I was looking up a SLOG for all-SSD pools, it is beneficial because the SLOG will condense all the writes for the last x seconds and lay them out sequentially on the pool, instead of hitting the pool with random writes and causing more fragmentation. I'm not sure how true this is, as I have no way to test it, but I saw it pop up a couple of times. Hopefully someone with more knowledge can chime in to provide a more thorough answer.


Sent from my iPhone using Tapatalk
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
No, that's just what ZFS does anyway.

The point of a SLOG is that sync writes go immediately to a separate fast device, so that ZFS can report that they have succeeded without waiting to write the data to your actual pool devices.

So if your SLOG hardware is not significantly faster than your pool's data disks, there's no point.

No, without a SLOG, the sync write needs to be written to the pool immediately. With a SLOG, the sync write gets written to the SLOG, and then actually written to the pool when the transaction group is flushed. Hence having a SLOG or not makes no difference to how the written data is ultimately laid out on the pool.

I think what @CookiesLikeWhoa said applies.
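An easy way to see what a log device is actually doing is to watch per-vdev I/O while sync writes are flowing (pool name is a placeholder):

# per-vdev I/O every second; with sync=always the 'logs' section shows the SLOG
# absorbing the writes before each transaction group is flushed to the data vdevs
zpool iostat -v ssdtank 1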
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
To test my understanding of the above discussion, I'd speculate the following generalization (and would appreciate any corrections):
A SLOG would not be beneficial in the case of an all-SSD pool until the point where the "natural ZIL" on the pool becomes starved for IOPS. I.e., the break-even point is where the SLOG can 'relieve' the SSD pool.

If this statement is misinformed, please advise (and provide examples of what circumstances make it invalid).
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Hence having a SLOG or not makes no difference to how the written data is ultimately laid out on the pool
@cyberjock also stated the same in this thread:
Having the SLOG/ZIL on the zpool itself versus an external device has no effect on where the data will ultimately be stored, and therefore has no effect on fragmentation.
A SLOG would not be beneficial in the case of an all-SSD pool until the point where the "natural ZIL" on the pool becomes starved for IOPS. I.e., the break-even point is where the SLOG can 'relieve' the SSD pool.
Not sure about that. The way I interpret it is that having a SLOG (when sync=always) relieves the on-pool ZIL from having to respond. So instead of N drives responding, it is just the one (or two if mirrored). From the same thread:
With my setup, I have a 4-drive stripe of SATA (6Gbps) SSD mirrors (RAID10, basically). In sync=always mode I was able to basically double the synthetically-benchmarked performance by adding a SLOG. This was a major surprise to me, as:
  1. SSD SATA drives should be inherently fast.
  2. The SLOG drive I added is actually slightly slower than the SSDs it is supporting.
Now, I have no experience with NVMe, but the conclusion that I drew from my experiments is that regardless of the speed of the underlying zpool, there can be a benefit to a SLOG with sync=always mode. It should be trivial for you to test (and share results here), as SLOG configurations can be changed on the fly in a non-destructive, non-interrupting manner.

There is still another thread where someone did a bit more testing and showed the results with an all-SSD pool and variations of a single SLOG, mirrored SLOG, etc... That is the one I intended to link previously, but I will have to re-locate it.
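As that quote says, it's easy to try non-destructively; adding and removing a log device is just (placeholder pool/device names):

zpool add ssdtank log da6     # attach a single SLOG (or: log mirror da6 da7)
zpool remove ssdtank da6      # detach it again; the pool data is untouched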
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
A little more from "http://www.freenas.org/blog/zfs-zil-and-slog-demystified/"
By default, the short-term ZIL storage exists on the same hard disks as the long-term pool storage at the expense of all data being written to disk twice: once to the short-term ZIL and again across the long-term pool. Because each disk can only perform one operation at a time, the performance penalty of this duplicated effort can be alleviated by sending the ZIL writes to a Separate ZFS Intent Log or “SLOG”, or simply “log”. While using a spinning hard disk as SLOG will yield performance benefits by reducing the duplicate writes to the same disks, it is a poor use of a hard drive given the small size but high frequency of the incoming data.
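You can see at a glance whether a given pool already has a separate log vdev, or is still using the in-pool ZIL (pool name is a placeholder):

zpool status ssdtank    # a 'logs' section only appears if a separate SLOG device is attached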
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Excellent, @Mirfster. Thanks!
Now this makes intuitive sense.
Keyword: hitting the pool twice during writes.

A powerful NVMe drive for SLOG and life becomes beautiful x)
 