PCI SSD Options

kspare

Guru
Joined
Feb 19, 2015
Messages
508
I'm just thinking ahead here; our servers are working really well.

Our next phase is to increase our memory from 64GB to 256GB of RAM. I know this is a no-brainer!

We have 1 Intel 530 256GB SSD for our SLOG and 2 Intel 530 256GB SSDs as our mirrored ZIL drives.

I'm wondering if it makes sense to replace those drives with faster drives, e.g. Samsung or Intel enterprise models.

Or does it make sense to get a Fusion-io card and run the SLOG from that?

What have people tried?
 

diehard

Contributor
Joined
Mar 21, 2013
Messages
162
Moving the SLOG from an AHCI (SATA) device to NVMe would absolutely be worth it.

NVMe helps latency a lot, which is extremely important for the ZIL.

Check out the Intel DC P series; the P3700 has to be the ultimate SLOG, but I've yet to see someone actually use one.

Ignore any specs other than write latency and single-queue-depth throughput. Avoid any PCIe SSD based on older controllers like SandForce; they will be worse than a new SATA drive.
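
If you want to measure that yourself before buying, a QD1 sync-write test is easy to run with fio (available from ports/pkg). A minimal sketch, assuming the candidate device shows up as da5 (hypothetical name; writing to the raw device destroys whatever is on it, so only do this before the drive is in service):

    # 4K synchronous writes at queue depth 1 for 60 seconds; watch the
    # reported completion latency, not just the throughput number
    fio --name=sloglat --filename=/dev/da5 --ioengine=sync \
        --rw=write --bs=4k --iodepth=1 --numjobs=1 --fsync=1 \
        --runtime=60 --time_based

The latency line in fio's output is the number that predicts SLOG behaviour; a drive that looks great at QD32 can still be mediocre here.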
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
kspare said:
I'm just thinking ahead here; our servers are working really well.

Our next phase is to increase our memory from 64GB to 256GB of RAM. I know this is a no-brainer!

We have 1 Intel 530 256GB SSD for our SLOG and 2 Intel 530 256GB SSDs as our mirrored ZIL drives.

I'm wondering if it makes sense to replace those drives with faster drives, e.g. Samsung or Intel enterprise models.

Or does it make sense to get a Fusion-io card and run the SLOG from that?

What have people tried?
This is designed for exactly what you are wanting. IIRC, it is what iXsystems uses in TrueNAS for the SLOG device.

https://www.hgst.com/solid-state-storage/enterprise-ssd/sas-ssd/zeusram-sas-ssd
 

diehard

Contributor
Joined
Mar 21, 2013
Messages
162
The ZeusRAM is still a beast, but at this point I won't let myself accept that there are no better options for the price.
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
So correct me if I'm wrong, but if I were to increase my RAM to 256GB I could get rid of my SLOG, get an Intel 3700 400GB drive for my ZIL, and my box should be pretty damn good!
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
kspare said:
So correct me if I'm wrong, but if I were to increase my RAM to 256GB I could get rid of my SLOG, get an Intel 3700 400GB drive for my ZIL, and my box should be pretty damn good!
What you are referring to as your ZIL above is the SLOG. Your system always has a ZIL; by default it lives on the pool's own disks. If your sync writes are slow you can add a SLOG device to give ZFS a dedicated, faster home for the ZIL, but you never get rid of the ZIL itself. The SLOG should be power-loss protected by supercapacitor or battery, have high write endurance, and be pretty fast (otherwise, what's the point?).

Edit: This is probably worth a read. https://forums.freenas.org/index.php?threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/
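
You can also see which datasets will actually exercise the ZIL/SLOG by checking their sync property. A small sketch (tank/vmware is a hypothetical dataset name):

    # "standard" honours the application's sync requests (uses the ZIL/SLOG),
    # "always" forces every write through the ZIL,
    # "disabled" skips the ZIL entirely -- fast, but unsafe for VM storage
    zfs get sync tank/vmware
    NAME         PROPERTY  VALUE     SOURCE
    tank/vmware  sync      standard  default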
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
Using the GUI setup, I have two SSDs mirrored for the log drive and one SSD for the cache drive.

Unless I have things confused, the log drive is the ZIL and the cache drive is the SLOG?

Hopefully I don't have my SSDs configured backwards!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I think what you're calling a "cache" drive is L2ARC.
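
The two show up under different headings in zpool status, which is the quickest way to check which is which. A trimmed, hypothetical example (pool and device names made up):

    zpool status tank
      pool: tank
     state: ONLINE
    config:
            NAME        STATE
            tank        ONLINE
              mirror-0  ONLINE
                ada0p2  ONLINE
                ada1p2  ONLINE
            logs                    <- this is the SLOG
              mirror-1  ONLINE
                ada4p1  ONLINE
                ada5p1  ONLINE
            cache                   <- this is the L2ARC
              ada6p1    ONLINE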
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
I'm wondering if I'm better off to take those two drives that I'm using as a mirrored ZIL and create a SLOG stripe from the 3 SSDs?

What makes the most sense here? We have a one-hour backup battery and a generator with unlimited fuel, so power outages aren't a big issue here.
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
jgreco, you are right.

So I have two drives mirrored for my SLOG and 1 drive for L2ARC.

Would I be better off to make a stripe for my SLOG and run like that instead?
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
You don't want to stripe the SLOG; a mirror is correct if you are going to use two drives for it. The SLOG probably shouldn't be more than about 8GB in size, either; dedicating the whole 256GB capacity would be a waste (see the partitioning sketch below).

You can stripe the L2ARC drives for better performance. However, with your current amount of RAM you don't want to go over about 300GB of L2ARC.
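
If you do re-do the SLOG, one way to keep it small is to give ZFS only a small partition and leave the rest of the flash unallocated, which also gives the drive's garbage collection room to work. A hedged sketch with FreeBSD's gpart, assuming the two SLOG SSDs are da4 and da5 and the pool is named tank (hypothetical names; this wipes both drives):

    # one small partition per SSD, labelled for sanity
    gpart create -s gpt da4
    gpart add -t freebsd-zfs -s 16G -l slog0 da4
    gpart create -s gpt da5
    gpart add -t freebsd-zfs -s 16G -l slog1 da5

    # attach the pair as a mirrored log vdev
    zpool add tank log mirror gpt/slog0 gpt/slog1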
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Regardless of whether you have 64GB or 256GB, you are likely sooner or later to fill your ARC. I don't recall your application or your host specs, and you conveniently omitted that from your opening post, but at the 128GB+ scale, L2ARC is usually something worth considering. It can be relatively lower performance SSD and that's just fine.

SLOG (ZIL) is best served on a very low latency SSD, or a pair of them, mirrored.
 

diehard

Contributor
Joined
Mar 21, 2013
Messages
162
Don't stripe your SLOG. Like I said, QD=1 performance and latency are what matter, and striping won't improve either.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Ok. So now we've sorted out what you're doing... with 256GB of RAM, you could feel free to take up to all three of those 256GB Intel 530's as L2ARC.

The 530 is not a great choice for a SLOG (ZIL) device. The S3710 is well-liked in this role and much faster. The P3700, which I'd love to order one of, ought to be even better, but as noted, we've not seen anyone try one yet.
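
Mechanically, the repurposing is just a remove and an add. A hedged sketch, assuming the pool is named tank, the existing log mirror shows up as mirror-1 in zpool status, and the three 530s end up as ada4 through ada6 (all hypothetical names):

    # detach the old SLOG mirror from the pool
    zpool remove tank mirror-1

    # cache devices are independent (effectively striped), so just add all three
    zpool add tank cache ada4 ada5 ada6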
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
Ok, so if I was going to increase performance, I would increase the RAM to 256GB, get an Intel 3700 400GB for my SLOG, and take my 3 SSDs and make an L2ARC stripe?

My host specs are:
Dual Intel(R) Xeon(R) CPU E5-2603 v2 @ 1.80GHz
64GB DDR3 RAM
20 2TB drives in mirrored vdevs
1 spare 2TB drive
LSI 12Gb HBA and 12Gb backplane
2 Intel 530 240GB SSDs for the mirrored SLOG
1 Intel 530 240GB for L2ARC

We use the server to host our clients' terminal servers and file servers.
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
To be honest, everything is working incredibly well, but we're adding more customers every month and I like to plan things in advance so I can just pull the trigger instead of reacting without a plan.

Do you think we would see a modest increase in performance if I swapped out the two Intel 530's for two S3710 200GB drives? I'd still mirror the drives, just because I'd be nervous about losing one.
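
For what it's worth, the swap itself can be done live, one mirror member at a time, so the SLOG stays protected throughout. A hedged sketch, assuming the pool is tank, the old 530s are ada4/ada5, and the new S3710s come up as da8/da9 (hypothetical names):

    # swap each log mirror member in turn; the tiny log vdev resilvers quickly
    zpool replace tank ada4 da8
    zpool replace tank ada5 da9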
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
In a terminal server environment? Is it currently busy? Too busy?

My take:

1) The 2603v2 is a contemptible CPU, even when doubled.

2) I'm happy you're not reporting problems with the LSI 12G stuff.

3) Your current system probably has about 20-40GB of ARC and 240GB L2ARC; let's say 256GB of read cache.

4) Your upgraded system could have 200-230GB of ARC (needs to be tuned right for this; see the sketch after this list!) and then 768GB of L2ARC using those 530's, which works out to almost 1TB of read cache.

5) Something better for SLOG will increase write responsiveness if you're reliant on sync writes, which you imply you are. I'm guessing if you bought 2603v2's that you're cost-sensitive, so that is probably best done as an S3710, or a pair if you want to go that route.

Done right, the upgrades could be a real boost, but you might start hitting the limits of the underlying CPU.
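
On FreeBSD the ARC ceiling is a loader tunable, so capping it just below what the box can spare is one line. A hedged sketch (the 230GiB figure is an assumption; leave headroom for the OS, and on FreeNAS set this through System -> Tunables rather than editing the file directly):

    # /boot/loader.conf
    vfs.zfs.arc_max="246960619520"   # ~230GiB, in bytes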
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
Here is my arcstat:

[root@kspsan03] /usr/bin# ./arcstat

    4.19TiB / 18.3TiB (ESXI_1)
    2.87GiB / 464GiB (freenas-boot)
    45.33GiB (MRU: 42.19GiB, MFU: 3.14GiB) / 64.00GiB
    Hit ratio -> 83.21% (higher is better)
    Prefetch -> 19.27% (higher is better)
    Hit MFU:MRU -> 67.23%:30.09% (higher ratio is better)
    Hit MRU Ghost -> 0.21% (lower is better)
    Hit MFU Ghost -> 0.94% (lower is better)
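
(For anyone who wants the raw counters behind a summary like this, they live under a sysctl tree on FreeBSD; the values shown are illustrative only:)

    sysctl kstat.zfs.misc.arcstats.size
    kstat.zfs.misc.arcstats.size: 48672473088
    sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
    kstat.zfs.misc.arcstats.hits: 1843502112
    kstat.zfs.misc.arcstats.misses: 371882991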
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I'm worthless today. I meant to point out that the improvement going from 3) to 4) is a quadrupling of the read cache. L2ARC is kinda fast, but ARC is hella fast. On my VM storage server, I was getting 600MBytes/sec to a single VM doing semi-random disk reads (about a dozen parallel requests for sequential ranges of blocks) when it was all in ARC, but that dropped to maybe 200MBytes/sec from L2ARC.

If your pool is already very busy with reads, this could have a tremendous impact on perceived performance, which is usually where terminal services suffer.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
kspare said:
Here is my arcstat:

[root@kspsan03] /usr/bin# ./arcstat

    4.19TiB / 18.3TiB (ESXI_1)
    2.87GiB / 464GiB (freenas-boot)
    45.33GiB (MRU: 42.19GiB, MFU: 3.14GiB) / 64.00GiB
    Hit ratio -> 83.21% (higher is better)
    Prefetch -> 19.27% (higher is better)
    Hit MFU:MRU -> 67.23%:30.09% (higher ratio is better)
    Hit MRU Ghost -> 0.21% (lower is better)
    Hit MFU Ghost -> 0.94% (lower is better)

That looks hurty to me. Where's @Bidule0hm when he's needed?
 