Best way to set up SSDs for L2ARC

gzartman

Contributor
Joined
Nov 1, 2013
Messages
105
I have 2x 128GB Samsung 840 Pro SSDs that I've set up on my zpool. Currently I have one drive for the log and one drive for the cache.

I've been doing some reading and it seems there is a better use of these SSDs than what I'm currently doing, but I'm not really sure how it needs to be set up. What I've read is that it's best to mirror the log (SLOG) and stripe the read cache (L2ARC).

Is this correct? If so, how would I best set up my 2 SSDs to provide the most benefit for my zpool?

My main zpool is 8 x 2TB raidz2.
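
From what I can piece together, doing both with only two SSDs would mean partitioning each one into a small log slice and a big cache slice, roughly like this (pool name tank and device names ada4/ada5 are just placeholders for whatever zpool status and camcontrol devlist actually show, and I may well have details wrong):

    # remove the existing single-drive log and cache
    # (use whatever names "zpool status tank" lists for them)
    zpool remove tank ada4
    zpool remove tank ada5

    # carve each SSD into a small log slice plus a large cache slice
    # (may need "gpart destroy -F" first if they already have partition tables;
    #  the 16G log size is just an example)
    gpart create -s gpt ada4
    gpart add -t freebsd-zfs -s 16G -l slog0 ada4
    gpart add -t freebsd-zfs -l l2arc0 ada4
    gpart create -s gpt ada5
    gpart add -t freebsd-zfs -s 16G -l slog1 ada5
    gpart add -t freebsd-zfs -l l2arc1 ada5

    # mirrored log, striped cache
    zpool add tank log mirror gpt/slog0 gpt/slog1
    zpool add tank cache gpt/l2arc0 gpt/l2arc1

Is that roughly the idea, or am I off base?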

Thanks,
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
How much RAM do you have? What do you use your pool for?

I will tell you that a log should be 2 drives (mirrored). So your two smart choices are to go with a 2-drive log and no L2ARC, or an L2ARC and no log.

But most users are using L2ARCs and SLOGs improperly, thinking that throwing more hardware at the server makes things better. Hint: it doesn't.
 

gzartman

Contributor
Joined
Nov 1, 2013
Messages
105
cyberjock said:
How much RAM do you have? What do you use your pool for?

24GB of ECC DDR3.

It is mostly general-purpose CIFS or NFS shares. No real database use. I'm probably reading a lot more than writing.

cyberjock said:
I will tell you that a log should be 2 drives (mirrored). So your two smart choices are to go with a 2-drive log and no L2ARC, or an L2ARC and no log.

But most users are using L2ARCs and SLOGs improperly, thinking that throwing more hardware at the server makes things better. Hint: it doesn't.


This is exactly what I was concerned about, as I don't think I understand how ZFS is using these SSDs. Are you suggesting I create a mirrored vdev with the SSDs and use them just for the log? It just seems logical that an SSD for both read and write cache would be of value, but apparently it isn't. Is this because ZFS gets much more benefit from the ARC (in RAM), so you don't need L2ARC?
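
One thing I did pick up from reading (correct me if I have this wrong) is that every block cached in L2ARC also needs a header kept in ARC, so a big L2ARC eats into RAM. Roughing it out: a 128GB cache device full of 8KB blocks is about 16 million blocks, and at a couple of hundred bytes of header each (the exact per-block figure seems to depend on the ZFS version) that's on the order of 3GB of my 24GB gone just to L2ARC bookkeeping.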
 

gzartman

Contributor
Joined
Nov 1, 2013
Messages
105
I did actually read your NOOB guide (which is a great doc), but it was right when I first started using ZFS, so I really didn't know what the hell I was doing. I'll go back and re-read it because it will probably make more sense now.

There was a blog I ran across recently that presented a couple of scripts you run against your zpool to tell you whether you could benefit from SSDs (one was a bash script and the other was Perl). They seemed to suggest I could benefit, but it's probably a garbage-in, garbage-out type of thing.
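
From what I could tell, they mostly boil down to computing the ARC hit ratio from the ZFS kstats. On FreeBSD/FreeNAS something like this should give the same number (I'm going from memory on the sysctl names, so double-check them):

    sysctl -n kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses | \
        awk 'NR==1{h=$1} NR==2{m=$1} END {printf "ARC hit ratio: %.1f%%\n", h*100/(h+m)}'

If that ratio is already in the high 90s, there isn't much left over for an L2ARC to catch.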

cyberjock said:
So what would I do in your situation? Remove both SSDs, as they are doing you no good and could be making things worse. Your L2ARC is probably starving your system of ARC RAM and your ZIL is a recipe for disaster if it fails at a bad time. Mail those to cyberjock in IL and I'll take care of them for you. ;)

I'll yank them, run some Bonnie++ tests, and see what I get. Maybe you'll end up with a couple of SSDs. LOL
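
For the record, my plan is roughly this (tank and ada4/ada5 stand in for my real pool and device names, and the Bonnie++ file is sized at about twice my 24GB of RAM so the ARC can't serve the whole test from memory):

    zpool remove tank ada4        # the cache SSD
    zpool remove tank ada5        # the log SSD
    zpool status tank             # confirm both are gone
    bonnie++ -d /mnt/tank/bench -s 48g -n 0 -u root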

Thanks
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You know what the great thing about benchmarks is? If you don't know what you are doing, you can run tests that give you the answer you want. Seriously. I got into a fighting match with someone a month ago because they did benchmarks and got 2GB+/sec and 10,000k IOPS from a pool of 6 disks. We all know that's impossible. But if you aren't in a position to fully understand, in excruciating detail, how each benchmark program differs from the others, you probably aren't in a position to run the benchmark or even interpret its results.

To be honest, in your case, just pull the drives out of the pool and keep running your system. Money says you'll never know the difference. ;) I've never even tried to run a ZIL or L2ARC. Why? Because I can already do 100MB/sec+ to and from my server over my LAN. Do I really think I'm going to get more than Gb speed on my Gb LAN? Hell no.
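
Do the math: 1Gb/sec is 1000/8 = 125MB/sec of raw bandwidth, and after Ethernet, IP and TCP overhead the practical ceiling is somewhere around 110-118MB/sec. If the pool already moves 100MB/sec+, the wire is the bottleneck, not the disks.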
 

ZFS Noob

Contributor
Joined
Nov 27, 2013
Messages
129
Normally, if you HAD an actual need for an slog, the recommended config is a mirrored set of disks, so that if one slog were to fail you wouldn't trash your pool. Losing an slog at an inopportune time could potentially damage your pool unrecoverably, so the mirror gives you redundancy against that bad situation.
Just looking for clarification here. My understanding was that:
  • In earlier versions of ZFS (< 19) loss of an SLOG = loss of the pool
  • In versions 28 and newer, this is no longer the case.
So my assumption was that mirrors are no longer the recommended way to implement an SLOG. Am I mistaken?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Before v19, you could not import the pool at all if you lost your slog. It was due to a software bug.

With v28+ you can still access your data; a loss of the slog at an inopportune time can result in data loss, but not necessarily an unmountable pool.

The first case (pre-v19) is all-encompassing, significant, and very permanent.

The second case (v28+) is significant and permanent, but not necessarily all-encompassing: if you have snapshots you can get around the corruption by rolling back to the most recent one.
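
For completeness: with v28+, a pool whose slog has died can usually still be imported by telling ZFS to ignore the missing log device, something along the lines of

    zpool import -m tank

with the understanding that any sync writes that only ever existed on the dead slog are gone for good. That's the data-loss window I'm talking about.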
 