Using PCIe SSDs for read/write caches


alevene

Cadet
Joined
Dec 23, 2014
Messages
5
I have a Dell R515 in storage and plan to run FreeNAS on it to see how it competes with the commercial products. I plan to install 3 Samsung XP941 PCIe SSDs for read and write caches.

So here are the questions:
1. Has anyone installed this type of SSD on adapter cards in the 3 riser PCIe slots? If so, are there any physical size or compatibility issues?
2. I plan to experiment with VMware 5.5 and either run FreeNAS under it, assigning the SSDs as read/write caches, or run VMware and assign the spinning drives and the SSDs as RAW drives that can be accessed outside of VMware. Any thoughts on either of these techniques?
3. Same as 2, but using Hyper-V (Type 1).

Any real world experience will be appreciated.
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
For 2, you'll want to read all the hate posts on running this product virtually. If you are planning to compare it against other products, it will do better running on bare metal as it's designed to. It's not that you can't virtualize; it just won't be as good, stable, reliable, etc.

I can't answer the size and dimension part of your first question, but you need to be sure you've read all the posts about SLOGs and L2ARC. If you add an L2ARC without 64 or more GB of RAM, you'll actually reduce system performance.
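For anyone following along, a quick sanity check before committing to an L2ARC, straight from a FreeNAS/FreeBSD shell (the pool and device names below are placeholders, not this system's):

Code:
# How much physical RAM the box actually has
sysctl hw.physmem

# Current ARC size and hit/miss counters; if hits already dwarf misses,
# an L2ARC buys little, and its headers still consume ARC RAM
sysctl kstat.zfs.misc.arcstats.size
sysctl kstat.zfs.misc.arcstats.hits
sysctl kstat.zfs.misc.arcstats.misses

# Only once that looks sane, attach a cache (L2ARC) device to the pool
zpool add tank cache ada6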

Maybe you already know all this, but I didn't see your system specs as the forum rules require. Maybe I'm missing them because I'm on my phone.
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
Neither FreeNAS nor ZFS is a system you can benchmark without deep knowledge. Are you sure that the PCIe SSDs will work with FreeNAS?
 
L

Guest
Somewhere in the 9.3 docs I read about better virtualized performance... I can't remember the exact text.

1) The manufacturer of those PCIe flash cards is critical. There are some good ones and bad ones (really bad ones). These are actually ideal for the ZIL, as its workload is serial and a lower-latency device is better, and you should see much less latency on the PCIe bus than down a SATA channel. The downside is that if you have to move your workload to another box, you have to open the cover and pull the card.

2) FreeNAS gives value as a big managed cache for the VMs. If you have several VMs all getting files from the FreeNAS VM, it may be able to cache similar blocks. If you create VMs using snapshot and clone, they will call for the same blocks/files for the OS, and those items will most likely be in cache. Also, now in 9.3 you can have snapshots synchronized between the VM and ZFS. Very useful.
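If it helps to see it, the snapshot-and-clone workflow described above looks roughly like this from the FreeNAS shell (the dataset names are made up for the example):

Code:
# Snapshot a 'golden' VM image dataset
zfs snapshot tank/vm/template@base

# Clone it for each new VM; clones share the template's blocks on disk,
# so the common OS blocks only need to sit in ARC once
zfs clone tank/vm/template@base tank/vm/web01
zfs clone tank/vm/template@base tank/vm/web02

# Show how little extra space each clone actually consumes
zfs list -r -o name,used,referenced tank/vm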
 

alevene

Cadet
Joined
Dec 23, 2014
Messages
5
Let me clarify. This R515 rack server will be built with 64GB RAM and 6 drives set up in a mirrored stripe. If I run under VMware, other drives will hold the OS images that I use. I've never worked with PCIe SSDs and ZFS. I don't know if there are issues with them being seen by the OS, either as a standalone NAS, through VMware, or with VMware using RAW drives for the FreeNAS part.
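For reference, a six-drive "mirrored stripe" under ZFS is three two-way mirror vdevs in one pool. FreeNAS normally builds this from its GUI, but the underlying layout is roughly this (device names invented for the example):

Code:
# Three mirror vdevs; ZFS stripes writes across them automatically
zpool create tank \
    mirror da0 da1 \
    mirror da2 da3 \
    mirror da4 da5

# Confirm the layout
zpool status tank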

As you cannot boot the system into two OSes, FreeNAS and VMware (or Hyper-V), simultaneously, and as I don't want to use a 12-core system to run FreeNAS alone, I'm working on a solution to accommodate a variety of OSes running under VMware (possibly Hyper-V). That could include FreeNAS either as an image or via RAW drives, or I could just move my VMware images to the R515 and off of a Dell T310 with 6 drives, two of them external in SATA toaster docks.

The T310 could become a dedicated FreeNAS box if I can install the RAM I'll need, plus the caching drives. Then the problem is, does FreeNAS have drivers for the ASM1061 chipset on the eSATA card?

Finally, do I need an HBA for the R515? The current Dell controller supports the backplane, which manages a maximum of eight drives.

In essence, I'd like some pointers from people who have done this and can offer practical advice as to best practices. Thanks a lot.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
The kids are still being gentle. RAW drives under VMware will cost you your data. Please read a few hundred threads for verification.
If you read the stickies, you'll see virtualizing FreeNAS is evil/very bad.
Then you'll discover VT-d passthrough of your HBA is dangerous, but less evil/very bad. And we will let you hang yourself and lose your data.

The PCIe SSDs you've picked have NEVER been mentioned on these forums. So if they are supported by the NVMe driver in BSD at all, you will be running a bleeding-edge solution that no one has tested. I'd love to know if it works, but I'd buy one, not three, were I you. I'd also search high and low for success under FreeBSD before I purchased them.
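If you do buy one to test, it's easy to confirm whether FreeBSD/FreeNAS even detects the card before trusting it with data; something like this from the shell (just a sketch, with no guarantee the XP941 shows up at all):

Code:
# Does the card appear on the PCIe bus?
pciconf -lv | grep -B2 -A2 -i samsung

# Is a disk device attached for it? An AHCI-attached card shows up as an
# adaX device, an NVMe one as nvdX
camcontrol devlist
dmesg | grep -Ei 'ada[0-9]|nvd[0-9]|nvme'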

Read jgreco's posts on best practices. Basically you are wandering into no-man's-land for support when things go south. Most won't touch anything to do with VMs. Sometimes a few of us throw out a bone or two under "fight club" style rules.

I'm really getting in too early. You should be raked over the coals mercilessly for a while first. Good luck. Here be dragons.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You should be raked over the coals mercilessly for a while first.

I thought Cyberjock had given up on that.

Good luck. Here be dragons.

Ooo. Damn, always out of marshmallows when there's a fire handy.

I have a Dell R515 in storage and plan to run FreeNAS on it to see how it competes with the commercial products. I plan to install 3 Samsung XP941 PCIe SSDs for read and write caches.

It'll compete poorly if you don't understand it. ZFS doesn't have any support for an SSD write cache. It does have support for something called a SLOG device, which most definitely isn't a write cache even though it might seem like it is.
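To make the distinction concrete: a SLOG only absorbs the ZFS intent log for synchronous writes; async writes never touch it, which is why it isn't a general write cache. Roughly, on a hypothetical pool and dataset:

Code:
# Attach a separate log (SLOG) device to an existing pool
zpool add tank log ada7

# The SLOG only matters for sync writes; the dataset property controls that
zfs get sync tank/vmstore            # standard | always | disabled
zfs set sync=always tank/vmstore     # force every write through the ZIL/SLOG

# Async writes still aggregate in RAM and flush with the normal
# transaction groups, SLOG or not.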

Now you jerks have me thinking about PCIe SSDs. I'm going to bludgeon someone. ;-)
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
eSATA?? Come on. What's next? Those fancy SATA port multiplier cards...
Which type of data do you want to serve?
 

alevene

Cadet
Joined
Dec 23, 2014
Messages
5
I'm glad that I'm getting some responses, mostly negative about my approach. The reason for posing my questions is to avoid wasting time and money setting up a FreeNAS box the wrong way. So let me change my question: if you have a powerful 8-bay rack server collecting dust and want to run FreeNAS on it, what build would you do? Please give parts and rationale, excluding the nominal 64GB RAM as a starter.

As I wrote, I think that for a small office it's overkill to use the 8-bay box for just FreeNAS, but it appears that virtualizing it with VMware or Hyper-V is not recommended.

Thanks and a happy Christmas to all.
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
First: the purpose of FreeNAS is NOT to recycle old hardware. It is also not a system you can set up and forget about. You will have to learn a lot about ZFS if you expect speed and/or reliability. This is also one of the reasons virtual systems suck... you will find a lot of posts where people lost their data.

If you are willing to spend time learning and working with ZFS, you will get a good workhorse. If you ignore it... it will kill your data. As simple as that.


As long as you do not provide us with your use case, for example serving iSCSI to multiple virtual guests and so on, you cannot expect us to give you recommendations. If you just want to play with it, go on. You will have to learn about SLOG/ZIL, different vdevs, striping them, RAIDZ, L2ARC, sync and async writes, recordsize and so on. FreeNAS is its own little universe. The sysctl part in particular is great.

RAM depends on your use case and on the ZFS features you need, like dedup and L2ARC.
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
I'm glad that I'm getting some responses, mostly negative about my approach. The reason for posing my questions is to avoid wasting time and money setting up a FreeNAS box the wrong way. So let me change my question: if you have a powerful 8-bay rack server collecting dust and want to run FreeNAS on it, what build would you do? Please give parts and rationale, excluding the nominal 64GB RAM as a starter.
zambanini is right about not using inappropriate parts. If you want to build a NAS with whatever you have, I would personally recommend considering a different product. People say that NAS4Free runs better on older hardware, or maybe has more wiggle room for parts, but I don't know this for sure and don't use that product.

FreeNAS is pretty great, but only if you use the right stuff and then set it up the right way. It's not that you can never use older stuff, or stuff you have lying around; you'll just need to be willing to accept that it might lower performance (best-case scenario) or risk data loss (worst-case scenario).

If you're doing this for business (I can't remember), that can be a resume-generating event, so I wouldn't want to risk it.

My system is an example of what I mentioned. It's an old HP server. It works great, but I'm using old SAS drives and enclosures that are way underperforming for some reason, so I get decent IOPS for my VMware servers, but the overall throughput of the disk system/pool is terrible. Someday I'll try to figure out why, but it works very well for my VMs' workload. I don't have it configured perfectly, but I can tolerate a data loss and restore from backup.

There's probably nothing wrong with the motherboard/processor of an R-series Dell server (I don't have a lot of experience with them). For FreeNAS, you're going to want Intel NICs if you can; everyone likes them, and they work best. If you have to deal with Broadcom NICs like my HP server has, they work; people just say there are issues sometimes. Mine worked fine. I also bought additional Intel adapters for iSCSI.

Your 64GB of ECC RAM is a good start on memory. It even lets you do an L2ARC, since everyone says that is the bare minimum, but be cautious about it. It still might not be enough, and could lower the read cache's performance. jgreco has some good posts about L2ARC and SLOG. As mentioned, you'll need to start reading about what a SLOG is and isn't. It's not a write cache; it helps with sync writes.

Sorry I can't be more helpful, but if you get past the occasional hateful comments you get for not knowing what you're talking about and keep learning, you'll find this a very capable product, as long as it's truly what suits your needs.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Sorry, I tuned out the second you started saying stuff like "don't tell me I need 64GB of RAM". We're not here to sugarcoat a problem and provide a non-working solution. If you need that much RAM, then you need that much RAM.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Virtualizing is often completely possible. The trouble is, when something goes awry, it is nearly impossible for those of us here to have sufficient clairvoyance into what you've done to understand what went wrong and how to fix it. We've just seen so many failures. That said, most of my FreeNAS boxes are virtualized. It adds massive complexity to the picture but I understand what is going on and how to recover if there's a problem. Basically it boils down to "if you need to ask, you shouldn't do it." But even for those who don't feel the need to ask, there's a collection of tips and clues about how to have the best chance at success.

One part of the problem: Here in the forums, we're not big fans of the AMD stuff, and virtualizing on top of that would be difficult anyways, because the preferred method is to use VT-d and nobody here that I can recall has discussed AMD's version (AMD-Vi) for virtualizing FreeNAS.

Another part of the problem is you haven't given much discussion to what you're trying to accomplish. On one end of the spectrum we have easy tasks like archival ISO storage, while at the other end we have tough tasks such as database or VM disk storage. Your last message says "small office," and for that, if we were to assume you mean it's for document and normal file storage, you may not need such a large system. The office fileserver here lives on an 8GB virtual machine with four 6TB drives in RAIDZ2. It gets a bit slow if there's a lot going on but normally it's just fine. I can throw more RAM or cores at it if I need, but I haven't felt compelled to yet.
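For scale, the fileserver layout described there, four drives in RAIDZ2, is about as simple as ZFS pools get (device names invented for the example):

Code:
# Four-disk RAIDZ2: any two drives can fail; usable space of roughly two drives
zpool create office raidz2 da0 da1 da2 da3
zpool status office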

For general small office use, I'd guess that the R515 is kind of a waste, and 64GB would be a waste. You seem to agree.

There seem to be a few paths forward:

1) Dedicate the machine to FreeNAS and just go the easy route

2) Build a different server for FreeNAS which is also relatively easy

3) Try some virtualization strategy. If you needed just to store a terabyte or less of fileserver data to serve out via CIFS, as an example, you can actually fully virtualize the FreeNAS. See this link. https://forums.freenas.org/index.ph...ose-seeking-virtualization.26095/#post-164810
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Oh, and, yes, I blame you for causing me to finally go write a third virtualization sticky. Hang your head in shame.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Yeah, and you fail yourself jgreco. You said "if you have to ask you shouldn't do it", but then you explain how to do it. So apparently they *can* ask and you *will* tell because you just did.

That's the conundrum though. Isn't it? If you know enough to do it properly, you should have zero questions and a quaint little forum is not going to tell you anything you don't already know. So why even say it? All you're doing is giving info to people that *do* have to ask, but then telling them they shouldn't do it. /shamed
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yeah, and you fail yourself jgreco. You said "if you have to ask you shouldn't do it", but then you explain how to do it. So apparently they *can* ask and you *will* tell because you just did.

Actually for the case I described, almost any idiot can do it. I think most of us here in the forums just view it as an edge case that's not very useful, because the amount of storage being managed is trite, and/or the cost to implement it is way high compared to what cheapskates like to pay. Our usual problem is people trying to come into the forums to do some spectacular catastrofail involving raw device mapping hacks and 4GB RAM on a 16GB whitebox made out of 2008 era parts. That's what's caused us to focus on "virtualization == bad". It doesn't actually mean "every possible virtualization is always bad."

Please keep your eye on the ball, Cyberjock. We're trying to help users. I've been remiss in my self-assigned task of documenting this sort of thing, because what I just described has ALWAYS been possible and has ALWAYS been safe within the terms I set out in that thread - and we've more or less been telling users "virtualization == bad" without qualifying that to mean what we REALLY mean... which is approximately "most cheapskate strategies to virtualize FreeNAS by compromising integrity of subsystems and doing foolish things == bad".

That's the conundrum though. Isn't it? If you know enough to do it properly, you should have zero questions and a quaint little forum is not going to tell you anything you don't already know. So why even say it? All you're doing is giving info to people that *do* have to ask, but then telling them they shouldn't do it. /shamed

Well, there's a third case. Sometimes you get some n00bsauce80 guy who's obviously got a bunch of general clue and a willingness to do the legwork, and what he really just needs is a primer on the ins and outs of it -- a push in the right direction. I like to give pushes in the right direction. If it happens to be a push off a cliff, I like to be kind enough to mention that there's a cliff. Up to you to have a way safely down.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Yeah.. got my eye on the ball.. still don't think you are helping anyone with that kind of post. :P

But that's just my opinion.
 