ZIL or L2ARC


warllo

Contributor
Joined
Nov 22, 2012
Messages
117
Hi all,

I am looking to set up a FreeNAS box to use as shared storage for VMware. I will be utilizing NFS to access the FreeNAS device.

I know I will need to use a ZIL or L2ARC for faster sync writes. Which would work best for my needs?

I was thinking about picking up an Intel 100GB DC S3700 SSD as a ZIL device, if this would be the best solution. Please advise!
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
We need information on your actual system specs to be able to provide any kind of informed answer.

Simply slapping in a SLOG or L2ARC is not always beneficial and can even lower performance, depending on the situation. ;)
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I am looking to set up a FreeNAS box to use as shared storage for VMware.
Can you give more details on your planned implementation? The details make a big difference.
I know I will need to use a ZIL or L2ARC for faster sync writes. Which would work best for my needs?
There are some 'variations' in terminology; I was just talking about this with someone earlier today, so don't be surprised if someone tries to correct you on your wording here. You definitely want a separate log device (SLOG) so you are not penalized by writing the same data to the pool twice.
https://forums.freenas.org/index.php?threads/terminology-and-abbreviations-primer.28174/
I was thinking about picking up an Intel 100GB DC S3700 SSD as a ZIL device, if this would be the best solution.
It may not be the "best," but it is probably the best bang for the buck. The Intel® Optane™ technology is the fastest, but the price...
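
For anyone following along, attaching a log device to an existing pool is a one-line operation. A minimal sketch, assuming a hypothetical pool named tank and the S3700 showing up as da6:

```
# Add the SSD as a dedicated log (SLOG) vdev; the ZIL moves onto the
# SSD, so sync writes no longer hit the main pool disks twice.
zpool add tank log da6

# Verify the device shows up under "logs"
zpool status tank
```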
 

warllo

Contributor
Joined
Nov 22, 2012
Messages
117
Thanks for the advice thus far. I'll elaborate a bit more.

I have an aging HP DL380 G7 that is running as an ESXi host and is using direct-attached disks for its storage.

I have outgrown its internal storage capacity; however, its processing capabilities and memory spec adequately meet my needs, so rather than replacing the whole server I was hoping to add a FreeNAS box to provide additional storage for VMs.

In the short-term future I will be adding another ESXi host and running vSphere for High Availability and failover, which requires shared storage. I was hoping this could be FreeNAS.

The virtualized guests are hosting some high-traffic databases with lots of reads and writes. The applications utilizing the databases require around 4,000 IOPS or better. The I/O mix is about 75% write and 25% read.

Not sure what other info would help; if I missed anything, please let me know. Thanks again for the help.

Also note, the hardware for the FreeNAS box has not been purchased yet. I would like to keep the cost as low as possible, of course.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
In the short-term future I will be adding another ESXi host and running vSphere for High Availability and failover, which requires shared storage. I was hoping this could be FreeNAS.
There are several forum members doing just that, but there are some special considerations required when configuring ZFS to get good performance for this use case.
require around 4,000 IOPS or better. The I/O mix is about 75% write and 25% read.
From that, I will guess this is a business? Is there a requirement to buy new hardware, or can re-marketed hardware be used?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I would like to keep the cost as low as possible of course.
Not to be difficult, but it would be helpful to know what part of the world you are in. It does me no good to suggest you buy something, only to find out you are in another country and can't order from that vendor because of shipping costs.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
You will definitely want a low-latency SSD for the SLOG (latency is FAR more important than fast sequential write speed). As for the L2ARC: how big is your database? I'm guessing that with your 25% read / 75% write ratio, L2ARC may not be worth it. Do you know if the same data is read often, or if it's more random old and new data? Don't forget to look into using pvscsi adapters in the VMs. We could also talk about how your database is set up; if it uses all fixed record sizes, you can do some extra tuning in ZFS and set up separate datastores just for the DB.
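
A minimal sketch of that kind of tuning, assuming a hypothetical pool named tank and a 16K database page size (match recordsize to your DB's actual page size):

```
# Dedicated dataset for the DB, with recordsize matched to the DB page
# size so ZFS is not doing read-modify-write on oversized blocks
zfs create -o recordsize=16K tank/db

# Optional: force sync semantics on the dataset. NFS from ESXi already
# requests sync writes, and the SLOG absorbs them either way.
zfs set sync=always tank/db
```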

Side note, to directly answer your question: you will NEED a SLOG. SLOG is for writes; L2ARC is for reads.

EDIT: Don't even bother trying this without 10GbE...
....I sure wish FreeNAS did not strip out fibre channel support...
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Anyone ever do testing on the Intel 750 series?

I bought this a couple years ago for a gaming drive and wonder how well it would do as a SLOG drive?
As I recall, there are some problems with it.
First, it doesn't have as much endurance.
Second, the latency isn't as good.
Third, it doesn't have power loss protection.
That last one is kind of critical, because it is part of the reason for a SLOG to begin with. You would only ever read from the ZIL / SLOG if you were recovering from a crash where the system went down before the transaction group was written to the pool.
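
If you want to confirm a log device is actually being used, per-vdev I/O statistics will show it (tank is a hypothetical pool name):

```
# Per-vdev I/O, refreshed every 5 seconds; the SLOG shows up under
# "logs" and should take writes whenever sync traffic is flowing
zpool iostat -v tank 5
```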
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
4,000 IOPS or better
I did a little 'back of the napkin' math, and it will take a lot of hard drives to get that many IOPS. Estimating 150 IOPS per drive, which is generous for some drives while others do better, 4,000 IOPS still works out to a sizable quantity of hard drives. Were you thinking of using SSDs? That would make the IOPS easy but the capacity harder, which brings me back to the question: how much storage are you looking for?
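
For anyone checking the napkin, a sketch of the arithmetic using planning figures rather than measurements:

```
#!/bin/sh
# ~150 IOPS per 7,200 RPM drive is a common planning figure
echo $((4000 / 150))  # -> 26, call it ~27 drives of raw IOPS

# Mirrors commit every write to both disks, so a mirrored pair delivers
# roughly one drive's worth of write IOPS. 75% of 4,000 is 3,000 write
# IOPS, or about 20 mirrored pairs before read load and headroom.
echo $((3000 / 150))  # -> 20 pairs, i.e. ~40 spinning disks
```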
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
A couple of loaded 24-bay SAS shelves would do it, with a well-built server and a SLOG for your ZIL. Maybe not cheap, but enterprise storage never is.

I don't see any info about capacity needs or your current storage configuration. Is that HP only an 8-bay? You must have SSDs in there.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
A couple of loaded 24-bay SAS shelves would do it, with a well-built server and a SLOG for your ZIL. Maybe not cheap, but enterprise storage never is.
That is the number of drives I came up with, but I was thinking of using a single 48-bay chassis.
 

warllo

Contributor
Joined
Nov 22, 2012
Messages
117
There are several forum members doing just that, but there are some special considerations required when configuring ZFS to get good performance for this use case.

From that, I will guess this is a business? Is there a requirement to buy new hardware, or can re-marketed hardware be used?

I would prefer to avoid re-marketed hardware.

I only need around 12TB of storage.

We are based out of Minnesota.

Solid state drives would potentially be fine, but I have concerns about durability.

We are already using a 10GbE backbone for our network.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I only need around 12TB of storage.
I don't want to go out on a limb and start making suggestions without getting enough information.
Is this a business?
Are you going to build it from parts or buy something ready-made?
I feel like 12TB is a lot for SSD storage, unless you have deep pockets, because that many drives will not be cheap.
That said, I don't imagine that all of your storage needs to be on SSD. Can you give some breakdown of the storage into logical units, where some can be slower and some can be faster? This is intended to help you buy the right fit instead of just using a sledgehammer and going full SSD.
 

warllo

Contributor
Joined
Nov 22, 2012
Messages
117
I don't want to go out on a limb and start making suggestions without getting enough information.
Is this a business?
Are you going to build it from parts or buy something ready-made?
I feel like 12TB is a lot for SSD storage, unless you have deep pockets, because that many drives will not be cheap.
That said, I don't imagine that all of your storage needs to be on SSD. Can you give some breakdown of the storage into logical units, where some can be slower and some can be faster? This is intended to help you buy the right fit instead of just using a sledgehammer and going full SSD.

It will be used for business. We are a startup, and cost is a concern; hence the FreeNAS route. We don't expect this to be extremely cheap, but if you have ever received a quote for a SAN, you know they are outrageous.

The database that must meet the IOPS requirement is around 1.5TB in size. I don't expect very much growth; the data will change but shouldn't grow substantially.

We would prefer to build it ourselves, as it could become an integrated part of our solution for our customers.

Thanks again.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Yeah, that makes a full SSD pool for database storage reasonable. You can get 6 of these for $2,700, providing a bit over 2.5TB. I'm sure someone who knows SSDs better can make a better recommendation, but just to give you an idea.

You should have no issues doing this for under $10k, and that should beat out any big-name vendor solution.
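
For illustration, six SSDs arranged as striped mirrored pairs would be created like this (the pool name ssd and device names da0 through da5 are hypothetical):

```
# Three 2-way mirrors striped together: usable capacity of 3 drives,
# read IOPS of all 6, write IOPS of roughly 3
zpool create ssd \
    mirror da0 da1 \
    mirror da2 da3 \
    mirror da4 da5
```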
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
....I sure wish FreeNAS did not strip out fibre channel support...
Perhaps you're looking for more masking abilities, but you *can* use FC with FreeNAS; you just have to be willing to deal with the limited features. https://forums.freenas.org/index.php?threads/freenas-9-3-fc-fibre-channel-target-mode-das-san.27653/

If it's helpful for the OP, I'm using FreeNAS as a shared storage solution for a 2-node (VMware) cluster with a handful of VMs that are underutilized, and it's Fibre Channel connected via a dual-fabric 4Gb setup.

If it's helpful, here are some numbers I just ran for fun that might show you can easily reach your performance objective with the right equipment and config:

[Screenshots: VMware IO Analyzer benchmark results for the 3G, 6G, SATA1, and SATA2 pools described below]


This data was obtained using VMware's IO Analyzer appliance, running on one of the hosts.

3G is a 24-disk 146GB SAS enclosure, set up in mirrored pairs (12 of them)
6G is a 12-disk 300GB SAS enclosure, set up in mirrored pairs (6 of them)
SATA1 is on a 12-disk 2TB SATA enclosure, set up in mirrored pairs (6 of them), using a zvol extent
SATA2 is on a 12-disk 2TB SATA enclosure, set up in mirrored pairs (6 of them), using a file-based extent

I use two enterprise SSD devices for a striped SLOG and two standard SSDs for cache devices.
They were set up as shared devices (meaning each pool used a partition on them). Don't do this; it was just for testing. It worked perfectly, with no noticeable side effects of extra latency from the sharing, but since you mentioned being a business, use dedicated log/cache devices for each pool.
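
Dedicated devices per pool would look something like this (the pool and device names are hypothetical):

```
# Two devices after "log" are striped for extra SLOG throughput;
# insert the "mirror" keyword if you want the SLOG mirrored instead
zpool add tank log da6 da7

# Cache (L2ARC) devices are always striped and serve only this pool
zpool add tank cache da8 da9
```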

*Your mileage/performance will vary; hopefully you can at least see some differences between the disk types, and that caching makes SATA an option under certain circumstances. Nimble Storage is a very good vendor doing exactly this.

EDIT: In case it matters for your own testing, I ran 60-minute tests, and my system needed about 15 minutes of running each test before the cache was fully utilized; that left 45 minutes of stable performance data. My normal workload was running in the background, and I ran each test separately rather than running a bunch of them at once. They were the tests VMware included in the appliance GUI.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Yeah, on the fibre channel... TrueNAS Doc - iSCSI talks about setting up fibre channel in the GUI and it being supported. The support is there; it's just not "enabled" for FreeNAS. It irritates me, because 80%+ of the people who would use it would never buy a TrueNAS system anyway. I would think they would do better selling support to people using the TrueNAS software than limiting themselves to the hardware they sell.

/end rant
 