Using a VMware virtual disk as L2ARC read cache?

Status
Not open for further replies.

ujjain

Dabbler
Joined
Apr 15, 2017
Messages
45
I have the following server:
  • T330
  • Intel Xeon 1240 v6
  • 32GB (soon 64GB) ECC
  • 1x4TB via H330 controller (soon 7 more slots, so I might go for a nice 6x4TB RAID 6)
I would like to know if it would be a bad idea to create a virtual disk on the Samsung 960 Pro NVMe and use that as a read cache (L2ARC) in FreeNAS. Currently my biggest problem is a lack of suitable PCIe slots: the ones I have are either too small or too big. I got 2x WD Black PCIe SSD 256GB on Amazon Prime Day, but I will have to send them back.

The only things I need are extra memory and a read-caching SSD that is very fast in IOPS and read speed, right? So I could just create a VMDK on the 960 Pro?
 

Artion

Patron
Joined
Feb 12, 2016
Messages
331
would be a bad idea to create a virtual disk on the Samsung 960 Pro NVMe and use that as a read cache (L2ARC) in FreeNAS

Are you using a virtualised FN installation? Otherwise you can't create a VMDK on a bare-metal FN installation. If that is the case, you can select the SSD as your L2ARC device from the Volumes tab in the FN GUI.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I've been doing exactly this with an ESXi install until recently. Only difference is my datastore was a Samsung 960 Evo ;)

Just create a disk (thin-provisioned if you want, independent if you don't want it included in backups) and then add it as an L2ARC device as described in the manual.
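
For reference, once the new vdisk is visible inside the FreeNAS VM (a VMware virtual disk normally shows up as a da device), attaching it as cache from the shell is a one-liner. This is only a sketch: "tank" and "da2" are placeholder names, and the GUI volume manager does the same job.

Code:
camcontrol devlist           # find the new virtual disk first
zpool add tank cache da2     # "tank" = your pool, "da2" = the vdisk
zpool status tank            # the disk should now be listed under "cache"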

Although using virtual disks is generally a no-no with ZFS, an L2ARC is expendable (losing it only costs you cache), so it shouldn't matter here.

I passed through my SATA controller for the HDDs and a PCIe NVMe drive (with power-loss protection) for the SLOG.

I also added another vdisk for swap and removed the swap from the HDDs.

Recently, I've set up the swap, SLOG and L2ARC all on the PCIe NVMe drive, as it's an Intel P3700 and doesn't seem to have an issue with doing double duty.
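
In case it helps anyone following along, carving up a single passed-through NVMe device like that could look roughly like the sketch below. Device name, pool name, labels and partition sizes are all assumptions, not a recommendation.

Code:
# assumes the passed-through NVMe appears as nvd0 and the pool is called "tank"
gpart create -s gpt nvd0
gpart add -t freebsd-swap -s 16G -l pswap nvd0    # swap partition
gpart add -t freebsd-zfs  -s 16G -l pslog nvd0    # small SLOG partition
gpart add -t freebsd-zfs         -l pl2arc nvd0   # remainder as L2ARC
zpool add tank log   gpt/pslog
zpool add tank cache gpt/pl2arc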
 

ujjain

Dabbler
Joined
Apr 15, 2017
Messages
45
Are you using a virtualised FN installation? Otherwise you can't create a VMDK on a bare-metal FN installation. If that is the case, you can select the SSD as your L2ARC device from the Volumes tab in the FN GUI.
I've installed FreeNAS on a 20GB VMDK, yes, and I'm using passthrough to give FreeNAS access to the hard disks. I created another two VMDKs to play around with L2ARC and ZIL. I'm wondering whether the L2ARC as a VMDK can be a permanent solution.

I am not sure about the ZIL/SLOG yet, as I don't have a lot of PCIe slots to add SSDs in RAID. I was actually thinking about replacing the current M.2 NVMe PCIe adapter with one that supports two SSDs and using them for both L2ARC and ZIL, but I'm not sure yet, as I see everybody using dedicated SLOG arrays instead.
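
If two M.2 drives on one adapter did end up passed through, splitting the roles between them would be straightforward; the device and pool names below are only placeholders, and whether a mirrored SLOG is worth it depends on how badly you want the pool to survive a log-device failure at the wrong moment.

Code:
# assumes the two NVMe drives appear as nvd0 and nvd1 and the pool is "tank"
zpool add tank log   nvd0    # dedicated SLOG
zpool add tank cache nvd1    # dedicated L2ARC
# a mirrored SLOG instead: zpool add tank log mirror nvd0 nvd1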
 

averyfreeman

Contributor
Joined
Feb 8, 2015
Messages
164
Hi,

I am just wondering: what kind of results did you get virtualizing your SLOG? I am thinking of trying this with a 1TB 4x NVMe drive I'm using as a datastore... thanks!
 

jp83

Dabbler
Joined
Mar 31, 2017
Messages
23
I have a Fusion-io drive in my ESXi all-in-one. I had experimented with a VMDK for SLOG a while back, but it didn't seem to help, so I took it out. I was reviewing this again lately and tried dd to it from within FreeNAS (the virtual disk is still there) and I think I only got about ~100 MB/s write, which didn't seem right. Any other notes on getting full speed through virtualization?
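
For anyone wanting to reproduce that kind of number, a quick-and-dirty raw write test from the FreeNAS shell could look like this; the device name is a placeholder, and writing to the raw device destroys whatever is on it, so only point it at a scratch vdisk.

Code:
# DESTRUCTIVE: overwrites the target device. da3 is a placeholder for the test vdisk.
dd if=/dev/zero of=/dev/da3 bs=1m count=8192    # writes 8 GiB; dd prints bytes/sec when done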
 

averyfreeman

Contributor
Joined
Feb 8, 2015
Messages
164
It's interesting as a point of experimentation, but to be honest I haven't tried it, because my FreeNAS VM is really just a file storage server and for that it works fine.

I was looking at using FreeNAS as an NFS datastore for ESXi, but because of the speed limitations of NFS I started using KVM on another server instead: there I can use ZFS on Linux and manage the datastores at the OS level, rather than creating a storage VM and connecting ESXi to it over NFS (the speeds were abysmal).

Edit: Also, ujjain makes a good point: SLOG usage is pretty heavy, and if you don't have a RAID setup for it, you might risk burning out your ESXi datastore (!)
 