SOLVED Hardware Recommendations for FreeNAS and ESXi

Status
Not open for further replies.

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
In which case you may not need an SSD SLOG at all? You could try repurposing it as a dedicated datastore for ESXi and put the vmdks on that thing?
My testing of ESXi datastores with/without a SLOG device showed that performance is abysmal without one - unless you turn off synchronous writes altogether, which is bad juju of a different sort!

@brando56894, I hope you'll reconfigure your system - with a dedicated SLOG and the ESXi datastore on separate devices - and let us know the results. "Inquiring minds want to know!"
 

brando56894

Wizard
Joined
Feb 15, 2014
Messages
1,537
It's actually really easy to do in ESXi; remember, this is all virtualized. It took me a day or two to figure out since I'm brand new to ESXi, but there is no networking required. All I did was make my S3700 a datastore, then I created five 16 GB sparse vHDDs, one for each of my VMs, then made two 8 GB flat vHDDs to use as the SLOGs for each dataset and attached those two empty images to my FreeNAS VM. Once it was booted up it showed two empty drives, which I then added as a SLOG to each one of my pools. I also have two L2ARCs on a 128 GB Samsung 850 Evo set up the same way. They aren't really used that often yet, since I have yet to have the VM up for more than 72 hours straight, and most of the metadata is in the 46 GB of RAM I have allotted to the VM.

So far I haven't noticed any performance issues since I moved FreeNAS to the Raptor, and I have everything set up and working as it should.

Here's the usage of both SLOGs; I'm not sure which one is attached to which pool, though. I'm probably going to keep them unless they adversely affect performance, because it looks like having them is beneficial, since they are being used.
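Not part of the original post, but the sparse-vs-flat distinction above can be sketched with plain files: a "sparse" image claims its full size while allocating almost no blocks, whereas a "flat" one is allocated up front (the filenames and 16 MiB size here are made up for illustration):

```python
import os

SIZE = 16 * 1024 * 1024  # 16 MiB stand-in for a 16 GB vHDD

# Sparse: extend the file without writing data, so no blocks are allocated.
with open("sparse.img", "wb") as f:
    f.truncate(SIZE)

# Flat: write every byte, so all blocks are allocated up front.
with open("flat.img", "wb") as f:
    f.write(b"\0" * SIZE)

for name in ("sparse.img", "flat.img"):
    st = os.stat(name)
    print(f"{name}: size={st.st_size}, allocated={st.st_blocks * 512}")
```

On most filesystems the sparse image reports the full 16 MiB size but near-zero allocated blocks, which is why a sparse vHDD doesn't consume datastore space until it's actually written to.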


 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
It's actually really easy to do in ESXi; remember, this is all virtualized. It took me a day or two to figure out since I'm brand new to ESXi, but there is no networking required. All I did was make my S3700 a datastore, then I created five 16 GB sparse vHDDs, one for each of my VMs, then made two 8 GB flat vHDDs to use as the SLOGs for each dataset and attached those two empty images to my FreeNAS VM. Once it was booted up it showed two empty drives, which I then added as a SLOG to each one of my pools. I also have two L2ARCs on a 128 GB Samsung 850 Evo set up the same way. They aren't really used that often yet, since I have yet to have the VM up for more than 72 hours straight, and most of the metadata is in the 46 GB of RAM I have allotted to the VM.

So far I haven't noticed any performance issues since I moved FreeNAS to the Raptor, and I have everything set up and working as it should.

Here's the usage of both SLOGs; I'm not sure which one is attached to which pool, though. I'm probably going to keep them unless they adversely affect performance, because it looks like having them is beneficial, since they are being used.


Again, the SLOG should be a dedicated device, not a virtual disk on an ESXi datastore. Sounds like you're running FreeNAS as a VM and giving it VMDK images as 'disks', instead of direct access to the actual disks. This is not a good configuration for any kind of real-world usage.

The only time it makes sense to install FreeNAS as a VM with virtual disks, as you have done, is if you're just interested in 'kicking the tires' and trying it out. I have FreeNAS installed in a VirtualBox VM for just this purpose. But again, you don't want to use this kind of setup for production data.

Read @joeschmuck's "My Dream System (I think)" thread for his experiences in virtualizing FreeNAS. That thread also contains a link to @Benjamin Bryan's excellent tutorial on the same subject.

The upshot is that you need to give the FreeNAS VM direct access to its hard drives. And the FreeNAS VM itself must be on a local datastore, not on the disks assigned to it.

My main FreeNAS system is virtualized (see "Show: my systems" below). It boots VMware ESXi from a USB drive, with two small SSDs attached to the motherboard's SATA ports as the local datastore for the FreeNAS VM using the mirrored installation feature. I pass the motherboard's LSI 2308 HBA through to the FreeNAS VM using VT-d. FreeNAS is configured with a RAIDZ2 pool of 7 HDDs with an Intel S3700 SLOG device, all 8 of these connected to the LSI 2308.
 

brando56894

Wizard
Joined
Feb 15, 2014
Messages
1,537
Sounds like you're running FreeNAS as a VM and giving it VMDK images as 'disks', instead of direct access to the actual disks. This is not a good configuration for any kind of real-world usage.

In case you forgot what this thread was originally about, let me remind you ;) The only things that are virtualized are the FreeNAS boot drive, the L2ARCs, and the SLOGs; FreeNAS has direct access to both of my pools via VT-d. It seems there isn't any way to pass through individual SATA ports, so I went with this option. I'd like to get a 4-port HBA so that I could connect 16 drives (I currently only have 13, 8 of which are connected to my HBA) and pass them all through, but I'm currently unemployed, and FreeNAS 10 will hopefully be out in 6 months, so I won't need to virtualize FreeNAS 9.10 any more. I do have an open SATA port on the motherboard that I'd like to connect another SSD to, and move the vHDDs from the S3700 onto that, but that still leaves the "problem" of a virtualized SLOG, since I have no way of giving FreeNAS direct access to it.

There are no longer any issues in FreeNAS or in my VMs, and performance has been fine in all of them. None of the VMs are under heavy load and nothing is starved for resources, so I don't really see how my setup is "bad". It may not be the ideal setup, but if it has no adverse effects on anything, how is it bad?

Simply saying that the vHDDs and the SLOGs shouldn't be on the same device isn't enough to convince me, since its current performance shows otherwise. Is there a fool-proof test that will give hard results? If it really is a big issue, I can simply remove them and put the logs back in the pool... which FreeNAS has direct access to.

The only time it makes sense to install FreeNAS as a VM with virtual disks, as you have done, is if you're just interested in 'kicking the tires' and trying it out. I have FreeNAS installed in a VirtualBox VM for just this purpose. But again, you don't want to use this kind of setup for production data.

From the looks of your signature you have two FreeNAS systems virtualized with ESXi, not VirtualBox: one clearly marked as a test system, and another that looks like it's for production (considering it's not marked 'test'), so I'm slightly confused by your statement.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Simply saying that the vHDDs and the SLOGs shouldn't be on the same device isn't enough to convince me, since its current performance shows otherwise. Is there a fool-proof test that will give hard results? If it really is a big issue, I can simply remove them and put the logs back in the pool... which FreeNAS has direct access to.
"Device contention" is a good two-word explanation of why you shouldn't do so, but you're probably not stressing the server enough for this to become an issue. I won't argue it: "You can lead a horse to water, but you can't make him drink." :)
From the looks of your signature you have two FreeNAS systems virtualized: one clearly marked as a test system, and another that looks like it's for production (considering it's not marked 'test'), so I'm slightly confused by your statement.
Sorry about any confusion. Yes, I have two FreeNAS-on-ESXi Supermicro servers, one each for production and testing. I don't list the VirtualBox FreeNAS VM in my signature because it's not a server; it's just a VM on my desktop system that I only run when I want to play with the nightly or beta releases, test instances I've built from source, or whatever.
 

brando56894

Wizard
Joined
Feb 15, 2014
Messages
1,537
I'm one of those people who wants hard proof; simply saying "it's bad" isn't good enough. If I experience catastrophic data loss because I didn't believe you, it's my ass hahaha. I'm more than willing to change my setup, but I need proof that it's a poor setup, considering this is about the third time I've reconfigured everything in the past week (easily 50 hours spent) and everything is working now.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
simply saying "it's bad" isn't good enough
From a data safety perspective, if you are running any SLOG device without its own built-in power protection (whether it's a physical SSD or, in this case, a virtual disk), you might as well just disable sync writes and cross your fingers. A SLOG spends all its time writing data. The only time it's read is during recovery after a crash, power failure, etc. If you aren't going to guarantee the write, why bother using it in the first place?
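The sync-write guarantee described above comes down to fsync() semantics: a synchronous write must not return until the data is on stable storage, which is exactly the promise a power-protected SLOG exists to keep cheap. A minimal sketch (the filename and payload are illustrative):

```python
import os

def sync_write(path: str, payload: bytes) -> None:
    """Return only after `payload` is on stable storage, like a ZFS sync write."""
    with open(path, "wb") as f:
        f.write(payload)      # lands in the OS page cache only
        f.flush()             # push Python's userspace buffer to the kernel
        os.fsync(f.fileno())  # block until the device reports the data durable

sync_write("intent-log.bin", b"last few seconds of sync writes")
```

If the device lies about durability (no power protection, volatile write cache), fsync() still returns but the data can vanish on power loss, which is why an unprotected SLOG buys you little over disabling sync writes outright.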

From a performance perspective, the SLOG is write-intensive and the L2ARC is read-intensive. By placing them on the same device, you could potentially starve one to serve the other. You won't lose data, but you could suffer a performance penalty. Of course, the performance impact in your environment might be negligible; only you and your testing can tell if that's the case.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
There are a fair number of people who don't understand good practices for ESXi, or virtualization in general. If you have the proper discipline and hardware resources, then FreeNAS running in a VM is fine. That said, we don't recommend FreeNAS in a VM for most people except for testing it out. The forums don't support virtualization, meaning that if you have a problem you may be on your own, and bug reports may never get answered.

The good thing about having my drives in pass-through is that I can still connect them to any machine, install FreeNAS on a USB flash drive, restore my configuration file, and all my data is there as if I had been on a bare-metal machine the entire time.

My off-topic thread about my dream system is a bit long to read, because we wander off topic periodically, but it does contain a lot of good information. There was a lot of help from people, and it will likely help others out. I am now hosting iSCSI on my FreeNAS VM for ESXi VMs; needless to say, FreeNAS must be fully up and running before any VM stored there can be started. So far it's just a test, and it all appears to be working fine. I'm running a few Windows 7 VMs, Ubuntu, and the FreeNAS 10 beta. Using the VMXNET3 NIC makes very fast work of things.

Well time to get back to rebuilding my headlight assemblies. Almost ready to mount them in the truck and align them but I have the new wiring harness to install first.
 

brando56894

Wizard
Joined
Feb 15, 2014
Messages
1,537
From a data safety perspective, if you are running any SLOG device without its own built-in power protection (whether it's a physical SSD or, in this case, a virtual disk), you might as well just disable sync writes and cross your fingers. A SLOG spends all its time writing data. The only time it's read is during recovery after a crash, power failure, etc. If you aren't going to guarantee the write, why bother using it in the first place?

IIRC the S3700 has power-loss protection and was the recommended option for a SLOG around here, so from that perspective I'm good. I also figured putting the VMs on there, instead of on the other SSD I have in there, was a good choice for the same reason.

From a performance perspective, the SLOG is write-intensive and the L2ARC is read-intensive. By placing them on the same device, you could potentially starve one to serve the other. You won't lose data, but you could suffer a performance penalty. Of course, the performance impact in your environment might be negligible; only you and your testing can tell if that's the case.


My SLOG and L2ARC are on two different physical drives/datastores, but I understand what you are saying about resource starvation, which is why I've been monitoring everything. Is there a more accurate way to monitor SLOG usage than the chart in the GUI?
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
IIRC the S3700 has power-loss protection and was the recommended option for a SLOG around here, so from that perspective I'm good. I also figured putting the VMs on there, instead of on the other SSD I have in there, was a good choice for the same reason.
The S3700 does have built-in power protection, but I thought you were running the SLOG as a vmdk on that device, not accessing it directly via pass-through.

My SLOG and L2ARC are on two different physical drives/datastores, but I understand what you are saying about resource starvation,
Ah, yes, my bad; I confused the VMs running from the same device with the L2ARC. The contention/performance concern is similar, though. You can use "zilstat" to monitor the ZIL and SLOG. The SLOG gets written over and over, since it only needs to keep the last 5-10 seconds of writes. Low latency is key for a SLOG: the more latency that gets added, the closer it gets to bare pool performance and the less useful the SLOG becomes.
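Since the SLOG only ever holds the last few seconds of sync writes, its required capacity follows from ingest rate; a back-of-the-envelope sketch (the 10-second window is the usual rule of thumb, not an exact ZFS constant):

```python
def slog_bytes_needed(ingest_bytes_per_s: float, window_s: float = 10.0) -> float:
    """Worst case, the SLOG holds `window_s` seconds of incoming sync writes."""
    return ingest_bytes_per_s * window_s

gigabit = 1e9 / 8  # ~125 MB/s of sync writes arriving over 1 GbE
print(f"{slog_bytes_needed(gigabit) / 1e9:.2f} GB")  # prints: 1.25 GB
```

By this estimate, even a saturated gigabit link needs well under 2 GB of SLOG, so the 8 GB vHDDs in this thread are more than large enough; as noted above, latency rather than capacity is the limiting factor.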
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Ah, yes, my bad; I confused the VMs running from the same device with the L2ARC. The contention/performance concern is similar, though. You can use "zilstat" to monitor the ZIL and SLOG. The SLOG gets written over and over, since it only needs to keep the last 5-10 seconds of writes. Low latency is key for a SLOG: the more latency that gets added, the closer it gets to bare pool performance and the less useful the SLOG becomes.
Indeed, which is why it's a horrible idea to store virtual machine images on the SLOG device, as @brando56894 is doing. I/O contention between the VMs and the SLOG will destroy any benefit it could otherwise provide.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Indeed, which is why it's a horrible idea to store virtual machine images on the SLOG device, as @brando56894 is doing. I/O contention between the VMs and the SLOG will destroy any benefit it could otherwise provide.
Agreed!
 

brando56894

Wizard
Joined
Feb 15, 2014
Messages
1,537
Ok, some convincing has been done, and I guess I could rip out one of the Raptors (which currently isn't in use) and replace it with a 60 GB SSD that I could put the VMs on. :p Would it still even be beneficial, since FreeNAS doesn't have raw access to the device? I see it this way: my SSD (even loaded with I/O) is still faster than my pool, so it makes more sense to store it on the faster medium. If what you guys are saying is true, it would be quicker and result in far less downtime to simply remove the ZILs from the S3700 and put them back in the pool.

The S3700 does have built-in power protection, but I thought you were running the SLOG as a vmdk on that device, not accessing it directly via pass-through.

You are correct, I am, but since the battery/power protection is in the device itself, doesn't it still function as intended whether the data is written directly to the drive or to a datastore on it? After all, it's still data. I guess where it differs is that you have to make sure the data is correctly written inside the VMDK before it gets pushed to the pool, whereas with raw access it would already be on the drive, correct? I guess I'm still slightly confused about the necessity of battery/power protection in the drive, since it's non-volatile memory, unlike RAM, which obviously loses everything once it loses current.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
@brando56894
Exactly what is your FreeNAS system (VM) being used for? I ask this question all the time when someone is using an SLOG or L2ARC.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Good question about power. I forget the details, but I remember that it made sense, and relying on a vmdk seems risky. But I could be mistaken.
 

brando56894

Wizard
Joined
Feb 15, 2014
Messages
1,537
@brando56894
Exactly what is your FreeNAS system (VM) being used for? I ask this question all the time when someone is using an SLOG or L2ARC.
All my Linux VMs are connected to my pools via NFS.

My Storage pool (the largest one, 3 sets of 6 TB HGST mirrors) has the main duty of storing all my HD and SD movies (about 200, roughly 1.3 TB) and TV shows (about 8500 episodes, 5 TB) and sharing them out to two Plex servers (locally in VMs, shared via NFS) and two Kodi clients (via Samba). Its secondary task is to receive and store all the files coming from the Usenet VM (SABnzbd, CouchPotato, SickRage, and Transmission, which currently has 1.6 TB of seeding torrents that aren't symlinked). It also stores the metadata from Plex (only about 10 GB total). It's reading and writing data all the time.

My SafeKeeping pool (a mirrored set of 1 TB WD Red drives) generally holds a lot of WORM data, such as a dataset for random Windows downloads I don't want to hunt down again, a dataset for config-file backups from Linux (via BackupNinja, which just does tarball backups), a dataset for Windows game installers downloaded via Steam (about 300 GB), and a dataset for thousands of personal pictures and movies.

I decided to add a ZIL to the SafeKeeping pool just for the hell of it a day or so ago, even though I don't really need it since there aren't many writes to it, and an L2ARC alone would definitely be more beneficial.

Edit 1: I just copied a 36 GB file inside my Usenet VM, from one NFS share on Storage to another NFS share on Storage, and the average transfer rate was about 85 MB/sec. Definitely nowhere near optimal, but not absolutely horrible. The SLOG was definitely in use, since I was watching it with zilstat. Going to try a few more tests without the SLOGs, and then with the VMs on a different SSD.

Edit 2: Just removed both SLOGs from the pools (didn't shut down and remove the actual devices) and did the same test again, and got the same results. I'm using rsync to measure the bandwidth and copy the files. I know it's not as good as using dd on FreeNAS itself, but I don't feel dd accurately represents my day-to-day usage.
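As a sanity check on the numbers in the edits above, 36 GB at roughly 85 MB/s works out to a bit over seven minutes per copy (assuming GiB and MiB/s units):

```python
def transfer_seconds(size_gib: float, rate_mib_s: float) -> float:
    """Time to move size_gib GiB at a sustained rate of rate_mib_s MiB/s."""
    return size_gib * 1024 / rate_mib_s

t = transfer_seconds(36, 85)
print(f"{t:.0f} s (~{t / 60:.1f} min)")  # prints: 434 s (~7.2 min)
```

Getting the same ~85 MB/s figure with and without the SLOG is consistent with the copy being limited by something other than sync-write latency.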
 

toadman

Guru
Joined
Jun 4, 2013
Messages
619
Edit 2: Just removed both SLOGs from the pools (didn't shut down and remove the actual devices) and did the same test again, and got the same results. I'm using rsync to measure the bandwidth and copy the files. I know it's not as good as using dd on FreeNAS itself, but I don't feel dd accurately represents my day-to-day usage.

That's what I'd expect for your use case as stated above, i.e. I don't see a SLOG really benefiting you (confirmed by your testing). Simpler is better: if it's not helping, don't use it. :)
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
That's what I'd expect for your use case as stated above, i.e. I don't see a SLOG really benefiting you (confirmed by your testing). Simpler is better: if it's not helping, don't use it. :)
Makes sense. He'd probably gain more benefit from expanding his RAM to 128 GB than from using an L2ARC, too.
 

brando56894

Wizard
Joined
Feb 15, 2014
Messages
1,537
Makes sense. He'd probably gain more benefit from expanding his RAM to 128 GB than from using an L2ARC, too.
Captain Obvious to the rescue! Hahaha. I would love to throw another 64 GB of RAM in it (and I totally intend to), but I don't have the $400+ required to do so. I did have a 120 GB SSD lying around though, and once again, any good SSD is better than an HDD.

 