VM Best Practice for spinning rust robustness

Status
Not open for further replies.

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
I am starting to hone my skills and knowledge of FreeNAS and how things work. Something I didn't initially consider at all was VM usage. My question here is less about performance than about my spinning rust pool's longevity, specifically unnecessary wear.

I am running a Windows VM to do some things I couldn't get to work the way I wanted in jails, specifically Syncthing, but I have a few other things in mind I may want to add. Suffice it to say, I plan on playing with VMs, even if only rudimentary ones, for the lifetime of the server, with these VMs running 24/7/365.

This means there will be continuous I/O to the drives simply because a Windows VM is running on the pool, even when no users are connected and ZFS isn't otherwise doing anything. Sure, drives are made to be used, and I have had plenty of drives in Windows machines run 24/7 for 5+ years, but wear is wear and wear is undesirable. Would it make sense to throw an SSD into the system specifically for my VMs to live on? I am less concerned about raw performance, as I honestly don't do anything heavy enough on them to warrant the speed advantage; I am purely looking at reducing wear and potentially electricity, since the spinning rust should in theory see less use.

I do have a 128 GB SSD laying around; I actually have two identical 840 EVO 128 GB drives, one of which is currently in a Windows system. If it is inadvisable to use a single SSD, I could clone the Windows SSD to a new SSD (they are so cheap these days, I wouldn't mind doing this) and set the SSDs up as a mirror. Either way, I would back up the dataset on the SSD to the main pool (can you use ZFS replication within the same instance of FreeNAS? I believe the SSD would be in its own pool? Or would it be its own vdev? I am unsure of the correct terminology here; I know vdevs make up pools and drives make up vdevs, but I am not sure what this setup would look like) and would also use replication to back it up to an offsite backup server I am setting up as we speak.

Also, the VMs are not mission critical. If the SSD dies and my VMs are offline for a few days, it's really not a huge problem, at least at this stage of my FreeNAS adventure. If that changes, I can always clone the data, kill the SSD vdev (or pool?) and set up a mirrored array.

Any advice would be great!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I do have a 128 GB SSD laying around; I actually have two identical 840 EVO 128 GB drives,
I did this when I first tried VMs on FreeNAS using phpVirtualBox. It worked fine for me, but your mileage may vary because they are using a different virtualization layer now. Still, I would expect it to work.
(can you use ZFS replication within the same instance of FreeNAS?
Absolutely.
I believe the SSD would be in its own pool?
That is correct, because the level of redundancy is different.
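For what it's worth, here is a rough sketch of the resulting layout, using hypothetical device names (ada4 and ada5); in FreeNAS you would normally create the pool from the GUI under Storage, but the command-line equivalent is:

# create a new pool made of a single mirror vdev containing the two SSDs
zpool create ssdpool mirror ada4 ada5
# confirm the layout: one pool, one mirror vdev, two member drives
zpool status ssdpool

That is the drive -> vdev -> pool relationship you were asking about: the two SSDs form a mirror vdev, and that vdev is the only vdev in its own separate pool.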
Also, the VMs are not mission critical. If the SSD dies and my VMs are offline for a few days, it's really not a huge problem, at least at this stage of my FreeNAS adventure.
In that case, you might want to use the SSDs in a stripe instead of a mirror to get the extra speed and capacity, although I am not sure how much benefit that will have in practice.
Any advice would be great!
It is a learning adventure, but I think you are getting a good plan together. You will need to do some reading on how to do a snapshot and a zfs send and receive. Here is a link to a little guide on that:
http://blog.fosketts.net/2016/08/18/migrating-data-zfs-send-receive/
The process goes (roughly) like this: make a snapshot of the dataset to be backed up, then do a send, which can be directed to a file in the other pool. If you need to restore the backup later (at any time), you can send it back. The linked article gives more details, and you will want to experiment to make sure you understand how it works.
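To make that concrete, here is a minimal sketch using placeholder names (an SSD pool called "ssdpool" holding a "vms" dataset, and the main pool called "tank"); substitute your own pool and dataset names:

# take a snapshot of the dataset to be backed up
zfs snapshot ssdpool/vms@backup1
# send it to a file on the spinning rust pool...
zfs send ssdpool/vms@backup1 > /mnt/tank/backups/vms-backup1.zfs
# ...or replicate it straight into a dataset on the other pool
zfs send ssdpool/vms@backup1 | zfs receive tank/vms-backup
# restore later by sending it back the other way
zfs receive ssdpool/vms-restored < /mnt/tank/backups/vms-backup1.zfs

The Replication Tasks in the FreeNAS GUI automate this same snapshot/send/receive cycle on a schedule, and as far as I know you can even point them at the same machine for local replication.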
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
In that case, you might want to use the SSDs in a stripe instead of a mirror to get the extra speed and capacity, although I am not sure how much benefit that will have in practice.

It is a learning adventure, but I think you are getting a good plan together. You will need to do some reading on how to do a snapshot and a zfs send and receive. Here is a link to a little guide on that:
http://blog.fosketts.net/2016/08/18/migrating-data-zfs-send-receive/
The process goes (roughly) like this: make a snapshot of the dataset to be backed up, then do a send, which can be directed to a file in the other pool. If you need to restore the backup later (at any time), you can send it back. The linked article gives more details, and you will want to experiment to make sure you understand how it works.

I wouldn't do a stripe, simply because the risk of loss is an extra potential issue I don't need to introduce. Is it worth having a mirror? Maybe not. But is it worth putting them in a striped setup? Also probably not. I'd more likely buy a 256 GB drive in that case if space were an issue.

I have used ZFS replication, but I will take a look at this as well, thanks.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Keep in mind that wear is wear - spinning rust or SSD. Windows just likes to crunch on the drives over time (reads and writes). Whether that's spinning rust or an SSD, the wear still occurs... in fact, it could almost be worse on an SSD, depending on the drive's endurance.

If it helps, I've got more than 40 VMs (including 10 or so Windows) running on a 12-drive array. Those drives have been spinning about 10K hours (~405 days) and haven't failed yet. All HGST NAS drives.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Keep in mind that wear is wear - spinning rust or SSD. Windows just likes to crunch on the drives over time (reads and writes). Whether that's spinning rust or an SSD, the wear still occurs... in fact, it could almost be worse on an SSD, depending on the drive's endurance.

If it helps, I've got more than 40 VMs (including 10 or so Windows) running on a 12-drive array. Those drives have been spinning about 10K hours (~405 days) and haven't failed yet. All HGST NAS drives.
That is a pretty decent number of VMs. Is that using FreeNAS or ESXi?
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
That is a pretty decent number of VMs. Is that using FreeNAS or ESXi?
Three ESXi hosts, running NFS over 10GbE back to FreeNAS. I'm not a big believer in multi-task devices... I want my FreeNAS box to be a very good storage box, not a storage plus VMs plus etc. etc. etc. box.
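In case it helps anyone reading along, the storage side of that is just an NFS datastore mounted on each host. A minimal sketch, with placeholder names for the FreeNAS host and the exported dataset:

# run once per ESXi host; "freenas.lan" and "/mnt/tank/vmstore" are placeholders
esxcli storage nfs add -H freenas.lan -s /mnt/tank/vmstore -v freenas-vmstore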
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
Keep in mind that wear is wear - spinning rust or SSD. Windows just likes to crunch on the drives over time (reads and writes). Whether that's spinning rust or an SSD, the wear still occurs... in fact, it could almost be worse on an SSD, depending on the drive's endurance.

If it helps, I've got more than 40 VMs (including 10 or so Windows) running on a 12-drive array. Those drives have been spinning about 10K hours (~405 days) and haven't failed yet. All HGST NAS drives.

This is true. Wear is wear, I have no illusions about that. But I would think "simple" OS random reads and writes would be handled better by NAND than by spinning platters. Also, at this point in the game, SSDs, especially in the 128-256 GB range, are almost throwaway cheap, easily half the price of the 4 TB Reds the array is built with. I just assumed that since the NAS gets so little use (I am the only user, and I'll be honest, it was mostly built for fun and for me to learn on and play with), the drives shouldn't see much activity beyond normal ZFS operation, some automated SSH file replication from another server, and some Plex activity, so a Windows VM would be putting orders of magnitude more wear on them than would otherwise occur.

Maybe I am just trying to justify to myself that putting another SSD into it is a good idea lol.


Sent from my iPhone using Tapatalk
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
To be perfectly honest, I think you're just worried too much about it. You'll be fine either way. BTW, my VMs aren't simple... I'm running stuff like SQL Server, Splunk, Zimbra, and other stuff that generates quite a lot of reads and writes.

The big advantage to SSDs, if you only have small storage requirements, is performance. A pair of mirrored SSDs will absolutely thrash my 12-drive array from a performance (both bandwidth and IOPS) standpoint. But, I've got roughly 9TB of usable space (12x3TB, 50% loss for mirrors, another 50% loss as you can't exceed 50% pool utilization without performance going to hell). Buying 36TB of SSD would be cost-prohibitive.
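The arithmetic, roughly:

# 12 x 3 TB raw, halved by mirroring, halved again by the ~50% utilization guideline
echo $(( 12 * 3 ))         # 36 TB raw
echo $(( 12 * 3 / 2 ))     # 18 TB after mirrors
echo $(( 12 * 3 / 2 / 2 )) # ~9 TB of practically usable space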
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
To be perfectly honest, I think you're just worried too much about it. You'll be fine either way. BTW, my VMs aren't simple... I'm running stuff like SQL Server, Splunk, Zimbra, and other stuff that generates quite a lot of reads and writes.

The big advantage to SSDs, if you only have small storage requirements, is performance. A pair of mirrored SSDs will absolutely thrash my 12-drive array from a performance (both bandwidth and IOPS) standpoint. But, I've got roughly 9TB of usable space (12x3TB, 50% loss for mirrors, another 50% loss as you can't exceed 50% pool utilization without performance going to hell). Buying 36TB of SSD would be cost-prohibitive.

Yeah, your use and mine are very different. In my eyes, "pool performance going to hell" means no longer saturating gigabit with sequential reads and writes, lol. I don't foresee this being an issue with my setup, even if the VMs were busy and living on the 10x4TB pool. But, like I said, I have the SSD literally sitting on my desk not being used. Might as well use it, I suppose.


Sent from my iPhone using Tapatalk
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
To be perfectly honest, I think you're just worried too much about it.

I'm going to echo this sentiment. The most important factor affecting wear on hard drives is the operating conditions. Things like temperature and vibration are going to have a much bigger impact on your drives' longevity than any increased wear from extensive I/O, as will wear from starting and stopping.

SSDs are more sensitive to I/O, largely because NAND has a limited write life. However, from a practical perspective (and especially thanks to technology like wear leveling), hitting that ceiling essentially never happens, because it requires continuous, extensive writes over many years. I'd be extremely surprised if you were moving around that much data in a home environment.
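To put a rough number on it (the figures here are hypothetical, since endurance ratings vary by drive): with, say, a 70 TB write-endurance rating and a light-duty VM writing around 20 GB per day, you would not hit the ceiling for the better part of a decade.

# hypothetical numbers: 70 TB (~70,000 GB) rated endurance, 20 GB written per day
echo $(( 70000 / 20 / 365 ))   # roughly 9 years to reach the rated write limit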
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
I'm going to echo this sentiment. The most important factor affecting wear on hard drives is the operating conditions. Things like temperature and vibration are going to have a much bigger impact on your drives' longevity than any increased wear from extensive I/O, as will wear from starting and stopping.

SSDs are more sensitive to I/O, largely because NAND has a limited write life. However, from a practical perspective (and especially thanks to technology like wear leveling), hitting that ceiling essentially never happens, because it requires continuous, extensive writes over many years. I'd be extremely surprised if you were moving around that much data in a home environment.

Yea, I don’t ever plan to kill NAND from writing past it’s designed capacity. Even my spinning rust pool, from a viewpoint of a datacenter sysadmin, it would be considered cold storage lol. Powered on cold storage haha.

I do wish the case I have had provisions for vibration reduction, but unfortunately it doesn't... I suppose you are likely correct that this will be my biggest issue in the long run, since temps are a non-factor, usually in the high 20s (°C).


Sent from my iPhone using Tapatalk
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
temps are a non-factor, usually in the high 20s (°C).
Running spinning drives too cold can actually be as bad for their lifespan as running them too hot. There's a study produced by Google ("Failure Trends in a Large Disk Drive Population") looking at their drive failures, with detailed data correlating temperature to those failures. The data shows a bathtub curve of failure versus temperature: run the drives too hot or too cold and failure rates increase.
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
Running spinning drives too cold can actually be as bad for their lifespan as running them too hot. There's a study produced by Google ("Failure Trends in a Large Disk Drive Population") looking at their drive failures, with detailed data correlating temperature to those failures. The data shows a bathtub curve of failure versus temperature: run the drives too hot or too cold and failure rates increase.

Interesting. What was "too cold"? Mine just run cool because they are rarely doing anything. Ambient room temp ranges from about 60°F in winter when I'm not there to 80°F in summer when I'm not there. I have yet to run the system through a summer, though, so I don't know what temps I will see in that scenario.


Sent from my iPhone using Tapatalk
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
As I recall, the ideal temperature range was between 35 and 40 °C. I would suggest searching for the study and reviewing its conclusions yourself.
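If you want to see what your own drives report, SMART exposes the temperature. A quick check from the FreeNAS shell (ada0 is a placeholder for whichever device you want to query):

smartctl -A /dev/ada0 | grep -i temperature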
 