Currently on Xpenology, thinking of FreeNAS


Benoire
Dabbler · Joined Mar 3, 2016 · Messages: 19
Hi

I'm thinking of changing my current storage setup. For context, I'll describe my full server setup. Apologies in advance; this might be a long post!

I have a 3-node vSphere cluster with vCenter and HA/DRS/vMotion enabled. I use StarWind vSAN (free version) with DirectIO enabled to give each Windows VM local storage, which is then shared via iSCSI to all 3 machines. The vSAN is set up as a mirrored array so that VMs keep running in the case of a single host failure.

On one of these hosts I presently run an Xpenology VM using DSM 6.1, with 3 vCPUs and 6GB of RAM assigned. This host is an X8DTE-F system with a quad-core Westmere Xeon and 32GB of DDR3 RDIMMs, running in a 16-bay Supermicro chassis (12 bays assigned to the Xpenology VM, 4 to the vSAN VM) with 2 LSI 9211-8i cards in passthrough. The Xpenology system is used purely for general house storage, VM backups and the like; it uses Docker to back up the storage array to CrashPlan.

The storage is set up as SHR-2 (2-disk-parity hybrid RAID) using 5 disks at present: 3 WD Red 3TB drives and 2 enterprise Seagate 2TB drives, formatted with BTRFS.

I'm always concerned that Xpenology/DSM may just fail, or that an update may kill everything, given that it's unsupported and Synology likes to change security implementations every time they update. I'm also concerned about BTRFS in general; given that DSM uses mdadm rather than BTRFS's own RAID, it probably won't really protect against bitrot... So I keep looking at FreeNAS and wondering if I can really use it. To be clear, I do not intend to run FreeNAS as VM storage for the vSphere hosts; the VMs are handled through vSAN. FreeNAS would be replacing Xpenology as general storage.

So I have a number of themes I want to ask about!

1) Drive setup / vdevs etc.

As mentioned above, I use a hybrid RAID to allow for mixed drive sizes. My understanding is that FreeNAS doesn't support this approach; is this ever coming, or is that just wishful thinking?

As I want to keep drive replacement costs to a minimum (I can't afford to replace 12 drives every time I want to increase capacity), and you can't expand a vdev once it's created, I was thinking of using RAIDZ1 with 4 drives per vdev. That would allow me up to 3 vdevs to expand storage as necessary. Does this make sense? How much stress would a RAIDZ1 vdev be under when rebuilding from a drive failure, and what is the consensus on the risk of a vdev failing, given that once a vdev fails, the zpool is also gone?

In simple terms:

vDev1 (RAIDZ1)    vDev2 (RAIDZ1)    vDev3 (RAIDZ1)
Drive 1           Drive 5           Drive 9
Drive 2           Drive 6           Drive 10
Drive 3           Drive 7           Drive 11
Drive 4           Drive 8           Drive 12

Would that effectively give me 3 disks of parity across the 3 vdevs?
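In zpool terms, I imagine it would be built something like this (just a sketch to show the layout I mean; "tank" and the da0-da11 device names are placeholders for whatever the controllers expose):

    # One pool built from three 4-disk RAIDZ1 vdevs; ZFS stripes data
    # across the vdevs, and each vdev carries its own single-disk parity.
    zpool create tank \
        raidz1 da0 da1 da2  da3 \
        raidz1 da4 da5 da6  da7 \
        raidz1 da8 da9 da10 da11

    # Status should show each raidz1 group as a separate vdev in the pool.
    zpool status tank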

2) Apps etc.

I currently have installed the following:

Hyper Backup - backs up Docker config and containers to my OneDrive, so that if the machine fails I can get those containers back up quickly and grab my data from CrashPlan.
Docker - CrashPlan, TVheadend - as described by their names!
Download Station - very good download program, BitTorrent etc.
Emby/Plex Server - no description needed!

What does FreeNAS have in the way of these? Can I use Docker as I am familiar with this? Is there something equivalent to download station?

I've got Westmere CPUs, which have more virtualisation extensions than the Nehalem chips, so I presume no issues running Docker?

3) FreeNAS as a VM with the same config as Xpenology

As I utilise an alternative way of providing HA access to VMs, my purpose for FreeNAS is the same as for Xpenology: general storage, but this time on a supported platform with better data protection. Are there any issues with running FreeNAS as a VM? Can I run jails and Docker images with FreeNAS as a VM? Anything I need to be aware of?

I'll probably end up migrating from a VM to bare metal, but that would happen once I move the servers to lower-powered units.

Thanks for your time in reading this far-too-long and possibly ill-formatted post!

Any thoughts or comments are really appreciated.

Cheers

Chris
 

Chris Moore
Hall of Famer · Joined May 2, 2015 · Messages: 10,080
You have a complicated question here, and the people who can really dig into this with you will probably have to get home from work and have a cool beverage before they answer more of your questions. I can't really dig into it much myself, but I am on my lunch, so I will touch on one thing quickly.
The zpool, in your illustration, is spanned across three vdevs (virtual devices), and each vdev has its own parity/checksum. It doesn't work exactly this way, but you could think of the zpool as a RAID-0 that spans three RAID-5 arrays. If one of the vdevs fails, you lose the entire pool. It is for this reason that the community recommendation has been to use RAIDz2 with drives larger than 1TB, due to the possibility of a second drive failure taking the vdev out while it is already in a degraded state.

Due to the nature of your configuration, running VMs, you probably need higher IOPS, so you might want to consider using 3-way mirrors, but that may not give you enough storage space. If I were in your situation, I would seriously consider a chassis that can accommodate more drives. More drives gives more fault tolerance, but there is a cost to everything.
Keep in mind that more vdevs increases IOPS.
If you went to a 24-bay chassis, you could use 4 vdevs of 6 drives each, with each vdev as RAIDz2, which can have a drive fail and still provide some redundancy.
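Back-of-the-envelope, assuming 3TB drives everywhere and ignoring ZFS overhead, the trade-off looks like this:

    # Usable space = vdevs x (drives per vdev - parity drives) x drive size
    # 3 x RAIDZ1 of 4 drives: 3 * (4 - 1) * 3TB = 27TB, tolerates 1 failure per vdev
    # 4 x RAIDZ2 of 6 drives: 4 * (6 - 2) * 3TB = 48TB, tolerates 2 failures per vdev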
You also stated that you can't grow a vdev; that is true only insofar as you can't add more drives to it. You can replace one drive at a time until all the drives in a vdev are larger, and then the vdev will auto-expand to the new total capacity. I did this when I moved from 1TB drives to 2TB drives. It works nicely.
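From the command line, that in-place upgrade is just a series of replace operations (a sketch; "tank" and the device names are placeholders, and on FreeNAS you would normally do this through the GUI):

    # Let the pool grow automatically once every disk in a vdev is larger.
    zpool set autoexpand=on tank

    # Swap one old disk for a bigger one, then wait for the resilver to
    # finish (zpool status shows progress) before replacing the next.
    zpool replace tank da0 da12
    zpool status tank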
There are reasonably priced options for 24bay chassis if you are interested in that possibility.
Here is one that I would buy myself: http://www.ebay.com/itm/SuperMicro-...2-6Gbp-Expander-2xPSU/162597691498?rmvSB=true
I know that there are some folks on here who use FreeNAS from within ESXi; however, it is not officially supported. I think @Stux is one who might be able to comment on that.
 

Chris Moore
Hall of Famer · Joined May 2, 2015 · Messages: 10,080
Benoire said:
Hyper Backup - backs up Docker config and containers to my OneDrive, so that if the machine fails I can get those containers back up quickly and grab my data from CrashPlan.
Docker - CrashPlan, TVheadend - as described by their names!
Download Station - very good download program, BitTorrent etc.
Emby/Plex Server - no description needed!

What does FreeNAS have in the way of these? Can I use Docker as I am familiar with this? Is there something equivalent to download station?
My answer to this cannot be very comprehensive, as I don't use them, but I do use Plex on my FreeNAS and it works nicely.
I will direct you to the list of Plugins in the documentation: http://doc.freenas.org/11/plugins.html
These plugins run in "Jails" and you can look into that and see if it answers your question.
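Just to give you a feel for it, creating a jail from the shell looks roughly like this (assuming FreeNAS 11.x with iocage; the jail name and address here are made up):

    # Fetch the base release once, then create and start a jail for Plex.
    iocage fetch -r 11.1-RELEASE
    iocage create -n plexjail -r 11.1-RELEASE vnet=on ip4_addr="vnet0|192.168.1.50/24"
    iocage start plexjail

    # Install the server from FreeBSD packages inside the jail.
    iocage exec plexjail pkg install -y plexmediaserver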
 

Stux
MVP · Joined Jun 2, 2016 · Messages: 4,419

Benoire
Dabbler · Joined Mar 3, 2016 · Messages: 19
Thank you all so far. So, to recap: FreeNAS as a VM is fine, which I thought but wanted to confirm. RAIDZ1 isn't ideal even with small vdevs because the potential for failure during a rebuild is high - but is it really such a problem with a 4-disk vdev? And there are a number of apps/jails which can be used in a similar manner to my Synology setup?

I guess for general storage that isn't powering live VMs, is 6GB of memory going to be enough? I realise FreeNAS likes a lot of RAM if possible, but I wondered which use cases really require lots, and whether less is OK for plain general storage?
 

Chris Moore
Hall of Famer · Joined May 2, 2015 · Messages: 10,080
Benoire said:
Thank you all so far. So, to recap: FreeNAS as a VM is fine, which I thought but wanted to confirm. RAIDZ1 isn't ideal even with small vdevs because the potential for failure during a rebuild is high - but is it really such a problem with a 4-disk vdev? And there are a number of apps/jails which can be used in a similar manner to my Synology setup?

I guess for general storage that isn't powering live VMs, is 6GB of memory going to be enough? I realise FreeNAS likes a lot of RAM if possible, but I wondered which use cases really require lots, and whether less is OK for plain general storage?
There is a lot of documentation and experience that backs the suggestions that you find on the forum. Many of us, myself included, have done things in a way that didn't work out well and we are here to help others not make the same mistakes we made.
ZFS and disks: storage-disks-and-controllers
Some of the "rules" with regard to how to set up your disks are really more of a best-practices guide based on an abundance of caution. I use 2TB disks for my storage at home, and it takes 3 to 4 hours to resilver if I have to replace one. At work, we have some systems with 4TB disks that can take 8 to 10 hours to resilver, and on a system with 6TB drives I just had to replace a drive and it took over 36 hours.

In a RAIDz1 pool, if you have to replace a drive (which is demanding on the other drives) and one of the 'good' drives decides to fail while you are rebuilding, the vdev is lost, and if there are other vdevs in the pool, the whole pool is lost with it. In my situation at home, 3 to 4 hours of exposure with no redundancy might be acceptable, but what if it were half a day or more? It just depends on your willingness to accept the risk of losing your data; you are the only one who can make that decision. I want you to have the information to make the choice that suits you, because once you choose and put data on it, you are pretty much stuck with it. You can't change a vdev once it is established, except to replace the drives with larger ones. I have worked with hardware RAID controllers that would let you refactor an array with data already on it, but ZFS isn't like that.
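While a resilver is running, you can watch that window of exposure directly ("tank" is a placeholder pool name):

    # Shows which vdev is degraded, resilver progress, and an estimated
    # completion time.
    zpool status tank

    # Regular scrubs surface weak disks before a replacement forces the issue.
    zpool scrub tank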
ZFS Primer
Memory: Have a look at the documentation ... it says 8GB is the minimum. I run 16GB in one NAS and 32GB in the other, and I would get more if my budget allowed. More memory is generally better, especially if you want to use the virtualization features (jails, plugins, etc.) that FreeNAS has to offer.
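If you want to see how much of that memory ZFS is actually using for its cache on a running box, the FreeBSD sysctls show it (this assumes you are at the FreeNAS shell):

    # Current ARC size and its configured ceiling, in bytes.
    sysctl kstat.zfs.misc.arcstats.size
    sysctl vfs.zfs.arc_max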

Take a look at the documentation I linked to, and have you reviewed the Hardware Recommendations Guide?
 