"hyperconverged" build idea

Status
Not open for further replies.

vrod

Dabbler
Joined
Mar 14, 2016
Messages
39
Hi all,

I'm thinking about giving my home server a bump up in storage space (and speed as well). I'm currently running a "hyperconverged" system (ESXi 6.0U2 with a FreeNAS VM) with these specs:

S2600CP4 w/ 2x Xeon E5-2640
256GB DDR3 ECC (50% reserved for FreeNAS VM)
12 x 2TB mixed drives (Seagate Barracuda LP / HGST Ultrastar) in a 2x Raidz2 pool (6 drives in each vdev)
Passthrough of onboard Intel SAS controller + additional LSI 9211 controller.

The ESXi host connects through NFS to the pool on the FreeNAS VM. However, I'm thinking of going to iSCSI since many seem to recommend it. I also run Plex and some other services that connect to FreeNAS directly over NFS.
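For reference, the NFS datastore on the ESXi side is just mounted with esxcli, something along these lines (the IP, export path and datastore name here are placeholders, not my actual values):

# mount the FreeNAS export as a datastore and check it shows up
esxcli storage nfs add -H 192.168.1.10 -s /mnt/tank/vmstore -v freenas-nfs
esxcli storage nfs list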

However, due to the lack of performance, the pool nearing full capacity, and being tired of changing drives all the time (I got these for free), I'm thinking of giving it a bump up:

- Completely new VM for a new FreeNAS instance
- 8x WD Red Pro 6TB (all on the single passed-through LSI 9211 controller)
- 2x Intel 750 800GB SSDs (already in hand)

Now to the questions:

- I've read various posts on NFS vs iSCSI for ESXi and FreeNAS. Some say NFS is better, some say iSCSI. The only con I have with NFS so far is that my ESXi host at some random point stopped using the correct vmkernel to connect to the NFS storage. Several other people have had this issue as well, so would that mean iSCSI is the better choice?
- Since I'll be doing an 8-disk pool, would mirrors be best or is RAIDZ2 the way to go (a rough sketch of the two layouts follows after this list)? I'm not too fond of losing 12TB of capacity for a small performance gain. The NAS is used for movies, backups and to host storage for a few VMs as well as my vSphere lab environment, and I would like caching to help with performance.
- I am looking into the possibility of splitting the two Intel 750 SSDs. These devices are so powerful that I would like to use them for both L2ARC and ZIL (SLOG) if that's possible. Or is this just a bad idea?
- Could someone recommend a good UPS which is certified to work with FreeNAS?
- I also have the option of making the system physical, on another S1200BTL (Xeon E3-1230) setup with 32GB of ECC RAM. Would it be better to go physical rather than virtual?
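To make the mirror vs RAIDZ2 trade-off concrete, the two layouts I'm weighing would roughly be created like this (pool and device names are just placeholders):

# 4x 2-way mirrors: ~24TB usable (4 x 6TB), best IOPS for VM storage
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7

# one 8-disk RAIDZ2 vdev: ~36TB usable (6 x 6TB), so mirrors give up ~12TB
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7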

Thank you,
Chris
 

brando56894

Wizard
Joined
Feb 15, 2014
Messages
1,537
My setup isn't nearly as powerful as yours, but I am also running ESXi 6 and FreeNAS 9.10. I have my M1015 HBA passed through to FreeNAS with my main pool of 6 drives (striped mirrors) connected to it, along with 64 GB of DDR4 and an 8-core Xeon D-1540. I haven't done any empirical tests to prove this, but I feel that my system is definitely slower than if I ran FreeNAS natively. I have also been using NFS to connect my other VMs to FreeNAS, but ESXi itself doesn't use FreeNAS for storage. I attempted to do that with iSCSI, but it made everything slow to a crawl since it was way too much IO, not to mention it complicated the hell out of everything.

Myself, I can't wait until FreeNAS 10 is released so I can ditch ESXi; I may actually do it next week if Beta 2 is up to snuff. I'd say make it physical, since it will largely reduce headaches and will most likely help with performance.
 

vrod

Dabbler
Joined
Mar 14, 2016
Messages
39
Hey,

I did read your answer a while ago, but life has been busy so I haven't had the time to reply. I ended up just going for 6x 6TB WD Reds. I had the two 800GB Intel 750 SSDs (NVMe) left over; I had actually tried to sell them, but people wouldn't pay a buck for them for some reason. So I've turned them into a mirrored SLOG (a 20GB partition on each).

The NAS currently has 8 cores (2x 4 cores), 64GB of DDR3, and the NVMe drives plus the M1015 (flashed to IT mode) controlling the spinning drives. Boot storage is 2x 30GB VMDKs mirrored onto two different SSDs/datastores.

I'm looking forward to FreeNAS 10 as well, but I don't think I would ditch ESXi; it's a pretty solid hypervisor :). The array seems to work pretty damn well, and the NVMe SLOG sped up NFS performance big time, but I'm looking into whether I could also make e.g. a 50-60GB partition for L2ARC. Or would it be better to allocate more RAM? I have 256GB in the hypervisor, but I have a few other VMs running for labs and VCP training.

Could anyone suggest a good tool to test the true performance of an array? Before adding the NVMes as SLOG I zeroed them completely, and during the zeroing they ran at around 800 MB/s, so I guess the array should be able to perform around that for writes (since many people here say the speed of the SLOG will determine the speed of the array). Another side question: since I'm only using 20GB out of the 800 (742 to be precise), I guess the lifespan of the SSDs should be a lot longer, right? :)
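If nothing better comes up, I guess the crudest test would be dd against a dataset with compression turned off so the numbers aren't inflated, something like this (tank is a stand-in for my actual pool name):

# temporary dataset without compression, so /dev/zero isn't compressed away
zfs create -o compression=off tank/speedtest
# sequential write, large enough that it doesn't all land in ARC
dd if=/dev/zero of=/mnt/tank/speedtest/testfile bs=1M count=16384
# sequential read back
dd if=/mnt/tank/speedtest/testfile of=/dev/null bs=1M
zfs destroy tank/speedtest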

Cheers,
Chris
 

brando56894

Wizard
Joined
Feb 15, 2014
Messages
1,537
Pretty nice setup! I actually switched to FreeNAS a few days before Beta 2 was released and I've really had no issues. ESXi is definitely stable, but the slow-as-"molasses in winter" GUI made me want to stab myself, haha. I'm looking to get an NVMe drive as well, since I have an M.2 slot on my board, but I don't know what I should use it for yet. I'm considering using it either for an L2ARC or as a volume (pool) for my VMs; I'll probably go with the latter since it's a better use of the space, and one NVMe drive will absolutely destroy my current pool's write speed of 500 MB/sec and read speed of 275 MB/sec. I currently have a 128 GB 850 Evo as my L2ARC, but I can't tell how much it's actually being utilized, since there seem to be no ARC monitoring tools available yet other than the graph in the dashboard :-/ I also have space for another 64 GB of RAM in my system.

If I were you, I would use one NVMe drive as a datastore, since you could utilize all of its space and all the VMs would benefit from it instead of just the pool within FreeNAS that the SLOG is attached to, and use the other one for the SLOG on your pool (I don't think there's much benefit to having mirrored SLOGs). I'm pretty sure it's not possible to use one device for both an SLOG and an L2ARC; at least that was the case with FreeNAS 9.x. Even though I have an SLOG too (Intel S700), 98% of the drive is never touched, and I would love to use the extra space for something. The only way I see this being possible would be to create two VMDKs in ESXi on one of the NVMe drives and use one for an SLOG and one for an L2ARC. SLOGs are all about latency, and adding an abstraction layer definitely adds latency, but considering NVMe drives have ridiculously low latency compared to SATA SSDs, I don't think it would be a problem. No matter what, more RAM is always better than an L2ARC if you have room for it.
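That said, if partitions work for this the way they did for your mirrored SLOG, splitting a single NVMe into an SLOG slice and an L2ARC slice would look roughly like this (pool name, device and sizes are just examples; I haven't tried it myself):

gpart create -s gpt nvd0
gpart add -t freebsd-zfs -s 20G nvd0    # small SLOG slice
gpart add -t freebsd-zfs -s 60G nvd0    # L2ARC slice
zpool add tank log nvd0p1
zpool add tank cache nvd0p2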
 

vrod

Dabbler
Joined
Mar 14, 2016
Messages
39
Cheers for your input. I did actually try using one of the NVMes as a test datastore and it was pretty damn fast. However, since I currently have 14TB available in the pool, I think I want to try out the idea of using them as a mirrored SLOG. If I change my mind, I guess I can just remove them and repartition them for a new pool. :)

I used gpart to make the 20GB partitions and then just added the NVMes as nvd0p1 and nvd1p1 with zpool add. I guess using them for L2ARC would just require adding another partition through zpool add, but as you mentioned it might be smarter (and easier) to bump up the memory some more. After all it's just at home, so 128GB of ARC should be fine :D
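For reference, what I did for the mirrored SLOG boils down to roughly this (tank is a stand-in for my actual pool name):

gpart create -s gpt nvd0
gpart create -s gpt nvd1
gpart add -t freebsd-zfs -s 20G nvd0
gpart add -t freebsd-zfs -s 20G nvd1
zpool add tank log mirror nvd0p1 nvd1p1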

The system is still very new and I have Observium running on a VM to monitor everything. I'll see how things turn out. :)
 