System Design/Build and Performance Questions


jeebs01

Dabbler
Joined
Jun 23, 2012
Messages
16
Summary

I am migrating from an OpenFiler system that serves the following roles:

VMware ESXi iSCSI for ~25 VMs, spinning up 5-10 on demand
Backup DataStore
Bulk File Storage
2x 1080p MythTV Recording Streams

I recently added a couple of database-centric applications that generate simply too many writes for my current disk setup, and I need to expand the system.

This setup is in a personal residence, but is a demo lab for enterprise technologies, bulk storage, multimedia, etc.

My Current Setup
OpenFiler 2.99 (feels crusty after testing FreeNAS)
HighPoint 4320 SATAII RAID Controller
Intel RESV240 SAS Expander
Asus Z8PE-D12 Motherboard
Xeon 5606 Proc
24 GB ECC Memory
8x 2TB WD Green (Bulk)
8x 500GB WD Scorpio (ESXi Primary)
3x 750GB WD Scorpio (ESXi Bulk/Spares - the "on demand" machines)
2x 1TB WD drives (Backup)

Notes: The RAID controller is way overtaxed now that I have added the database applications to the system. Things run OK if I turn off a few VMs; I feel it's just the constant writes that are hosing my setup.

The New Proposed System
Keeping the core setup, I plan to add:

LSI 9260 RAID Controller
2x 500GB Samsung 850 EVO (intended for database and other high-I/O applications)
Additional Memory, 24GB -> 32GB
Add 2x 500GB WD Scorpio Black (no longer used in laptop)

The system would have the following intended layout, with all drives presented as JBOD rather than using the controllers' RAID. The drives will be split between the two controllers: I planned for the 8x 2TB RAIDZ2 to remain on the HighPoint 4320, with the remaining drives connecting through the LSI 9260 and RESV240. (A rough sketch of how these groups would map to ZFS pools follows the list.)

8x 2TB RAIDZ2
10x 500GB RAID10
3x 750GB RAIDZ (known shortcomings here; used for bulk VM storage, lab, etc.)
2x 1TB WD for Backups (RAID1)
2x 500GB SSD (RAID1)
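
Here is a rough sketch of how I picture these groups mapping onto ZFS pools. The pool names and da* device numbers are placeholders, not real device IDs, and nothing is executed; it is just a dry-run of the commands I have in mind:

```python
# Dry-run sketch of the intended pool layout. Pool names and da* device
# numbers are placeholders, not actual FreeNAS device IDs; nothing is executed.
commands = [
    # 8x 2TB WD Green as a single RAIDZ2 vdev (bulk storage)
    "zpool create bulk raidz2 " + " ".join(f"da{i}" for i in range(0, 8)),
    # 10x 500GB Scorpio as striped mirrors (ZFS equivalent of RAID10, ESXi primary)
    "zpool create vmdata " + " ".join(
        f"mirror da{i} da{i + 1}" for i in range(8, 18, 2)),
    # 3x 750GB Scorpio as RAIDZ (lab / on-demand VMs)
    "zpool create lab raidz da18 da19 da20",
    # 2x 1TB WD as a mirror (rotating backups)
    "zpool create backup mirror da21 da22",
    # 2x 500GB 850 EVO as a mirror (database / high-I/O VMs)
    "zpool create ssd mirror da23 da24",
]
for cmd in commands:
    print(cmd)
```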

Questions That I Have Had

VDEV -> ZPOOL Mapping

If I add the SSD vdev and the ESXi Primary vdev (10x 500GB) to the same zpool, will high-access data be moved from the Primary vdev to the SSD vdev automatically? I have done some reading and it seems like this can be a feature of ZFS (?), but I wasn't sure if it is available and/or a good idea. If this is possible, I would add a couple more SSDs and go to a RAID10 layout to alleviate the redundancy concerns.
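
To make the question concrete, this is roughly the scenario I am asking about (pool and device names are made up, nothing is executed):

```python
# Illustration of the question only; pool/device names are made up.
setup = [
    # Existing ESXi primary pool built from striped mirrors of 500GB Scorpios
    "zpool create vmdata mirror da8 da9 mirror da10 da11",
    # Then add the SSD mirror as another data vdev in the *same* pool
    "zpool add vmdata mirror da23 da24",
]
for cmd in setup:
    print(cmd)
# The question: with both vdev types in one pool, does ZFS move the
# hot/high-access blocks onto the SSD vdev by itself, or does it just
# stripe new writes across all the vdevs?
```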

ZIL/L2ARC

The age-old question, based on all the talk on this forum: would a ZIL (SLOG) and/or L2ARC be beneficial for this layout? I get the impression that the setup would add complication with minimal benefit for me. While the database will create a handful of sequential writes, I didn't get the impression it would be worth it. As for the L2ARC, I do not plan on having more than 4x 1Gbps NICs.
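
For clarity, the variant I was weighing would just be log and cache devices attached to the ESXi pool (device names below are placeholders, nothing is executed):

```python
# The SLOG/L2ARC variant I was weighing; device names are placeholders.
commands = [
    # Mirrored SLOG (ZIL device) for synchronous writes, e.g. a small SSD pair
    "zpool add vmdata log mirror da25 da26",
    # Single L2ARC (read cache) device; L2ARC needs no redundancy
    "zpool add vmdata cache da27",
]
for cmd in commands:
    print(cmd)
```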

iSCSI v NFS for ESXi

I have always run iSCSI and like the block-level share, but I understand it is a different setup in FreeNAS vs. OpenFiler. I read that both require tuning to be more efficient. Any quantitative thoughts one way or the other? I will admit that, as I am typing this, I have not looked too deeply into side-by-side speed comparisons.
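
For context, this is how I understand the two options map onto ZFS objects on the FreeNAS side (dataset/zvol names and sizes are placeholders, nothing is executed):

```python
# How the two ESXi datastore options map to ZFS objects, as I understand it;
# names and sizes are placeholders.
commands = [
    # iSCSI: block-level, backed by a zvol that gets exported as a LUN
    "zfs create -V 1T vmdata/esxi-lun0",
    # NFS: file-level, backed by a plain dataset exported to the ESXi hosts
    "zfs create vmdata/esxi-nfs",
    # ESXi's NFS client issues sync writes, so the sync setting matters here
    "zfs get sync vmdata/esxi-nfs",
]
for cmd in commands:
    print(cmd)
```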

HotSwapping Backup Drive

I hot-swap a backup drive and keep the data in a safe onsite, rotating the drives regularly. I have read that hot-swap isn't a problem as long as the controller underneath supports it. Are there any odd things to expect when legacy data is brought back into the system? Would it be easier to keep this on hardware RAID or let the vdev handle it? I would expect a vdev to rebuild just as any other RAID system would, but I was curious about the impact on the overall system of a scheduled 1TB mirror (RAID1) rebuild.
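
For reference, this is the rotation I had pictured if the backup mirror lives on ZFS instead of the RAID card (pool and device names are placeholders, nothing is executed):

```python
# The backup-drive rotation I had pictured on the ZFS side; placeholders only.
rotation = [
    "zpool offline backup da22",       # cleanly detach the outgoing drive
    # ... physically swap in the drive coming back from the safe ...
    "zpool online backup da22",        # if it was a previous pool member, it
                                       # resilvers only what changed while away
    "zpool replace backup da22 da25",  # alternative if it is a brand-new drive
    "zpool status backup",             # watch the resilver; ZFS copies only
                                       # blocks in use, not the whole 1TB
]
for cmd in rotation:
    print(cmd)
```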

Closing

Thanks for reading if you have made it this far. If there are any suggestions for additional reading, please feel free to point me in the right direction. I have begun reading various parts of the 9.3 docs as well as a great PowerPoint introduction, and there is a large quantity of ZFS-centric documentation around the web as well. Once the hardware is figured out, it will be on to the migration strategy. :)
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
You will need much more RAM, and no RAID controllers, just plain SAS HBAs. There is no automatic moving of data (regarding your SSD question).

Your questions show that you really need to learn much more about ZFS. Get yourself a FreeNAS test setup and recreate your lab environment. When you have a more advanced understanding of FreeNAS and ZFS, then we can talk about migrating, or about whether iSCSI makes more sense than NFS and why. Right now you are miles away from running this on a production machine, or from talking about what hardware you would need.
 

msignor

Dabbler
Joined
Jan 21, 2015
Messages
28
Sorry to hijack - what kind of performance are you seeing with the 8x Scorpio drives? I can only manage to get about 250 IOPS out of them at best in a RAID6.
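
For what it's worth, a rough back-of-envelope check, assuming ~80 random IOPS per 5400 RPM 2.5" drive and the classic RAID6 write penalty of 6, lands in the same ballpark for a read-heavy mix:

```python
# Back-of-envelope IOPS estimate for 8 laptop-class drives in RAID6.
# Per-drive IOPS, write penalty, and read/write mix are all assumptions.
drives = 8
iops_per_drive = 80        # assumed ~5400 RPM 2.5" drive, random I/O
write_penalty = 6          # classic RAID6 cost: ~6 disk ops per logical write
read_fraction = 0.7        # assumed read-heavy VM workload

raw = drives * iops_per_drive                                       # ~640 disk IOPS
cost_per_logical_io = read_fraction + (1 - read_fraction) * write_penalty
print(f"estimated logical IOPS: {raw / cost_per_logical_io:.0f}")   # ~256
```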
 

jeebs01

Dabbler
Joined
Jun 23, 2012
Messages
16
@zambanini - Fair enough. I knew RAM was/is always a shortcoming with ZFS, and that my deeper knowledge of its inner workings is limited.

@msignor - I know the IOPS are limited with the Scorpios. My handful of VMs with higher demand would be moved to the SSDs. I was not intending to make something out of nothing, but to align the needs appropriately.

Thanks for reading, folks. I suppose this is tabled for the near future to determine the best fit.
 

msignor

Dabbler
Joined
Jan 21, 2015
Messages
28
@msignor - I know the IOPS are limited with the Scorpios. My handful of VMs with higher demand would be moved to the SSDs. I was not intending to make something out of nothing, but to align the needs appropriately.

I was curious because I have the exact same setup. How is your performance? I am only able to get about 250 or so IOPS. SMB and CIFS, etc. are nice and quick though!
 