BUILD Critique my new environment/FreeNAS build for VMware P2V.


tastyratz (Cadet) · Joined Jul 24, 2014 · Messages: 1
So I have been doing a ton of searching and reading on this forum, as well as trying my best at some planning. I really hate asking questions, so forgive me if my freshness shines through here. I was hoping for a little critique of my game plan, and I apologize for the wall of text; I wanted to be thorough:

I am looking to deploy FreeNAS as my storage solution for a P2V virtualization project here at work. This will be used in production at an SMB. It seems like more IOPS for my $ than a RackStation.

I inherited a half dozen Core 2 Duo desktop PCs with Win 2k3 and a few gigs of RAM at best. Yikes.

My plan thus far is to...

Buy:
-3x identical refurb machines from Mr Rackables:
*Supermicro SuperChassis SC846E26-R1200B
*SAS2-846EL2 backplane
*Supermicro X8DT3-F motherboard
*Dual Intel 6-core Xeon X5650 (12M cache, 2.66 GHz, 6.40 GT/s)
*96GB DDR3 ECC REG memory
*24x 3.5" hard drive caddies
*Onboard LSI SAS1068E
*Dual power supplies
-SLOG - provision 2GB on a Seagate 600 Pro (100 or 200GB, still guessing on TBW needs; see the sizing sketch after this list)
-L2ARC - either a 200GB 600 Pro or a 240GB Intel S3500
-HGST Ultrastar 3TB 7200RPM SAS drives (0B26886)
-(Maybe, if needed) IBM ServeRAID M1015 card, crossflashed
-Likely 10GbE point-to-point cards linking the hosts and storage
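For the SLOG sizing, this is the back-of-envelope math I'm working from. The ~5 second transaction-group flush interval and the line-rate throughput figures are assumptions, not measurements; the idea is just that the SLOG only needs to hold a couple of seconds of worst-case incoming sync writes.

[CODE]
# Rough SLOG sizing sketch; txg interval and link saturation are assumptions.
txg_seconds = 5                            # assumed ZFS transaction-group flush interval
links = {"1GbE": 1 / 8, "10GbE": 10 / 8}   # rough GB/s at line rate

for label, rate_gb_s in links.items():
    # keep roughly two txg intervals of worst-case sync-write ingest
    need_gb = 2 * txg_seconds * rate_gb_s
    print(f"{label}: ~{need_gb:.1f} GB worst-case SLOG ceiling")
# 1GbE:  ~1.2 GB  -> a 2GB slice covers it
# 10GbE: ~12.5 GB -> only matters if the VMs can actually push line rate
[/CODE]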

1 machine will be dedicated to FreeNAS with all the spindles; 2 machines will be high-availability VMware hosts on an Essentials Plus kit.

I understand that the CPU muscle is overkill, but I like the fact that (and correct me if I am wrong) all three systems will be identical, so in the event of some forms of failure I should be able to pull the USB stick holding VMware/FreeNAS and play musical chairs for uptime (obviously moving the drives/controller along with it).
At $1,400 shipped base cost each (no spindles), this seems reasonable for that benefit.

I have no idea what kind of storage needs we will have (the file server has the potential to explode), so I plan to begin populating it with 6x 3TB 7200RPM Hitachi SAS drives and run them as mirrored vdevs striped in the zpool (RAID 10-ish). I figure I should be able to add vdevs if I need more IO/space and grow the zpool that way, while staying relatively safe against disk failures.
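To put rough numbers on that layout (the IOPS-per-disk figure is a generic 7200 RPM guess, not a measurement), here is a quick sketch of usable space and random IOPS before and after adding a fourth mirror pair:

[CODE]
# Quick sketch: 6x 3TB drives as three 2-way mirror vdevs, grown later by
# adding more mirror pairs. Numbers are rough planning figures only.

def usable_tb(mirror_vdevs, drive_tb=3):
    # each 2-way mirror contributes one drive's worth of usable space
    return mirror_vdevs * drive_tb

def random_iops(mirror_vdevs, per_disk=100):
    writes = mirror_vdevs * per_disk        # writes hit each vdev once
    reads = mirror_vdevs * 2 * per_disk     # reads can use both sides of a mirror
    return writes, reads

print(usable_tb(3), random_iops(3))   # starting point: ~9 TB, ~(300 w, 600 r)
print(usable_tb(4), random_iops(4))   # after adding one more mirror pair
[/CODE]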

I would like to use FreeNAS to serve storage via NFS instead of iSCSI for safety/simplicity (and stick with sync writes).
Since we do run a local Exchange/SQL server and I plan on adding a few extra VMs, I want to include a SLOG/ZIL and L2ARC as above.
No dedupe, but yes to LZJB compression (and encryption, since the X5650s have AES-NI).
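For reference, this is roughly what I intend to set at the dataset level. The name "tank/vmware" is just a placeholder, the FreeNAS GUI would normally do this rather than a script, and encryption is chosen at volume creation (GELI) rather than as a dataset property:

[CODE]
# Sketch of intended dataset properties; "tank/vmware" is a placeholder name
# and the FreeNAS GUI would normally set these rather than a script.
import subprocess

DATASET = "tank/vmware"      # placeholder
properties = {
    "sync": "always",        # belt and suspenders; ESXi already requests sync over NFS
    "compression": "lzjb",   # cheap on the X5650s
    "dedup": "off",          # no dedupe, as above
}
for prop, value in properties.items():
    subprocess.run(["zfs", "set", f"{prop}={value}", DATASET], check=True)
[/CODE]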


Should that configuration net me NFS sync performance similar to iSCSI targets?

Planning on cyberjock's scrub & SMART test schedule, e-mail alerts, and an APC UPS with a network card, plus lots of memtest and HDD burn-in before deploying.
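As I understand it, those scheduled jobs boil down to something like the following under the hood (pool and device names are placeholders; FreeNAS drives this from its own scheduler, not a script like this):

[CODE]
# What the scrub / SMART schedule amounts to; names are placeholders.
import subprocess

POOL = "tank"                                # placeholder pool name
DISKS = [f"/dev/da{i}" for i in range(6)]    # placeholder device names

def run_scrub():
    subprocess.run(["zpool", "scrub", POOL], check=True)

def run_long_smart_tests():
    for disk in DISKS:
        subprocess.run(["smartctl", "-t", "long", disk], check=True)
[/CODE]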

I'm not finding a lot of information on the LSI 1068E supporting 6Gb and playing nice in AHCI mode with FreeNAS. Does anyone have direct experience with these boards who can confirm?

I have time to test, and I plan to attempt several break/fix scenarios before deployment.

Anyone see any specific disaster recovery planning red flags thus far?

___

I am still figuring out my cold backup solution (we have none now, only Synology NAS replication/Ghost). This is an area where I still need to do more research. With 24-bay caddies, I wonder what the insertion ratings are and how realistic replication to a disk-rotation scheme is (I know people can frown on this). It saves me buying a tape drive, since this is a relatively budget installation. It could happen on FreeNAS, or I could just handle backups in one of the hosts if it's that big of a deal.
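For the disk-rotation idea, the mechanics I have in mind are just snapshot-and-send to a pool living on the rotated disk. All names below are placeholders, and this is a sketch of a full send, not incremental replication:

[CODE]
# Rough mechanics of replicating to a rotated backup disk; names are placeholders.
import subprocess
from datetime import date

SRC = "tank/vmware"          # placeholder source dataset
BACKUP_POOL = "backup1"      # placeholder pool on the rotated disk

snap = f"{SRC}@rotation-{date.today():%Y%m%d}"
subprocess.run(["zfs", "snapshot", snap], check=True)

# pipe a full send into a receive on the backup pool
send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
subprocess.run(["zfs", "recv", "-F", f"{BACKUP_POOL}/vmware"],
               stdin=send.stdout, check=True)
send.stdout.close()
send.wait()

# export the pool before pulling the disk out of the caddy
subprocess.run(["zpool", "export", BACKUP_POOL], check=True)
[/CODE]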

Probably buying Veeam or StorageCraft as well. Also considering cloud replication such as CrashPlan/Backblaze.

Depending on usage, I might buy another server with the same hardware, install Server 2012 R2 locally, and use it for OfficeScan/Spiceworks/backups/WDS/other miscellaneous management.

If you got this far, thank you for taking the time to read. I appreciate any constructive criticism.
 