Hi guys,
After a while of reading, I've finally decided to post my questions, now that I more or less understand the path I'd like to take (or maybe not :)).
I work at a small office (~10 people) and we are looking into a NAS system to host our testing lab.
The lab is used by only about half of the people at the office, and almost never by more than 2 (max 3) people at once. It runs all the regular domain components (DC, Exchange) and an application server hosting SQL and some other stuff. In addition, we have our system-specific VMs that we run our tests on.
Currently we have a virtualized FreeNAS with 4x 1TB disks in RAID-10 (striped mirrors) and a single SSD as ZIL/SLOG. (I know, I know: a single SLOG device is basically Russian roulette. The setup exists with the understanding that it could fail any day; it's mostly there for me to get the hang of FreeNAS, and it isn't, and never will be, a production environment, so that's sort of OK.) This server resides on one of our two ESXi hosts, which make up the lab infrastructure. Storage for the VMs is currently provided via NFS.
Performance is quite fine for our needs up until we reach a certain number of VMs, at which point it falls off pretty hard. When that started happening, and after I had tweaked what I could on our FreeNAS VM, I decided it was time to build a dedicated (mostly low-budget) machine to host FreeNAS. From what I've read and seen, this should help with our performance issues.
The plan is to build the FreeNAS system and connect it via a separate switch to NICs in the ESXi hosts dedicated solely to the storage network.
The system will boot off a USB thumb drive and run our current 4-disk RAID-10, with two mirrored SSDs as the ZIL (SLOG).
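For what it's worth, attaching the mirrored SLOG to an existing pool is a single command. A sketch, assuming the pool is called `tank` and the SSDs show up as `ada4`/`ada5` (all three names are placeholders for this build):

```shell
# Attach two SSDs as a mirrored SLOG (dedicated ZIL) to pool "tank".
# "tank", ada4 and ada5 are placeholders; check camcontrol devlist first.
zpool add tank log mirror /dev/ada4 /dev/ada5

# Confirm the log vdev shows up as a mirror
zpool status tank
```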
I've already bought an ASRock 2750 board (that was before I knew about the die-offs; I'm currently checking with the store/ASRock support about my options), so I guess for now this is my starting point.
In the RAM department, I was going for a Crucial 2x8GB ECC unbuffered kit, which seems to suit both my budget and my needs.
The only open question is whether I need 16 or 32 GB. Since I'm working with a pretty small amount of storage and no L2ARC, 16 GB should suffice, right?
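As a sanity check on 16 vs 32 GB, the old FreeNAS community rule of thumb (a heuristic, not a hard requirement) is roughly an 8 GB baseline plus about 1 GB of RAM per TB of raw storage:

```shell
# Rough RAM sizing via the common FreeNAS rule of thumb:
# 8 GB baseline + ~1 GB per TB of raw storage (heuristic only).
raw_tb=4                              # four 1 TB disks
base_gb=8
suggested_gb=$((base_gb + raw_tb))
echo "$suggested_gb GB suggested"     # 12 GB, so a 16 GB kit leaves headroom
```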
For the case, the SilverStone DS380B seems like a popular pick and is within budget. I haven't picked a PSU yet, but I guess most Gold-rated units would do.
I'd appreciate any remarks on the planned build and usage beyond the RAM question :).
I also have two more issues I can't quite decide on:
1) Should I keep my RAID-10 (striped mirrors) setup or switch to something like RAIDZ? From what I've read, RAID-10 should be the fastest option for hosting VMs.
2) Should I stick with NFS or move to iSCSI? Opinions on this seem quite split. Most agree that iSCSI is faster, but I don't think my budget lets me stick to the 50% free-space rule.
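On the 50% rule: for block storage (iSCSI zvols) the usual advice is to keep the pool under about half full so fragmentation doesn't wreck write performance, which effectively halves usable space. A sketch of what that sizing would look like for this pool (the dataset name and the 16k volblocksize are placeholder assumptions):

```shell
# RAID-10 of 4x 1 TB gives ~2 TB usable; the 50% rule means
# provisioning at most ~1 TB of zvols for iSCSI.
# "tank/vmstore" and the 16k volblocksize are placeholder choices.
zfs create -s -V 900G -o volblocksize=16k tank/vmstore

# Keep an eye on pool utilization as it fills up
zpool list -o name,size,alloc,cap tank
```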
Thanks to anyone who reads the whole thing and takes the time to share some wisdom. It's greatly appreciated.