NAS suggestions for an ESX lab


plissje

Dabbler
Joined
Mar 6, 2017
Messages
22
Hi guys,
After a good while of reading, I've finally decided to post my questions now that I more or less understand the path I'd like to take (or maybe not :)).

I work at a small office (~10 people) and we are looking into a NAS system that will hold our testing lab.
The lab is used by only about half of the people at the office, and almost never by more than 2 (max 3) people at once. It runs all the regular domain components (DC, Exchange) and an application server that hosts SQL Server and some other things. In addition, we have our system-specific VMs that we run our tests on.
Currently we have a virtualized FreeNAS running with 4 x 1TB disks in RAID-10 and a single SSD as ZIL (I know, I know, this is basically playing Russian roulette; the setup exists with the understanding that it might fail any day, and since it's mostly there for me to get the hang of FreeNAS, and it's not a production environment and never will be, that's sort of OK). This server resides on one of our 2 ESX hosts, which make up the lab infrastructure. Storage for the VMs is currently provided via NFS.
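
For reference, the ESXi side of that NFS wiring is just a couple of one-liners; the host address, share path, and datastore name below are placeholders rather than our actual values:

# mount a FreeNAS NFS export as an ESXi datastore (run on the ESXi host)
esxcli storage nfs add -H 192.168.10.10 -s /mnt/tank/vmds -v freenas-nfs
# confirm the datastore is mounted
esxcli storage nfs list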

Performance is quite fine for our needs up until we reach a certain number of VMs, and then it falls off pretty hard. When that started happening, and after I'd played around with the resources on our FreeNAS VM, I decided it was time to build a dedicated (mostly low-budget) machine to host FreeNAS. From what I've read and seen, this should help with our performance issues.

The plan is to build the FreeNAS system and connect it via a separate switch to NICs in the ESX hosts dedicated solely to the storage network.
The system will boot off a USB thumb drive and run our current 4-disk mirrored pool with 2 mirrored SSDs as ZIL.
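
On the FreeNAS box, that pool would be laid out roughly like this from the shell (just a sketch; 'tank' and the da* device names are placeholders, and the GUI ends up building the same thing):

# two mirrored pairs striped together (the 'RAID-10' layout)
zpool create tank mirror da0 da1 mirror da2 da3
# mirrored SLOG device to absorb the sync writes NFS generates
zpool add tank log mirror da4 da5
# verify the vdev layout
zpool status tank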

I've already bought an ASRock C2750 board (that was before I knew about the Atom C2000 die-off issue; I'm currently checking with the store/ASRock support about my options). So I guess for now this will be my starting point.
In the RAM department, I was going for Crucial 2 x 8GB ECC unbuffered DIMMs. They seem to suit both my budget and my needs.
The only question here is that I'm not really sure whether I need 16 or 32 GB. Since I'm working with a pretty small amount of storage and no L2ARC, 16GB should suffice, right?
For the case, the SilverStone DS380B seems like a popular pick and is within budget. I haven't picked a PSU yet, but I guess most 80 Plus Gold units would do.

I'd appreciate any remarks regarding the planned build and usage beyond the RAM question :).

I also have 2 more issues I can't seem to 100% decide on:
1) Should I keep my RAID-10 setup or switch to something like RAIDZ? From what I've read, RAID-10 should be the fastest option for hosting VMs.
2) Should I stick with NFS or move to iSCSI? Opinions seem quite split on this topic. Most do agree that iSCSI is faster, but I don't think my budget allows me to keep to the 50% pool-occupancy rule for block storage (see the sketch below).
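
If I did go the iSCSI route, my understanding is that I'd carve out a sparse zvol and export that, keeping total pool usage low; something like the following (the size, block size, and names are just for illustration):

# sparse zvol to export over iSCSI; block storage wants pool occupancy kept under ~50%
zfs create -s -V 800G -o volblocksize=16K tank/esx-iscsi
# keep an eye on the CAP column as the VMs fill up
zpool list tank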


Thanks to anyone who read the whole thing and is willing to dedicate some time to sharing their wisdom. It is greatly appreciated.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
I recommend installing 32GB, especially if you're considering using the FreeNAS server to provide block storage via iSCSI. The documentation states "For iSCSI, install at least 16 GB of RAM if performance is not critical, or at least 32 GB of RAM if good performance is a requirement" (emphasis added). The additional memory will help performance even if you stick with NFS datastores.
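
Once the box is running, you can check how much of that RAM the ARC is actually using; FreeBSD exposes the counters via sysctl (values are in bytes):

# current ARC size and its configured ceiling
sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max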

Note that the standard advice around here is to use 1 x 16GB stick of RAM in lieu of 2 x 8GB sticks when installing 16GB in a system like the C2750. This gives you the future option of installing the maximum memory the system supports (64GB = 4 x 16GB UDIMMs) without having to discard the smaller-capacity memory modules.

You are correct that, in general, mirrored configurations provide better performance when used as datastores.

Good luck!
 
Joined
Dec 25, 2016
Messages
9
Possibly a silly question, but better to ask than not know: is there any significant performance difference between a 1x16GB and a 2x8GB RAM configuration?
 

plissje

Dabbler
Joined
Mar 6, 2017
Messages
22
The question still remains whether I should go the iSCSI route given my limitations :). Also, it seems 16GB ECC UDIMM sticks are kind of hard to come by and pretty pricey. The only ones I could find are by Kingston, and people recommend avoiding their sticks.

As for the 1x16GB vs 2x8GB question: as far as I know, those differences (single- vs dual-channel in general) are only noticeable in high-end systems working under heavy load.
Most of us will probably never see the difference, which is why my decision here is based mostly on budget and future upgrades.
 
Joined
Dec 25, 2016
Messages
9
Thanks for the follow-up. Hopefully I can follow the consensus logic and take the 1x16GB route (for future expandability), but like you, I'm struggling with the price of memory on the QVL. I'll just have to save some more and hope for a good tax return, I guess. Do it once, do it right, and whatnot.

Good luck with your build!
 