How is this configuration for a dedicated NFS Share server?

Status
Not open for further replies.

Chirag K

Dabbler
Joined
Feb 6, 2017
Messages
10
Hello all,

FreeNAS newbie, putting together my first system..

I have been doing a lot of reading on this website and Google about hardware recommendations.
We are planning to build a high-availability, high-density storage box in a single chassis. Here are the parts I have identified; I would like your thoughts and suggestions.

Key requirements:
  • Large number of drives -- 24 down the line
  • High Speed networking
  • High availability - uptime is important, as this box would host lots of VMs
  • 6Gb/s SAS minimum (for 8087 cables and SATA SSD drives)
I really, really wanted to buy a used Dell server + a Norco instead of this whole setup, because I LOVE iDRAC's remote console (it has saved me countless hours of driving to the data center), plus it gives me dual power supplies.
I wanted to put the Intel 16-port SAS expander in the Dell server and run the cables over to an external Norco DS-24D case (via 8087-to-8088 cables), but I am stuck when it comes to the HBA card: I don't know if the LSI card below will work well, and more importantly, the Dell server would not have enough PCI slots. I need 4 (1 for the 10Gb SFP+ NIC, 1 for the LSI HBA, 2 for the SAS expanders), and cheap Dell servers only have 3 and no onboard 10GbE.

Having said that....

Chassis: Norco RPC-4224


HBA/RAID Card: LSI SAS 9211-8i
How I will use the ports:
  • The 1st 8087 port will be used as RAID 1 for the boot drives. Is that a good idea or a bad idea with ZFS on the root partition?
  • The 2nd 8087 port will go to the expander via an 8087-to-8087 cable

SAS Expander: Intel RES2SV240NC

Motherboard: ASUS Rack EPC612D8
Memory: Kingston DDR4 ECC, 32 GB to start with (the motherboard supports 128 GB, I think)
Network: Chelsio 10 GbE SFP+
Power supply: EVGA 750W 80 Plus Gold

Total Cost = $1338 + misc items + tax = approx $1,700


PS: Forgot to mention that the storage will be set up as an NFS share that only the servers can access. Users will not be able to get to it; they will only see the SR (storage repository) exposed in XenServer.

 

Chuck Remes

Contributor
Joined
Jul 12, 2016
Messages
173
First, please read the "sticky" threads at the top of this subforum. Definitely read the one on LSI cards. You can reflash the 9211-8i to IT mode (JBOD). Since you are asking this (fairly basic) question it implies you haven't completed your due diligence yet.

You can mirror your boot drives. It's a good idea especially if uptime is important. Let ZFS handle the mirroring though. Do not use any RAID cards at all; it will cost you.
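To make the ZFS-level mirroring concrete (a minimal sketch, not from the post above): if you give the FreeNAS installer two boot devices, the boot pool is created as a ZFS mirror, and you can verify that with zpool status. The check below assumes the default boot pool name freenas-boot.

[CODE]
# Minimal sketch: confirm the boot pool is a ZFS mirror (no RAID card involved).
# Assumes the default FreeNAS boot pool name "freenas-boot"; adjust if yours differs.
import subprocess

def boot_pool_is_mirrored(pool: str = "freenas-boot") -> bool:
    """Return True if `zpool status` shows a mirror vdev in the pool."""
    out = subprocess.run(
        ["zpool", "status", pool],
        capture_output=True, text=True, check=True,
    ).stdout
    return "mirror" in out  # mirror vdevs are listed as mirror-0, mirror-1, ...

if __name__ == "__main__":
    print("freenas-boot mirrored:", boot_pool_is_mirrored())
[/CODE]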

You mentioned this system will run lots of virtual machines and it will be an NFS server. What density of drives will you be using? What's the access pattern of the data? If you need lots of IOPS then look into ZFS mirroring.

You will likely also want to put your VMs on a separate pool. Again, give us more details.
 

Chirag K

Dabbler
Joined
Feb 6, 2017
Messages
10
Thanks for the reply. My question was specifically about the hardware - especially the LSI card with the Intel SAS expander. You said don't use RAID cards at all - so basically, if I flash the card to IT mode, it will act as just an HBA and no longer be a RAID card. Is my understanding correct? Once that is done, will it play nicely with ZFS and the Intel expander? Or do I need to do something else to get it to work without issues?

How about the other components? I am posting this basic question so that the experts here can point out any known issues or things to watch out for before I run into them.

My plan for the volumes is as follows (I assume I can team two 10GbE ports to get a 20GbE network in FreeNAS):

Volume 1 - used for storing the boot drives of the VMs - it will be a RAID 6 equivalent (RAIDZ2) with 6+2 500GB SSDs (which will give me 3 TB usable).
Volume 2 - data and databases - 6+2 4TB NAS HDDs (giving me 24 TB usable; see the rough math below).
That leaves me 6 open slots, so it should be good for now.
Ideally I would also add another volume for high-usage DBs (6+2 of 1 TB SSDs), but there is no budget for that.
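For reference, the rough usable-capacity math for those two 6+2 RAIDZ2 volumes (raw numbers only; actual ZFS usable space will be somewhat lower once metadata, TiB-vs-TB, and the keep-it-below-80%-full guideline are factored in):

[CODE]
# Back-of-the-envelope usable capacity for a 6+2 RAIDZ2 vdev: two drives'
# worth of parity, the remaining six hold data. Illustrative numbers only.
def raidz2_usable_tb(total_drives: int, drive_tb: float) -> float:
    return (total_drives - 2) * drive_tb

print("Volume 1 (8 x 0.5 TB SSD):", raidz2_usable_tb(8, 0.5), "TB")  # ~3 TB
print("Volume 2 (8 x 4 TB HDD): ", raidz2_usable_tb(8, 4.0), "TB")  # ~24 TB
[/CODE]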

It appears from my reading that it is not possible to add more drives to an existing RAIDZ2 (or Z1 or Z3) vdev. If that were possible it would be awesome - I know I can replace the drives with larger-capacity ones, but it would be better to just be able to add a drive and say, "Here ZFS, use it."

Any other suggestions and feedback are welcome.
 

snaptec

Guru
Joined
Nov 30, 2015
Messages
502
You can add vdevs. If you have 8 disks in a Z2, you can add another 8-drive vdev.
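As a minimal sketch of what that boils down to at the zpool level (the pool name "tank" and the da8-da15 device names are placeholders, not from this thread; note that a vdev added this way cannot be removed later):

[CODE]
# Illustration only: grow an existing pool by adding a second 8-disk RAIDZ2 vdev.
# Pool name and device names are placeholders.
import subprocess

pool = "tank"                                  # placeholder pool name
new_disks = [f"da{i}" for i in range(8, 16)]   # placeholder devices da8..da15

# Equivalent to: zpool add tank raidz2 da8 da9 ... da15
subprocess.run(["zpool", "add", pool, "raidz2", *new_disks], check=True)
[/CODE]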

How many IOPS / how much bandwidth do you need?
For VM storage, mirrors are better than RAIDZ2 (more IOPS)...
In particular, don't throw heavily used databases on the slow RAIDZ2 - put them on mirrors / SSDs.
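As a rough illustration of why mirrors win for VM storage: random IOPS scale roughly with the number of vdevs, not the number of disks. The per-disk IOPS figure below is an assumption for illustration, not a measurement.

[CODE]
# Rule-of-thumb comparison: 8 HDDs as one RAIDZ2 vdev vs. the same 8 HDDs as
# four 2-way mirror vdevs. Each vdev contributes roughly one disk's worth of
# random write IOPS. Assumed ~150 random IOPS for a 7200rpm drive.
HDD_IOPS = 150

def pool_random_iops(num_vdevs: int, per_disk_iops: int = HDD_IOPS) -> int:
    return num_vdevs * per_disk_iops

print("1 x RAIDZ2 (8 disks): ", pool_random_iops(1), "IOPS")  # ~150
print("4 x mirrors (8 disks):", pool_random_iops(4), "IOPS")  # ~600
[/CODE]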


Sent from iPhone using Tapatalk
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
FWIW, the Intel SAS expander has a Molex power plug and AFAIK doesn't have to be installed in a PCI slot.
 

Chirag K

Dabbler
Joined
Feb 6, 2017
Messages
10
Thanks for the replies and suggestions.

What's your opinion on this:

http://www.ebay.com/itm/142264169040

Supermicro 4U 36 Bay Storage Server SAS2 FREENAS 2x E5-2660 8 Core SATA-Dom 64GB

It has everything I want, except that it's used and I will have to add a 10Gb network card. On the positive side, it's a 36-bay chassis, so it leaves room for expansion.

I saw cheaper ones too, but most of them do not support a 6Gbps backplane; they are only 3Gbps.
 