I have been doing a lot of reading and small-scale testing of FreeNAS over the past few weeks (including the EXCELLENT intro PDF/PowerPoint doc; I have never seen such a good, detailed intro to a community before!). I'm hoping for some recommendations or guidance on my new setup, as I'm slowly buying parts now.
Load-wise, what I want from this FN build (plain FreeNAS only will run on this box; no VMs, jails, or FN plugins):
+ hosting of ESXi datastores (for my 2x local 10 Gbit ESXi "lab hosts", nothing production)
+ SMB/CIFS file shares for my Windows work PCs (I work from home; the PCs have 1 Gbit Ethernet)
+ backup space for larger offsite files that will be sent to this FreeNAS
+ maybe my home DVR recording to this FreeNAS (via NFS or iSCSI) - maybe.
My disks and enclosures will be slowly migrated from my current setup (a mess of RAID cards plus a Windows Server 2008 R2 box linked to Supermicro enclosures: a 4U 24-bay, a second 4U 24-bay, and a 2U 12-bay). It has been surprisingly fast/reliable for 4+ years, though, and the important data is backed up to B2.
(All equipment will live in a 42U rack.)
The FreeNAS host will run on:
- a 4U 24-bay chassis (direct-attached backplane, 846-TQ)
- X9DR3-LN4F+, 192 GB ECC RAM, 2x e5-2602v2 CPUs
- all HDDs will be 2 TB / 3 TB / 4 TB HGST, mostly enterprise SATA/SAS disks (I already have ~48x, plus about 12x more sitting on a shelf)
My main question is which HBAs / RAID cards / expanders I should be considering, and how I should connect and lay out the external 4U and 2U "disk shelves" (which will be physically racked atop each other in the 42U rack) - or whether I should break this out into 2 or 3 independent FreeNAS systems (I'm trying to avoid that to save on power/hardware costs, but am open to it).
(I'm not necessarily looking for the least expensive route, but rather the better, more tested route.)
On the main 4U host (the system running FN), I'm currently considering:
8x of the 24 bays connected to the X9 motherboard via the Intel C606 chipset, like so:
- local bay 1: my FreeNAS SSD boot device
- local bays 2,3: mirrored S3700 SSDs as SLOG/ZIL (may change this to a P3700 NVMe PCIe card)
- local bay 4: L2ARC (on some SSD)
- local bays 5,6,7,8: xTB HGST disks
The remaining 16 bays: LSI SAS 9207-8i -> Intel RES2SV240 expander (and xTB HGST disks).
To uplink this host to the disk shelves, an LSI SAS 9207-8e:
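To make the local SSD-bay plan concrete, the SLOG/L2ARC parts would be attached roughly like this (a sketch only; "tank" and the da1-da3 device names are placeholders, not my real pool/devices):

```shell
# hypothetical: wire the local SSD bays into an existing pool "tank"
zpool add tank log mirror da1 da2   # bays 2,3: mirrored s3700s as SLOG (ZIL)
zpool add tank cache da3            # bay 4: a single SSD as L2ARC
```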
"Disk shelf #1" (4U Supermicro 24-bay):
- bays 1-24: xTB HGST disks
- uplink: SFF-8088 port #1 of the host's 9207-8e, down to either an Intel RES2SV240 (OR a SM BPN-SAS2-846EL1 expander backplane bought instead of the RES2SV240)
"Disk shelf #2" (2U Supermicro 12-bay):
- bays 1-12: xTB HGST disks
- uplink: SFF-8088 port #2 of the host's 9207-8e, down to its BPN-SAS2-826EL1 expander backplane
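For context on the uplinks described above, my rough back-of-envelope on link vs. disk bandwidth (the ~180 MB/s sequential per 7200rpm HGST disk is my ballpark assumption, not a measurement):

```shell
#!/bin/sh
# back-of-envelope: one x4 SAS uplink vs the disks behind it
lanes=4              # one SFF-8088 cable carries 4 lanes
sas2_lane_mb=600     # 6 Gbit/s per lane, 8b/10b encoding -> ~600 MB/s usable
sas3_lane_mb=1200    # 12 Gbit/s per lane -> ~1200 MB/s usable
disks=24
disk_mb=180          # assumed sequential throughput per 7200rpm disk

echo "SAS2 x4 uplink: $((lanes * sas2_lane_mb)) MB/s"
echo "SAS3 x4 uplink: $((lanes * sas3_lane_mb)) MB/s"
echo "24 disks (seq): $((disks * disk_mb)) MB/s"
```

So a single SAS2 x4 uplink (~2400 MB/s) could in theory be the ceiling for a fully-streaming 24-bay shelf, which is part of why I'm unsure about question 1 below.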
Two main questions (but please address other elements too, as I have not ordered the "disk infrastructure" yet):
1- Should I be looking at SAS3-generation cards for performance (i.e., an LSI 9300-8i vs. the 9207-8i I'm going with), even though all my expanders will be SAS2? Or should I consider spending the money on SAS3 expanders as well?
2- I'm guessing I should keep vdevs and pools isolated to a single chassis/disk shelf? I.e., I should NEVER have a 2-vdev pool consisting of vdev #1 with disks in one shelf and vdev #2 with disks in the other shelf, correct?
(I.e., what would happen if *ONLY* "disk shelf #2" were to lose power - say its PSU dies - while everything else keeps running fine?)
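To illustrate exactly the layout I'm worried about in question 2, a hypothetical cross-shelf pool ("tank" and all da* device names are made up for the example):

```shell
# hypothetical cross-shelf pool (device names are placeholders):
# vdev #1 uses disks in shelf #1 (4u), vdev #2 uses disks in shelf #2 (2u).
# my understanding: if shelf #2 loses power, all of vdev #2's disks vanish
# at once - far more than raidz2 can absorb - so the whole pool goes down.
zpool create tank \
  raidz2 da10 da11 da12 da13 da14 da15 \
  raidz2 da20 da21 da22 da23 da24 da25
```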
(Note: I do have a lot of time for this, so I will be slowly adding drives to this system on a test bench and testing/benchmarking many scenarios for a few weeks before committing. For now I'm really just trying to get the hardware and concepts right, to avoid wasting money on the wrong parts/setup.) I will post my results/journey here, or in a new thread.
Thank you.