Hello everyone :)
I'm planning a new "dedicated" FreeNAS build. I've used FreeNAS a lot on XenServer as a test bed for NFS shares etc., but now I want it as my dedicated storage (the DS410 with 2 TB disks is filled to the top).
My build:
Case: Norco RPC-4224 (the well-known 24-disk case)
Motherboard: Supermicro MBD-X10SRL-F-O
Memory: Samsung M393A4K40BB0-CPB*
CPU: Intel Xeon E5-1620 v3 - Boxed
ZIL: 2x Intel DC S3700 Series, 100 GB, RAID 1*
HDDs: 4x RAID-Z2 vdevs of 6x Western Digital Red 4 TB (WD40EFRX)
PSU: An XFX 1000 W model that's lying around (so the expected load is ~30%, with enough headroom for startup)
L2ARC: 2x Samsung 840 EVO (120 GB) that are lying around (RAID 0?)
The server's main purpose will be a media vault for all my photos/videos, backups of all my computers (and maybe My Cloud, more on that later), VM storage (NFS shares) for XenServer, and last but not least, possible off-site backups from friends, which can only be done over the gigabit network; eSATA is unfortunately not an option in my rack.
Now, my main questions:
I've read everywhere that there's a rule of thumb on the RAM: 1 GB of RAM for every TB of HDD. As I'm using at least 4 TB drives (starting with one RAID-Z2 vdev of 6) due to the price point (6 TB is way too expensive for now, maybe in the future), a fully populated case (24x 4 TB = 96 TB) means I'll need "at least" 96 GB of RAM. Things like dedup/encryption are also on the list, and possibly higher TB/drive counts.
32 GB RAM modules are only about 20 bucks more expensive than two 16 GB modules, so I think it's a wise investment to buy 32 GB modules (starting with one for the first vdev, slowly expanding as needed) so I can max out at the almighty 256 GB (8x 32 GB). Or would 128 GB be the most I'm ever going to need?
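For reference, the rule-of-thumb math for this build works out roughly like this (assuming the 1 GB RAM per TB of raw storage guideline, plus the often-quoted ~5 GB of RAM per TB of deduplicated data; both are rough sizing guides, not hard limits):

```
24 bays x 4 TB                  =  96 TB raw   ->  ~96 GB RAM (rule of thumb)
4 vdevs x (6 - 2) x 4 TB        =  64 TB usable (RAID-Z2)
dedup on 64 TB at ~5 GB per TB  ~= 320 GB RAM for dedup tables alone
```

If those numbers are right, full-pool dedup would blow past even 256 GB, which makes me think dedup is only realistic on a small subset of the data.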
The XenServer lab (or ESXi) will generate a lot of writes, and data integrity is very high on the list, as the VMs will be running off the FreeNAS machine instead of local RAID (the hosts have 16x 10K SAS drives for the heaviest I/O, but I prefer to use a dedicated ZFS machine for bulk storage and non-high-I/O stuff like ISOs, boot devices, etc.).
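If I understand the ZFS docs correctly, I can also force synchronous semantics per dataset for the VM storage; something like this (the pool and dataset names are just placeholders), please correct me if that's cargo-culting:

```
# Force every write to the VM dataset through the ZIL (and the SLOG, once
# one is attached), trading some throughput for integrity:
zfs set sync=always tank/vmstore

# NFS from XenServer should already request sync writes, so check first:
zfs get sync tank/vmstore
```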
The ZIL will be a mirror of Intel DC (Datacenter?) SSDs that are supposed to be very write-resistant (PBs of data lifetime). Do I need to make a RAID 1 array on the motherboard RAID controller, or is FreeNAS able to make a software mirror for the ZIL? (Too much hassle to try out in the VM; I prefer to do it right on the dedicated machine.)
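From what I've read so far, ZFS handles this itself: you add the pair as a mirrored log vdev and skip the motherboard RAID entirely (FreeNAS apparently wants raw disks, not fake-RAID arrays). Something like this, assuming a pool named tank and whatever device names FreeNAS hands out:

```
# Attach the two S3700s as a mirrored SLOG; da1/da2 are placeholders:
zpool add tank log mirror da1 da2
zpool status tank
```

Can someone confirm that's the right way on FreeNAS, or should it all go through the GUI?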
Secondly, almost the same question for the L2ARC: can FreeNAS make a software RAID 0 array for the L2ARC, or do I need to make it myself on the RAID controller? I've only seen the multiple-boot-medium install, which I will be doing too, for maximum safety.
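Again, if I've read correctly, cache devices can't be mirrored at all; ZFS just stripes reads across however many you add, and losing one is harmless because the L2ARC contents are disposable. So presumably just (placeholder names again):

```
# Add both 840 EVOs as cache devices; ZFS stripes across them automatically:
zpool add tank cache da3 da4
```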
As I'm slowly building it out (I can't just throw down 7K for a server :P), can I just expand my zpool with a live dataset on it (the running VMs), or do I need to "cut them off" so all transactions to the zpool are stopped before expansion?
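My understanding is that expansion is an online operation, i.e. something like the line below while the VMs keep running, with the caveat that a vdev can never be removed again, so it has to be right the first time. Is that correct?

```
# Grow the pool by a whole new six-disk RAID-Z2 vdev, live:
zpool add tank raidz2 da10 da11 da12 da13 da14 da15
```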
Lastly, the FreeNAS server will be running from a 1.5 kVA APC rack UPS in case the power outlet decides it doesn't like my hungry server. For Windows there's a software install that notifies you and shuts down the computer automatically; is this also possible in FreeNAS?
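From poking around the docs, FreeNAS seems to ship the Network UPS Tools (NUT) service for exactly this, configured under Services -> UPS in the GUI. Under the hood that boils down to a ups.conf entry roughly like this (the identifier and description below are made up):

```
[apc1500]
    driver = usbhid-ups
    port = auto
    desc = "APC Smart-UPS 1500VA"
```

upsmon then watches the battery state and triggers a clean shutdown when it goes critical, which sounds like exactly the Windows behaviour I described.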
I'm not really a BSD guy, but I do know a little about CentOS (running an Amazon EC2 VPS with a TLS-secured website), so I know my way around the CLI, but don't expect me to list all the subdirectories under /xxx/xx/xx :)
I'm looking forward to your answers and suggestions. It will be my first "big build" after some Synology machines that are painfully slow due to RAID 5 (~10-20 MB/s :( ), but that's their price point: you can buy a nice Synology for 3K, but you don't get ZFS!