Hello everyone,
I'm relatively new to FreeNAS and planning to replace an EMC-grade NAS (NFS) used as VM storage, since it currently gives us USB-stick-level performance.
Our throughput requirements are not very high - at the moment a simple Debian machine with a 2-disk RAID 1 (mdadm) and a single gigabit NIC satisfies our needs.
I'm planning to build a Supermicro 4-node system plus an identical 4-node backup system, kept in sync via ZFS snapshot replication. In the event of a failure I would add the dedicated storage IP to the backup system.
The reason for 4 nodes: the workload is split across 4 systems, so a failure of a single node only affects part of the environment.
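The replication and failover plan above could be sketched roughly as follows. This is only a minimal illustration - the pool, dataset, snapshot, host, interface, and IP names are all assumptions, not taken from the actual setup:

```shell
#!/bin/sh
# Hypothetical names -- adjust to your environment.
POOL=tank
DS=vmstore
BACKUP_HOST=backup-node1

# Take a dated snapshot and send it incrementally to the backup system.
TODAY=$(date +%Y%m%d)
YESTERDAY=$(date -v-1d +%Y%m%d)   # FreeBSD date syntax
zfs snapshot "${POOL}/${DS}@${TODAY}"
zfs send -i "${POOL}/${DS}@${YESTERDAY}" "${POOL}/${DS}@${TODAY}" | \
    ssh "${BACKUP_HOST}" zfs receive -F "${POOL}/${DS}"

# On failover, bring up the dedicated storage IP on the backup node
# (FreeBSD syntax; the interface and address are placeholders):
# ifconfig igb0 alias 192.0.2.10 netmask 255.255.255.0
```

FreeNAS can also schedule periodic snapshots and replication tasks from the GUI, which would replace the manual send/receive above.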
Configuration provided by our distributor:
* System: Supermicro 4U chassis F424AS-R1K28B with Supermicro X10DRFR board
* CPU per node: 1 x Intel Xeon E5-2603 v4 (6 cores, 1.7 GHz, 15 MB cache)
* MEM per node: 2 x 16GB (ECC Registered DDR4 2133)
* OS-STORAGE per node: 2 x Supermicro SATA DOM 32GB mirrored
* VM-STORAGE per node (nodes 1-3): 8 x 2 TB WD RE (WD2004FBYZ) - 2 x RAID 10 - main data storage for filers etc.
* VM-STORAGE per node (node 4): Intel SSD 535 Series 240 GB 2.5" SATA 6 Gb/s - 2 x RAID 10 - storage for databases (IOPS)
* ETHERNET per node: 2 x Intel i350-AM2 (82574L) (onboard) - 1 x management network, 1 x storage network (since LACP doesn't double the bandwidth ;))
I don't think I need any caching SSDs or more RAM for storage of this size - am I right about this assumption?
Are there any pitfalls in this configuration?
Thank you in advance!
Kind regards