Hi all,
I'm new here and looking to implement FreeNAS in our company.
I have a nice project to re-use 8 storage enclosures, each with 16 disk slots.
These machines came from a rendering and storage setup.
They had 10GBASE-CX4 network cards connected to two HP 6400CL switches.
We ripped out everything except the power supply, SATA backplane, and fans.
As a test, I have built one of these with a single Xeon E5-2620 v3 processor on an ASUS Z10PE-D16 motherboard.
The drives are 4x WD Red 6TB.
The motherboard has two CPU sockets and can hold up to 1TB of DDR4 RAM.
It also has 10x SATA3 6Gb/s ports and one M.2 connector.
Using the M.2 connector does cost you one SATA port, though.
The plan is to start with this setup:
- 1x CPU with 32GB ECC RAM (see the quick RAM check after this list)
- 4x 6TB drives connected to the SATA ports, using an SFF-8470 to SATA breakout cable
- Re-using the 10GBASE-CX4 cards and HP switches to provide 10Gb/s between the storage and our backup server
- 32GB M.2 SATA SSD as the system drive (or as a SLOG device, with a USB stick as the boot/system disk)
- 2x PCIe SATA expansion cards (4 ports each)
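On the 32GB RAM figure: here is a quick sanity check in Python against the often-quoted FreeNAS rule of thumb of 8GB base plus roughly 1GB of RAM per TB of raw storage (a community guideline, not a hard requirement):

```python
# Rough RAM sizing check for the initial build.
# Rule of thumb (community guideline): 8 GB base + ~1 GB per TB of raw storage.

drives = 4
drive_tb = 6
installed_gb = 32

raw_tb = drives * drive_tb   # 24 TB raw
guideline_gb = 8 + raw_tb    # 8 + 24 = 32 GB
print(f"Raw storage: {raw_tb} TB")
print(f"Guideline RAM: {guideline_gb} GB, installed: {installed_gb} GB")
```

So 32GB lands right on the guideline for the initial 24TB; adding drives later would mean adding RAM as well.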
We could link the HP switches to a CX4-to-SFP+ media converter, then run 10Gb fibre from the converter to a 10Gb port on one of our LAN switches.
My idea is to let specific workstations connect to the storage via the 10GbE network, while normal office desktops connect to the gigabit interfaces of the storage machines.
This way, I hope we can provide more bandwidth to multiple clients simultaneously.
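To put rough numbers on that idea (back-of-the-envelope only: this ignores protocol overhead and assumes the pool itself can feed the network):

```python
# Per-client throughput if a link's bandwidth is shared evenly.
# Real-world SMB/NFS throughput will land below these line-rate figures.

def per_client_mb_s(link_gbit: float, clients: int) -> float:
    # Gbit/s -> MB/s (decimal), split evenly across clients
    return link_gbit * 1000 / 8 / clients

for n in (1, 2, 4):
    print(f"{n} workstation(s) sharing 10GbE: ~{per_client_mb_s(10, n):.0f} MB/s each")
print(f"1 desktop on gigabit: ~{per_client_mb_s(1, 1):.0f} MB/s")
```

Even shared four ways, each 10GbE workstation would still see roughly 2.5x what a single gigabit desktop gets.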
This setup looks very flexible to me, as we can add memory, CPUs, and disks as we go.
Does this look like a good setup to start with?
Reading up on RAID/vdev layouts, I realise I might need more drives initially to allow for an optimal configuration and the ability to extend capacity afterwards.
I am still trying to figure out which RAID-Z/vdev configuration would give a balanced setup (reliability vs. performance).
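As a starting point, here is a parity-only estimate of usable space for a few candidate layouts in a 16-slot chassis with 6TB drives. These layouts are just my own candidates, and real usable space will be lower once metadata, padding, and the usual advice to keep a pool under ~80% full are factored in:

```python
# Parity-only usable-capacity estimates; actual ZFS usable space is lower.

DRIVE_TB = 6

# layout name -> (vdev count, disks per vdev, parity disks per vdev)
layouts = {
    "1x raidz1 of 4":  (1, 4, 1),
    "1x raidz2 of 6":  (1, 6, 2),
    "2x raidz2 of 8":  (2, 8, 2),
    "1x raidz2 of 16": (1, 16, 2),
}

for name, (vdevs, disks, parity) in layouts.items():
    usable_tb = vdevs * (disks - parity) * DRIVE_TB
    slots = vdevs * disks
    print(f"{name}: {slots} slots, ~{usable_tb} TB usable before overhead")
```

Since a pool grows by adding whole vdevs, a layout like 8-disk raidz2 vdevs would let me start with one vdev and add a second identical one later to fill the chassis.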
But maybe I should ask that question in a separate topic. :)