vegaman
Explorer · Joined Sep 25, 2013 · Messages: 58
I'm just about ready to rebuild my storage and I'm trying to make a call on how to configure it in terms of RAIDZ level and vdev width.
The main use is as a media server, followed by local backups (which will also be sent offsite), and I'll likely move things from my Linux server (home automation, backup management scripts... nothing too resource-hungry) onto it if I'm happy with the VM/docker support in FreeNAS these days.
Server is:
Supermicro X10SL7-F
32GB ECC RAM
SAS 9211-8i card of some description - I think it was an IBM rebrand, not that it matters much now that it's got IT-mode firmware on it
Norco RPC-4220 case
20x 6TB WD Red drives
1x SSD as a boot drive, I've got another and a spare SATA port if it's worth mirroring that
Plan to add 10GbE once I can decide on a switch. I definitely don't need more than 24 ports for now, and could make do with a lot fewer if it's cheaper; a bunch of the things I have plugged in are low-traffic and could hang off a second 1Gbit switch quite happily.
I'd previously been adding drives as I could, then the box sat untouched/unused while I travelled for work. Now that I'm home again for a while, I've bought more RAM (I was only on 8GB previously), some Noctua iPPC-3000 fans, and a whole bunch of drives. I'm just in the process of juggling data between drives so that all the new drives are free, hopefully without having to pull down backups. Once that's done it will be time to rebuild my NAS.
Getting advice from workmates who work more in the storage space is difficult because we only deal with enterprise gear, where they don't have to make decisions this far down the stack (nor with such cheap gear). One of them said to just chuck all the drives in one massive RAIDZ2. Would I be better off doing 2x 10-drive RAIDZ2 vdevs instead?
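For what it's worth, here's a quick back-of-envelope sketch (Python) of the usable-capacity trade-off between the two layouts I'm weighing up. It ignores ZFS metadata, padding, and the keep-below-~80%-full guideline, so treat the numbers as upper bounds rather than what you'd actually see:

```python
# Rough usable-capacity comparison for 20x 6TB drives in RAIDZ2.
# These figures ignore ZFS overhead (metadata, padding, fill guidelines),
# so they're upper bounds, not real-world numbers.

def raidz_usable_tb(drives_per_vdev, drive_tb, parity=2):
    """Approximate usable TB for one RAIDZ vdev (RAIDZ2 by default)."""
    return (drives_per_vdev - parity) * drive_tb

# Option 1: one 20-wide RAIDZ2 vdev
single_wide = raidz_usable_tb(20, 6)     # (20 - 2) * 6 = 108 TB

# Option 2: two 10-wide RAIDZ2 vdevs
two_vdevs = 2 * raidz_usable_tb(10, 6)   # 2 * (10 - 2) * 6 = 96 TB

print(single_wide, two_vdevs)
```

So splitting into two vdevs costs one extra vdev's worth of parity (12 TB here) in exchange for the second vdev.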