DemohFoxfire
Dabbler
- Joined
- May 2, 2023
- Messages
- 11
4 months into inheriting a small datacenter (~4 racks) running mostly HP G6/G7 24-bay servers over 10Gb iSCSI, with each FreeNAS/TrueNAS server running all of its drives in a single RAIDZ2. These are iSCSI targets for ESXi, and it's a 50/50 spread of 2.5" SAS disks and consumer-level 2.5" SSDs. I'm not planning on touching that right now. The 5 or so Veeam servers are all Windows servers with individual single disks (not even JBOD) as backup repositories. I'm writing the entire setup off as a total loss and going to rebuild from scratch over the next 18 months.
I've been around FreeNAS/TrueNAS, having serviced / repaired / replaced a few dozen of these 24-bay units, but I was always handed the hardware ("this is what you get"), so I was never on the spec or tuning side of things. I'd call myself a bit inexperienced there.
The Goal:
- A 50 TB+ server that can get close to or exceed 25GbE, expandable to 100+ TB.
- The ability to receive multiple backup jobs at once. The current company method is backup chains 20+ jobs long across multiple servers *crying here*
- (future) A 2nd server as a backup copy destination for redundancy / different retention policies.
- A modular design: I'll likely need to deploy many more than just 2, but this setup would be the largest single storage capacity.
I've been reading a lot about optimizing for 10GbE and faster speeds, optimizing ZFS, the network, etc. I think I need to take a step back and get some fundamentals down. Instead of emulating the current design with RAIDZ2, maybe I should just be using mirrors?
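To frame the mirrors-vs-RAIDZ2 question for myself, here's the back-of-the-envelope math for an 8-drive starting pool. Drive size is a made-up placeholder (12 TB), and real usable space will be lower after ZFS metadata/slop overhead:

```python
# Rough usable-capacity / vdev-count comparison for an 8-drive pool.
# Placeholder numbers, not a recommendation.

def raidz2_usable(drives: int, size_tb: float) -> float:
    """Usable TB for a single RAIDZ2 vdev (two drives' worth of parity)."""
    return (drives - 2) * size_tb

def mirrors_usable(drives: int, size_tb: float) -> float:
    """Usable TB for a pool of 2-way mirror vdevs (half the raw space)."""
    return (drives // 2) * size_tb

DRIVES, SIZE_TB = 8, 12  # hypothetical 12 TB SAS drives

print(f"RAIDZ2 : {raidz2_usable(DRIVES, SIZE_TB)} TB usable, 1 vdev")
print(f"Mirrors: {mirrors_usable(DRIVES, SIZE_TB)} TB usable, {DRIVES // 2} vdevs")
# -> RAIDZ2 : 72 TB usable, 1 vdev
# -> Mirrors: 48 TB usable, 4 vdevs
```

Since random IOPS in ZFS scale roughly with vdev count, the mirror layout trades a third of the usable capacity for 4x the vdevs, which is why mirrors keep coming up for iSCSI/VM workloads even though RAIDZ2 wins on space.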
I was thinking of using an HPE Apollo 4200 LFF chassis as a base, with boot and any other TrueNAS-specific drives populated in the rear on mirrored sets. My end goal is to fully populate the unit with 24x 3.5" SAS drives of whatever size, but since I don't need 100+ TB right now, I'm hoping to start small, maybe with 8 drives. I'm not married to the Apollo, since I have plenty of rack space; I'd just like everything consolidated (more drives in fewer servers) rather than being limited by how many drives we can jam into a single chassis. Alternatively, I could use DL380s exclusively, with 2U expansion shelves hanging off additional controllers via external SAS cables.
- Is RAIDZ2 fine?
- Should I end up with multiple pools / vdevs, i.e. a single 24-drive pool vs 3x 8-drive pools?
- What does capacity expansion look like? I.e. if I start with an 8-drive pool and later add another 8 drives, is it best to add them to the existing pool or create a 2nd pool?
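For the expansion question, my understanding of the mechanics is below; this is just a sketch with made-up `da*` device names (TrueNAS would normally drive this through the GUI, and the vdev width/type is only an example):

```shell
# Starting point: one 8-wide RAIDZ2 vdev in a pool named "backup".
zpool create backup raidz2 da0 da1 da2 da3 da4 da5 da6 da7

# Later expansion: add a second 8-wide RAIDZ2 vdev to the SAME pool.
# New writes are striped across both vdevs, but existing data is not
# rebalanced automatically.
zpool add backup raidz2 da8 da9 da10 da11 da12 da13 da14 da15

# Verify the resulting layout.
zpool status backup
```

As I understand it, adding a vdev to the existing pool grows one namespace and spreads future I/O across vdevs, while a 2nd pool keeps failure domains separate at the cost of managing capacity in two places; that trade-off is really what I'm asking about.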
Any direction would be appreciated, so I can potentially cut down weeks of reading / research between all the other fires I'm putting out. I figure I'll play around with the backup infrastructure before attempting a 24x NVMe build for the actual production data, so my mistakes aren't as expensive.