Splitting Pool Recommendations & Suggestions Needed for System Rebuild

Status
Not open for further replies.

Jerzy Sobski

Explorer
Joined
Mar 6, 2015
Messages
50
Background: Two weeks ago my server running under FreeNAS 11.2-BETA2 had a drive go down, resulting in the server rebooting and locking up on import of a pool. Through help on this forum and reporting it as a bug, I learned this was an issue with using a port multiplier to manage all drives. (Link to posts on this at: https://forums.freenas.org/index.ph...-crash-and-now-locks-up-on-pool-import.69028/ )

In researching and learning as much as possible, I have decided that along with rebuilding the server to go with HBA cards and getting off the port multiplier, I am also looking at moving all data to new drives and splitting it into separate pools. At present the server remains down until I have the new system rebuilt, which I expect to be done within a week. For now I am trying to prepare for once it is built. When I first started using FreeNAS I had no idea what I was doing and did not understand how ZFS worked; since then I have been using the forums to learn as much as possible.



Current pool configuration: 3 VDEVs x 5 drives per VDEV, each drive being 4 TB (each VDEV under RaidZ1). This volume is approximately 90% full and contains work files, backups of system files, Warden jails, IOCAGE jails, and media files for archiving.

Since I'm still somewhat of a noob, I am not sure if the splitting is a good idea and whether it will work. Below are my thoughts on how I plan on doing the pools:
Pool 1: IOCAGE jails - 1 VDEV x 2 drives, 1 TB each, mirrored (since this is small and losing jails is not a catastrophe)
Pool 2: Archived files - 1 VDEV x 4 drives, 4 TB each, RaidZ1 (this will have all archived files moved to it and then be detached from the system; it will only be attached when needed, which is my reasoning for going with RaidZ1)
Pool 3: Work-related files and remote backups of other computers around the house - 2 VDEVs x 4 drives, 4 TB each, RaidZ2
Pool 4: Media pool; all media - 4 VDEVs x 5 drives (4 TB each), RaidZ2
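A quick way to sanity-check the usable capacity of the proposed pools (a rough sketch, before ZFS metadata overhead; RaidZ1 gives up one drive per vdev to parity, RaidZ2 two):

```shell
# Usable capacity estimate (TB) for the proposed pools, 4 TB drives.
# RaidZ1 keeps N-1 drives of data per vdev; RaidZ2 keeps N-2.
pool2=$(( (4 - 1) * 4 ))        # 1 vdev, 4x4TB RaidZ1  -> 12 TB
pool3=$(( 2 * (4 - 2) * 4 ))    # 2 vdevs, 4x4TB RaidZ2 -> 16 TB
pool4=$(( 4 * (5 - 2) * 4 ))    # 4 vdevs, 5x4TB RaidZ2 -> 48 TB
echo "Pool 2: ${pool2} TB, Pool 3: ${pool3} TB, Pool 4: ${pool4} TB"
```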

My FreeNAS boot will be upgraded to a pair of 32 GB USB sticks; the second stick will be mirrored.

In regards to the move, I will now go from having two 10-bay towers to a Norco 4224 with 24 drives, and the 10-bay towers will also be converted from port multiplier to HBA cards.

Looking for input on whether this will work, recommendations, etc.
Also looking for input on how to keep the watched history for Plex media files, and the same with the Tautulli plugin data.

Planned Equipment Configuration:

Norco RPC4224
- SuperMicro X9SRL-F
* Intel Xeon Processor E5-2650 v2
* 96GB (6x16GB) ECC Registered DDR3 PC3-12800R 1600 MHz server memory
- 2 EA Intel RES2SV240
* 1st card to handle 20 of the 24 drives (Seagate Constellation ES3, 4 TB each)
* 2nd card to handle the remaining 4 hot-swap hard drives and additional 2.5-inch drives for the jails, ZIL, and L2ARC
o 4 Constellation Drives
o Intel DC S3700 200GB 6Gbps SATA 2.5-inch (Caches, ZIL, L2ARC)
o Pool 1 Drives 2 SSD Drives (Mirrored)
- LSI SAS 9211-8i 8-port 6Gb/s PCI HBA (Firmware for IT Mode)
- LSI Logic SAS9200-16e External Quad Port
* Used to connect 2 10 bay towers that are being converted over from Port Multiplier to JBOD

10 Bays Towers (2) Both same configuration:
- Intel RES2SV240 which will have cable run to the LSI Logic SAS9200-16e in the Norco RPC4224 Unit.
 

CraigD

Patron
Joined
Mar 8, 2016
Messages
343
I think you need one RAIDz2 pool, up to 8 drives wide, for data, one mirrored SSD pool for VMs, and an offsite backup system that you replicate to daily.

Things work best when automated and left alone; when FreeNAS has a problem it will email you (if set up correctly).

Things have changed: boot from an SSD or hard drive.

If this has been working without caches (ZIL, L2ARC), you don't need them; first max out your RAM.

Lastly, NEVER put a beta version into production. Wait 3-6 months after a release before using it; computer failures can and do end otherwise successful companies.

Keep in mind I am a home user

Have Fun
 

Jerzy Sobski

Explorer
Joined
Mar 6, 2015
Messages
50
Thanks, I'll take it under consideration. In regards to the beta, it was an oversight. When I upgraded to 11.2-BETA1 and later to BETA2, I thought it was a release because the word "Stable" was used in the name description. Later I discovered that was not the case, but since I had already upgraded the pool to the latest feature flags, I was afraid going back to an older version would cause more damage to the pools than had already been done.

The breakup of pools is primarily to allow me, in the future, to move the various smaller pools to other systems without having to move the entire data pool as it is set up now. Since I will need to move the entire pool to new drives under RaidZ2 anyway, I felt this was the best time to also break the pool into groups as mentioned before.
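For reference, moving a whole pool between systems is an export/import operation (a sketch with a hypothetical pool name; stop any jails or shares using the pool first):

```shell
# On the old system: cleanly detach the pool so it can be moved
zpool export archive

# On the new system (with the drives attached): import the pool by name
zpool import archive

# If the pool was not exported cleanly, -f forces the import
# zpool import -f archive
```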
 

CraigD

Patron
Joined
Mar 8, 2016
Messages
343
That is how I ended up with two data pools of spinning rust: my friend moved in with an old HPE ProLiant DL580 G5 server, we JBODed his pool to save power, and now have room for expansion. In the future he may move and take his drives.

Breaking the pool into groups should be done using datasets. I can see the benefit of having one pool per enclosure; others may disagree.
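The dataset approach could look something like this (pool and dataset names hypothetical): one pool, with a dataset per category instead of a pool per category. Datasets share the pool's free space but can each get their own properties.

```shell
# Create a dataset per data category inside a single pool named "tank"
zfs create tank/archive
zfs create tank/work
zfs create tank/media

# Per-dataset tuning, e.g. a quota so media can't crowd out work files,
# and compression where it helps
zfs set quota=40T tank/media
zfs set compression=lz4 tank/work
```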

Have Fun
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Hmmm.

I would consider making a larger RaidZ2 pool for all your larger data. If you want to have a mirrored drive for VMs etc, then that's fine.

It makes sense to split pools when you have different vdev types or media types, for instance an SSD pool vs an HD pool, or a pool of mirrors vs a pool of RaidZ2 vdevs.

With the RaidZ2, I'd suggest picking 6, 7, or 8 drives per vdev, and then multiple RaidZ2 vdevs.

The RES2SV240 is a fine expander, but as you know it has a limit of only 20 drives once you account for the uplink. It does have a 36-port big brother that would allow a dual uplink and still service 28 drives, meaning you could power all 24 Norco 3.5" drives and still have ports for an additional 4 SSDs, for instance.

4x6-way RaidZ2 is a fine configuration and gets good IOPS; the same number of drives can be configured as 3x8-way RaidZ2 if storage efficiency is more important.
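Comparing those two 24-drive layouts numerically (a rough sketch with 4 TB drives, before ZFS overhead; RaidZ2 keeps N-2 drives of data per vdev):

```shell
# 24 x 4TB drives: 4 vdevs of 6 vs 3 vdevs of 8, both RaidZ2.
four_by_six=$(( 4 * (6 - 2) * 4 ))    # 64 TB usable, IOPS of 4 vdevs
three_by_eight=$(( 3 * (8 - 2) * 4 )) # 72 TB usable, IOPS of 3 vdevs
echo "4x6-way: ${four_by_six} TB, 3x8-way: ${three_by_eight} TB"
```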

If you do want more than 24 drives, perhaps it would make financial sense to look at a larger chassis, say a Supermicro 36-bay or greater, rather than building multiple JBODs.

Since the bigger Supermicros come with expander backplanes anyway, you might be able to run all the disks off a single HBA instead of having to invest in multiple HBAs and 3 expanders. It might even work out cheaper, and possibly more reliable and simpler to manage too.
 