- Joined: Nov 21, 2017
- Messages: 37
I’d love a quick reality check on my planned method for upgrading the storage in our main file server. We’re a photography studio mostly dealing in large quantities of 20–50 MB RAW and JPG image files, plus XMP sidecar files of around 5 KB each.
We are currently running a 7 wide Z2 vdev with 8TB drives in a 12 bay SuperMicro chassis. We have about 22TiB stored on there bringing us to about 66% of our space used. We have another big year coming up with a lot of new data coming in, so it’s time to expand.
I was able to snag 9 16TB Western Digital Reds on Black Friday deals for a pretty decent price so those are on the way. The goal is to use 8 of them for storage and one as a spare.
Since I only have five empty bays in the chassis, my plan was to:
1. Use four of those bays to create a four wide Z1 vdev pool with the 16TB drives
2. Replicate our 22TiB of data from the old pool to the new pool
3. Remove old drives, and add four more 16TB drives in a second Z1 vdev to stripe with the first
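In ZFS terms, the three steps above would look roughly like this. Pool and device names here are placeholders, not my actual config, and I’d obviously double-check device IDs before running anything destructive:

```shell
# 1. Create the new pool as a single 4-wide RAIDZ1 vdev
zpool create newpool raidz1 da8 da9 da10 da11

# 2. Snapshot the old pool recursively and replicate it locally
#    with a full zfs send/recv stream
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs recv -F newpool/migrated

# 3. After exporting the old pool and removing its drives,
#    stripe in a second 4-wide RAIDZ1 vdev
zpool add newpool raidz1 da0 da1 da2 da3
```

The send/recv step would run at local disk speed rather than network speed, which is the whole point of doing it in-chassis.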
This should leave me with a pool of two striped 4-wide Z1 vdevs. Based on what I’ve read here, this should get us slightly better IOPS for all those image and XMP files, but slightly less fault tolerance than a single 8-wide Z2 vdev. Correct?
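For my own sanity, here’s the back-of-the-envelope arithmetic I’m working from (raw capacity only, ignoring ZFS overhead and TB-vs-TiB conversion):

```python
# Compare raw capacity of the two candidate layouts with 16 TB drives.
DRIVE_TB = 16

# Two striped 4-wide RAIDZ1 vdevs: each vdev has 3 data + 1 parity disk.
z1x2_data_disks = 2 * (4 - 1)          # 6 data disks total
z1x2_capacity = z1x2_data_disks * DRIVE_TB

# One 8-wide RAIDZ2 vdev: 6 data + 2 parity disks.
z2_data_disks = 8 - 2                  # 6 data disks total
z2_capacity = z2_data_disks * DRIVE_TB

print(z1x2_capacity, z2_capacity)      # → 96 96

# Same raw capacity, but the fault tolerance differs: the Z2 vdev
# survives ANY two disk failures, while the striped Z1 layout survives
# two failures only if they land in different vdevs -- two failures in
# the same Z1 vdev lose the whole pool.
```

So capacity is a wash, and the trade is purely IOPS versus how two simultaneous failures are tolerated.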
The alternative would be to install all 8 of the 16TB drives at once in a single Z2 vdev and then replicate from the backup server, but I’d expect that to take a LOT longer over our 1GbE network than doing it all within the same system.
Once I’m confident that the new pool is up and running, I’m also considering adding a few mirrored SSDs as a metadata special vdev to the pool to help when browsing those directories of thousands and thousands of files.
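The command I have in mind for that step (device names again placeholders):

```shell
# Add a mirrored pair of SSDs as a special (metadata) vdev.
# Note: a special vdev cannot be removed from a pool that contains
# raidz vdevs, and losing it loses the pool -- hence the mirror.
zpool add newpool special mirror ada0 ada1
```

My understanding is that only newly written metadata lands on the special vdev, so I’d expect the browsing benefit to show up mostly for data written after it’s added (or after a rewrite of the old data).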
Are there any major blind spots in this plan that I should be aware of?
Thanks so much for any insight you might have.