IceBoosteR · Guru · Joined Sep 27, 2016 · Messages: 503
Hello out there,
I want to upgrade my main FreeNAS server, so let's start with what I have now.
I have a Dell T20 with a Xeon E3-1225 v3, 24GB of DDR3 ECC RAM and 4x4TB WD Red drives. I also have a working backup on a Synology box, also with ECC.
Everything is stored on this pool: media, photos, documents, ... It is set up as RAIDZ1; you can read why I chose that here (https://serverfault.com/questions/634197/zfs-is-raidz-1-really-that-bad).
But I am running out of space, and it takes forever to browse through my photo folder. I have a lot of photos, stored as JPG and RAW, since I shoot with a DSLR and the rest of the story is clear ;)
So I would also like to speed things up.
First, I will move to a bigger case; the Nanoxia Deep Silence 6 Rev. B would fit my needs. I will also need another PSU, something like the new Seasonic Focus Plus Platinum 550W ATX 2.4 (SSR-550PX). I know the Seasonic G-Series was often recommended. I also found a company in Germany that makes adapters from the proprietary Dell connector to normal ATX.
I am sure this will all be fine, especially for drive temperatures.
Now to the pool. First of all, a new HBA in IT mode (IBM M1015 or similar) will be required. The disks should be WD Reds again; I like them.
For the pool I have thought about a few options, and yes, I know RAIDZ2 is recommended.
1.
Buy 6x8TB or 6x6TB drives for a new pool.
Migrate everything to the new pool using ZFS send and receive (I definitely need to read the docs on the syntax before doing this), then destroy the old pool. With the old drives, build another pool with mirrored vdevs for photos and for zvols for my ESXi servers; media will be placed on the other pool. That gives me a bit more safety against disk failure if one drive per vdev dies (hopefully never), but would I get more IOPS here? Noticeably more? I could also easily grow the pool with more vdevs later.
1.1 Maybe adding an L2ARC to this pool would be an option.
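The migration in option 1 could look roughly like this; the pool names "tank" (old) and "bigtank" (new) are placeholders for whatever your pools are actually called:

```shell
# Snapshot the whole old pool recursively, then replicate it.
# "tank" and "bigtank" are hypothetical pool names - adjust to yours.
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -Fdu bigtank

# Only after verifying the data on the new pool:
# zpool destroy tank
```

-R on the send side replicates all child datasets, snapshots and properties; -u on the receive side keeps the received datasets unmounted until you are ready to switch over.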
2.
Buy more 4TB drives, destroy the pool, build a new one with e.g. 9x4TB in RAIDZ2, and rebuild from backup.
2.1 Add an L2ARC.
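Adding an L2ARC (options 1.1/2.1) is a single command; the pool and device names below are placeholders:

```shell
# Attach an SSD as read cache (L2ARC) to an existing pool.
# "tank" and "ada6" are placeholders for your pool and SSD device.
zpool add tank cache ada6

# Watch how the cache device is being used:
zpool iostat -v tank
```

One caveat: the L2ARC needs RAM for its headers, so with 24GB it is worth checking ARC hit rates first to see whether a cache device would actually help.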
3.
Add another vdev with 4x4TB in RAIDZ1 to the existing pool (referring to the link I posted: in case of a URE I can ask ZFS which file is affected and restore it from backup). This is the cheapest solution of all.
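Option 3 would be a one-liner as well, though a vdev added this way cannot be removed again; disk names are placeholders:

```shell
# Extend the existing pool with a second 4-disk RAIDZ1 vdev.
# "tank" and the da4..da7 device names are placeholders.
zpool add tank raidz1 da4 da5 da6 da7

# After a URE, a scrub plus "status -v" lists the affected files:
zpool scrub tank
zpool status -v tank
```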
4.
Maybe you have some other ideas.
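For comparison, here is a rough usable-capacity estimate for the layouts above (raw TB, ignoring ZFS metadata overhead, slop space and the TB/TiB difference):

```shell
# Back-of-the-envelope usable capacity for the three layouts,
# in raw TB (real usable space will be somewhat lower).
mirrors_6x8=$((6 * 8 / 2))         # option 1: three 2-way mirrors of 8TB
raidz2_9x4=$(((9 - 2) * 4))        # option 2: 9x4TB RAIDZ2
raidz1_2x4x4=$((2 * (4 - 1) * 4))  # option 3: two 4x4TB RAIDZ1 vdevs

echo "mirrors 6x8TB:   ${mirrors_6x8} TB"
echo "RAIDZ2 9x4TB:    ${raidz2_9x4} TB"
echo "2x RAIDZ1 4x4TB: ${raidz1_2x4x4} TB"
```

So 9x4TB in RAIDZ2 actually yields the most raw space of the three, while the mirror layout trades capacity for IOPS.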
I don't want to put too much money into this, especially since most of it is media, which is not that important, and I have a backup...
And I am not stupid; I know the extra level of safety costs some cash, so I can already hear people telling me to choose option 1 ;)
And that's fine, because it is maybe "the best".
Any comments with your thoughts would be nice. Please also consider the power consumption of more drives. I have seen 12TB drives out in the darkness of the internet :D
If anything is unclear because of my bad English -> please ask.
Regards
Ice