brando56894
Wizard
- Joined: Feb 15, 2014
- Messages: 1,537
Since I'll probably be jumping back into FreeNAS in the coming months and I'll have the money to upgrade, I'd like to flesh out my future optimal pool setup and get some input before I drop a grand or two on drives.
My current hardware won't change; I'll just be adding either 4, 6, 8, or 10 TB drives and an NVMe drive. Here's my current hardware:
Server
Asus X99-WS/IPMI
Liquid-cooled Xeon E5-1650 (6 cores/12 threads @ 4.2 GHz)
2x 32 GB Samsung DDR4 ECC RAM
Liquid-cooled Nvidia GeForce GTX 1070
EVGA 1KW PSU
NZXT H440 Case
7x 4 TB HDDs
I'm currently using unRAID and have two VMs set up: a Windows 10 VM with the 1070 passed through, which I use to run Kodi for my HTPC (the server is in my living room, connected to my TV) and for occasional (heavy) gaming, and an Arch Linux VM which runs Nginx as a reverse proxy for the following apps: Radarr, Sonarr, Deluge, NZBget, NZB Hydra, and HTPC Manager. I also have a few Docker containers set up in unRAID: Plex, PlexPy, NextCloud, and MuxiMux (a dashboard combining multiple sites under one domain), some of which also run through Nginx.
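None of the proxy config is exotic; a minimal sketch of one location block (8989 is Sonarr's default port, but the path prefix and upstream address are just examples, not my actual config):

```
# Reverse-proxy sketch for one app behind Nginx
location /sonarr/ {
    proxy_pass http://127.0.0.1:8989;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```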
Plex isn't used that often, but I do have it shared out to about 8 people. We mostly use Kodi inside the network, and that's max two clients.
I will have a separate pool for the NVMe drive, which will mostly be used for block storage (ZVOLs) for the VMs; that's an easy decision. All drives will be connected via a 4-port HBA (minus the NVMe, which will be connected directly to the motherboard).
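The NVMe pool itself would just be a single-disk pool with zvols carved out of it; a minimal sketch, with the device name, zvol name, and size all placeholders:

```
# Single-disk pool on the NVMe drive (nvd0 is FreeBSD's NVMe block device naming)
zpool create nvme nvd0

# A zvol for one VM's block storage; name and size are just examples
zfs create -V 100G nvme/win10
```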
I was previously using striped mirrors for my pool and loved the performance and ease of upgrading, but the halving of usable space and the limited redundancy sucks. I figured I'd go with RAIDZ2 this time around, but since I know it takes a bit of pre-planning, I don't know what the optimal configuration for my storage pool should be. I currently have 20 TB with only about 11.5 TB used, so I don't really need much more than that. The question is more whether it's better to have wider vdevs (I was thinking 2 of 6 drives each) or multiple narrower vdevs striped together (3 of 4 drives each). I was leaning toward 3 vdevs of 4x 4 TB WD Red Pros, since those wouldn't hurt my wallet too much and smaller vdevs would be easier to upgrade incrementally. Which would benefit me more?
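For concreteness, here's roughly what the two candidate layouts would look like at creation time; device names are placeholders, and the usable-space figures are the simple (n - 2) x drive size per RAIDZ2 vdev, before ZFS overhead:

```
# Option A: two 6-wide RAIDZ2 vdevs -> 2 x (6-2) x 4 TB = 32 TB raw usable
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11

# Option B: three 4-wide RAIDZ2 vdevs -> 3 x (4-2) x 4 TB = 24 TB raw usable
# (same 50% space efficiency as striped mirrors, but any 2 drives per vdev can fail)
zpool create tank \
    raidz2 da0 da1 da2 da3 \
    raidz2 da4 da5 da6 da7 \
    raidz2 da8 da9 da10 da11
```

Option B gives more vdevs (so more IOPS) and cheaper incremental upgrades, while Option A gets more usable space out of the same 12 drives.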
I would be using my existing drives in the vdevs, but I would first need to create a pool with the empty drives so that I could copy the data from unRAID's JBOD array to the ZFS pool (I don't feel like downloading multiple TBs again for the umpteenth time, even though I do have a gigabit connection hahaha). Luckily, unRAID (unofficially) supports ZFS via a plugin, so that would be a breeze.
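The migration itself should then just be a local copy from the unRAID array's mount to a dataset on the new pool; the paths below are placeholders:

```
# One-way local copy from the unRAID share to the new ZFS dataset;
# -a preserves permissions and timestamps, --progress for sanity checking
rsync -a --progress /mnt/user/media/ /mnt/tank/media/
```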