Converting from QNAP to FreeNAS ex-JBOD, advice appreciated

jcl123

Dabbler
Joined
Jul 6, 2019
Messages
23
Hello all,

I was running a QNAP TS-1079-PRO (10-bay) for a number of years, and then recently the backplane died. QNAP support said they don't make them anymore, and even if they did they would not sell me one (I would have to ship the whole unit for a 10-minute fix). In any case, I am not going to reward them by buying another one; the first one was almost $3K without disks, so it was a bit of a leap to begin with.

All is not lost: the 10-bay box does a good job keeping drives cool, so I am converting it into an external JBOD enclosure and looking at FreeNAS. Note that some of my parts are older because I am re-using things from the QNAP that are still perfectly good (CPU, RAM, etc.), while other parts are current, added recently from eBay or wherever. I believe in buying high-end, since I end up getting that many more years of re-use out of things.

Here is what I have so far:
Case(s): Fractal Design Define R6 (11x 3.5" drive capacity) + the QNAP box as a 10-bay JBOD
Motherboard: Supermicro X9SCM-F
CPU: Xeon E3-1245 ("v0")
RAM: Currently 16GB DDR3 ECC, will upgrade to 32GB.
Boot drive: SanDisk consumer 128GB SSD, I could use two of these for mirroring if needed
HBA: Broadcom/LSI 9400-16i tri-mode 12Gb/s SAS HBA
SAS expander: Intel RES3TV360, located in the QNAP box
NIC: Intel X550 10GbE + the on-board 1GbE NICs on the M/B
Disks: 9x WD RED 3TB drives
Of course there is a ton of cabling and fans in this build that I will detail if needed.

About me / experience level:
I am an experienced (20+ years) IT engineer, mostly in Windows and VMware, but I have wanted to try out FreeNAS for a while, so this looks like my chance.

Device use cases:
- As a backup device for my primary server (Windows Server 2019 Storage Spaces / ReFS build)
- As a transmission server host (one of the things the QNAP did well)
- Data is primarily 1080p and 4K home video, as well as pictures; lately I have also been doing some 360-degree VR video
- I am debating going bare-metal or ESXi w/pass-through, because then I could also run my pfSense router on the same box
Notes:
- I have 10Gig because I want to be able to back up 16TB or so without it taking a week
- The QNAP easily saturated a 1Gig link; I didn't get a chance to test 10Gig before the backplane died
- I have so many disk ports (21x 3.5", not counting 2.5") because the backup server tends to get the older disks I re-purpose, so I need the extra bays
- Server is protected by an APC UPS and my house has Tesla Powerwall storage batteries and solar, in case the question comes up

Questions:
* Would you recommend having two boot devices? (add a 2nd SSD, they are cheap)
* I currently have a bunch of 3TB disks; can ZFS deal with mixed disk sizes?
- e.g. should I just buy more 3TB disks, or can I start buying 8TB disks or something?
- On my Windows Storage Spaces server I am using 7x "shucked" 10TB WD drives from USB enclosures
- I imagine you could have multiple different pools or something like that; probably there is a long explanation for that
* Would you recommend SSD cache disks?
- I am prepared to add eSSDs if needed, but I have read articles both for and against in this use case, so I am not sure
* Any other advice or recommendations that you think this build needs?

Thank you very much in advance

-JCL
 

lightwave

Explorer
Joined
Jun 14, 2018
Messages
68
Interesting build! Using the QNAP as a SAS/SATA enclosure really makes a lot of sense. Wonder if I can find a bricked QNAP to try something similar ...

With regard to your question about disk sizes: ZFS is somewhat limited in its ability to handle disks of different sizes. You can combine 3TB and 10TB drives in the same vdev, but ZFS will then treat the 10TB drives as if they were only 3TB. Your best option is to create separate vdevs for the 3TB and 10TB drives (provided you have enough drives of each size to get a decent level of redundancy). The vdevs can then be combined into a pool that covers all your drives, if needed. Just remember that while vdevs add redundancy through mirroring and RAID-Z, a pool is no stronger than the weakest of the vdevs you add to it. Hence, combining vdevs into pools actually increases the risk of a complete data loss.
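
Roughly, from the command line that two-vdev layout would look something like this (the pool name "tank" and the device names are just placeholders, and in FreeNAS you would normally do all of this through the GUI anyway):

# pool with one RAIDZ2 vdev made of six 3TB drives
zpool create tank raidz2 da0 da1 da2 da3 da4 da5
# later, add a second RAIDZ2 vdev made of six 10TB drives to the same pool
zpool add tank raidz2 da6 da7 da8 da9 da10 da11

ZFS then stripes writes across both vdevs, but if either vdev fails completely the whole pool is gone, which is the risk I mentioned above.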
 

jcl123

Dabbler
Joined
Jul 6, 2019
Messages
23
Interesting build! Using the QNAP as a SAS/SATA enclosure really makes a lot of sense. Wonder if I can find a bricked QNAP to try something similar ...

Thank you, I got the idea from STH: SAS Expanders, Build Your Own JBOD DAS Enclosure and Save – Iteration 1
Actually, I was thinking of following up with other QNAP owners who are in the same situation as me, but you could also ask some of them if they want to sell one of their otherwise-bricked boxes with a dead backplane.

Of course it could be done a lot cheaper if you just go 6Gb/s rather than 12Gb/s like I did. And you could even skip the expander if you only had 8 disks and just direct-connect them. The cabling does start to add up, though.

With regard to your question about disk sizes: ZFS is somewhat limited in its ability to handle disks of different sizes. You can combine 3TB and 10TB drives in the same vdev, but ZFS will then treat the 10TB drives as if they were only 3TB. Your best option is to create separate vdevs for the 3TB and 10TB drives (provided you have enough drives of each size to get a decent level of redundancy). The vdevs can then be combined into a pool that covers all your drives, if needed. Just remember that while vdevs add redundancy through mirroring and RAID-Z, a pool is no stronger than the weakest of the vdevs you add to it. Hence, combining vdevs into pools actually increases the risk of a complete data loss.

I actually wasn't thinking of mixing 3TB and 10TB disks, just saying that my other server is using 10TB disks.
What I would probably do when adding disks is this: if I found good deals on 4, 6, or 8TB disks, I would put them in, not worry about the wasted capacity, and then let the array grow once all of the smaller disks have eventually been replaced. I agree you would not want to mix vdevs with different redundancy levels within the same pool.
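
From what I have read, that in-place upgrade path works roughly like this (placeholder pool and device names), swapping one small disk at a time and letting the vdev grow once the last one has been replaced; someone correct me if the FreeNAS GUI handles this differently:

# let the pool grow once every disk in the vdev has been upgraded
zpool set autoexpand=on tank
# replace one 3TB drive with a bigger one, wait for the resilver to finish, then repeat
zpool replace tank da0 da9
zpool status tank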

But really, I think the best solution to this limitation with ZFS is what I am doing: running two servers that back each other up. When it is time to upgrade, I just destroy the pool, put the new disks in, create a new pool, and re-copy the data. As I mentioned, this is one of the reasons for getting 10Gig NICs, so that it won't take forever to do a full backup. People should have a backup of their data anyway, right?
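
Back-of-envelope: 16TB over a saturated 1GbE link (~110 MB/s) is roughly 40 hours of solid copying, while even half of 10GbE (~500 MB/s) brings that down to around 9 hours.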

Mainly the advice I was looking for was whether there is any more / other hardware I should be adding to this config, such as whether I need any SSDs or should not bother. I suppose I can just try it first; if it isn't broken, don't fix it. I will probably still get some decent speed, the 10TB disks hum along at about 200MB/sec each.
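
One thing I did read is that a cache (L2ARC) device can apparently be added to, or removed from, an existing pool at any time without rebuilding it, something along the lines of (placeholder names again):

zpool add tank cache nvd0
zpool remove tank nvd0

So there should be no real penalty in starting without an SSD and only adding one later if caching turns out to help this workload.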

-JCL
 