Hello,
Please forgive me if some questions are trivial, but I just want to make sure I am going the right way:
My current setup revolves around TrueNAS on an X11-based server, which holds two NVMe drives that I use as ESXi datastores (no RAID, just simple storage). Note: home setup. TrueNAS is virtualized as a VM and accesses 8 disks via a directly assigned (passthrough) HBA. The current HBA is an LSI 9211-8i.
The current drives are a mix of WD Reds and some older Seagates. I only wanted to test TrueNAS at first, as I have an older 8-bay Synology that has served me well for a long time (since 2014). TrueNAS has been fairly stable; I have had 1-2 issues with it, and that is 1-2 issues more than I ever had with the Synology (which is like *none*). However:
I need lots of storage for general use, and only part of that needs to be backed up. The Synology is fine for that. What I need is performance and expandability, and ZFS is a good idea, I reckon. Besides, I can always upgrade the server's hardware, which isn't possible with the Synology, where I would have to buy a whole new unit. The server also performs far better.
Now, to make room for future upgrades, I would go this way:
I would buy a big 4U storage case like the Fantec SRC-4240X07. It can fit 24 drives and has decent cooling. I would also need 2 additional HBAs (or a SAS expander).
Now let's get into ZFS:
Currently I have a RAIDZ2 with 8 drives (a single VDEV), which I can still reconfigure, meaning I can still move the data elsewhere and do whatever I want with it.
The configuration is 8x2TB, which with RAIDZ2 gives me 8TB of free space. I do wonder, though: online calculators tell me I should have 10TB of free space, yet I only have 8TB? (using this calculator: https://wintelguy.com/zfs-calc.pl)
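For what it's worth, part of the gap may simply be TB (10^12 bytes, as drive vendors and calculators count) vs TiB (2^40 bytes, as TrueNAS reports). A quick sanity check of the raw RAIDZ2 numbers, ignoring ZFS metadata and reserved slop space, which shave off a bit more:

```python
# Raw RAIDZ2 capacity check (real pools lose a few percent more to
# ZFS metadata, padding, and reserved slop space).

def raidz2_usable(disks: int, disk_tb: float) -> tuple[float, float]:
    """Return (usable TB, usable TiB) for one RAIDZ2 vdev: 2 disks go to parity."""
    data_disks = disks - 2
    usable_tb = data_disks * disk_tb        # decimal terabytes (10**12 bytes)
    usable_tib = usable_tb * 1e12 / 2**40   # binary tebibytes, as TrueNAS shows
    return usable_tb, usable_tib

tb, tib = raidz2_usable(8, 2.0)
print(f"8x2TB RAIDZ2: {tb:.1f} TB raw usable = {tib:.2f} TiB")
# 12 TB decimal is only ~10.9 TiB, and ZFS overhead pushes the shown figure lower still.
```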
However, if I were to reconfigure it:
With expandability in mind, and if I understand it correctly, I can only add full VDEVs matching the original VDEV (or simply put, all VDEVs must have same-sized disks). If I build one VDEV with 5 drives, I can only add full VDEVs consisting of 5 drives (5 because that is apparently the minimum for RAIDZ2). So the route might be 5+5 disks, each set in its own VDEV, and then later adding 5 disks in a 3rd and 5 disks in a 4th VDEV. However, my understanding is that this is not an optimal setup when it comes to performance and space availability? The more VDEVs I have, the less free space I get for the same number of disks. Is this correct?
With 5 disks in a VDEV, I only get 50% space efficiency, so out of 5x2TB I would really only get 5TB (or most likely less on TrueNAS, see above). The same seems to be true for the 8x2TB VDEV I have now (and I still wonder why that is).
The only way to get "more" (or the most?) out of a single VDEV would be going with 10x4TB disks, which should get me 26TB of space. Is this correct? 4TB drives are the best value currently, I think, but I would need the case and a 2nd HBA (or an expander) to be able to test it out.
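By plain parity arithmetic (before any ZFS overhead), a RAIDZ2 vdev keeps (n-2)/n of its raw space, so wider vdevs are indeed more efficient — though by that math a 5-disk vdev should come out at 60%, not 50%:

```python
# RAIDZ2 space efficiency per vdev width: 2 of n disks always hold parity.
def raidz2_efficiency(disks: int) -> float:
    return (disks - 2) / disks

for n in (5, 8, 10):
    print(f"{n}-disk RAIDZ2 vdev: {raidz2_efficiency(n):.0%} of raw space usable")

# Two 5-disk vdevs vs one 10-disk vdev, both with 2TB disks:
print("2x (5x2TB):", 2 * 3 * 2, "TB usable")  # 4 parity disks total
print("1x (10x2TB):", 8 * 2, "TB usable")     # only 2 parity disks
```

On the same arithmetic, 10x4TB RAIDZ2 would be 8 data disks x 4TB = 32 TB decimal, or roughly 29 TiB, before ZFS overhead — which is in the same ballpark as the 26TB figure above once overhead is subtracted.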
In future steps, I would then sooner or later want to upgrade this array of 20 disks. Can I simply replace single disks in a VDEV? Say I have a single ZPOOL spanning 2 VDEVs (10x2TB and 10x4TB). Can I go ahead and replace disk after disk in either VDEV with larger ones (thus increasing the size of the whole ZPOOL)?
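In case it helps frame the question: my understanding is that this swap is done with `zpool replace`, one disk at a time (each triggering a resilver), and the extra space only appears once every disk in that vdev is larger, with autoexpand enabled. A sketch — pool and device names are placeholders:

```shell
# Sketch only: "tank" and ada* device names are placeholders.
# Let the pool grow automatically once a whole vdev has larger disks.
zpool set autoexpand=on tank

# Replace one disk with its larger successor and wait for the resilver.
zpool replace tank ada3 ada3-new
zpool status tank          # watch until resilvering completes

# Repeat for every disk in the vdev; only then does the vdev's capacity grow.
zpool list tank            # check the expanded size
```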
Or would you rather advise me to build two separate ZPOOLs of 10 disks each, with different disk sizes per pool (but the same size within each VDEV)?
To be honest, I have always liked the idea of one big storage pool rather than several smaller ones. But I could live with two ZPOOLs.
Is there a performance gain or penalty in using one big ZPOOL compared to two smaller ones?
On the hardware side: would you rather buy two additional HBAs, like the Intel M1115 (each going for about €100), or a SAS expander, which is some €120 on eBay (Intel RES2SV240)?
Finally, caching: I have read about and watched some videos on read and write caches, and I have a simple question: does it make sense to add two 256GB SATA SSDs, connect them directly to the mainboard, and (hopefully be able to) add them to TrueNAS? Would that improve performance, especially writes?
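For context on the cache question, as I understand it ZFS distinguishes a read cache (L2ARC) from a separate intent log (SLOG), and a SLOG only accelerates *synchronous* writes (NFS, iSCSI, databases), not ordinary async writes. With two SSDs the options would look roughly like this — pool and device names are placeholders:

```shell
# Placeholder names ("tank", ada4/ada5); pick one role for the SSD pair.

# Option A: both SSDs as read cache (L2ARC); losing a cache device is
# harmless, so no mirroring is needed.
zpool add tank cache ada4 ada5

# Option B: mirrored SLOG; mirrored because it holds in-flight sync writes.
# zpool add tank log mirror ada4 ada5

zpool status tank   # the devices should now appear under the pool
```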
I hope this covers all my questions :) Sorry for the uber-long post.
Thank you.