Hi everyone,
first post here, although I've been "stealth" reading along for a while now.
So, after several years and iterations of QNAP and Synology boxes, it's now time for me to make the move to TrueNAS. The two main reasons for this are performance and an expected better value for the money. In addition, I finally want to make use of ZFS and ECC for data integrity (checksums/self-healing) - something that hasn't been part of my storage ecosystem until now.
This TrueNAS system will become *the* new main network storage and supersede/combine the current off-the-shelf boxes from the manufacturers mentioned above (which in turn will be repurposed as one on-site and one off-site backup destination). TrueNAS will be used for:
- file storage (roughly 30 TB right now)
- small files and large files
- NFS and SMB
- systems/users are directly working on those shares
- iSCSI target (PVE cluster with three nodes)
Regarding the hardware, I've already prepared a build, though I would be more than happy if you could have a look at it and check if there are any no-gos in it which might cause minor or major issues running TrueNAS SCALE:
[ This configuration has been updated due to given feedback as well as further research from my side. Details can be found in this post. ]
Type | Model |
---|---|
Mainboard | Supermicro X13SCH-F (MBD-X13SCH-F-O) |
CPU | Intel Xeon E-2414, 4C/4T, 2.60-4.50GHz (BX80715E2414) |
CPU Cooler | Noctua NH-L9i-17xx |
RAM | [2x] Micron 32GB, DDR5-4800, ECC (MTC20C2085S1EC48BR) |
OS (SSD) | Samsung SSD 980 500GB, M.2 2280, PCIe 3.0 x4 (MZ-V8V500BW) |
PCIe/M.2 Add-In Card (for OS SSD) | RaidSonic IB-PCI208-HS (60830) |
Data (HDD) / Manufacturer A | [4x] WD Ultrastar DC HC560 20TB, 512e, SATA (WUH722020BLE6L4 / 0F38785) |
Data (HDD) / Manufacturer B | [4x] Toshiba Cloud-Scale Capacity MG10ACA 20TB, 512e, SATA (MG10ACA20TE) |
SLOG | [2x] Intel Optane SSD P1600X 58GB, M.2 2280, PCIe 3.0 x4 (SSDPEK1A058GA01) |
PSU | Corsair SF-L Series SF850L, 850W, SFX-L, ATX 3.0 (CP-9020245-EU) |
Case | SilverStone Case Storage CS381 V1.2, microATX (SST-CS381) |
Type | Model |
---|---|
Mainboard | Supermicro X13SCL-IF (MBD-X13SCL-IF-O) |
CPU | Intel Xeon E-2436, 6C/12T, 2.90-5.00GHz (BX80715E2436) |
RAM | [2x] Samsung 16GB, DDR5-4800, ECC (M324R2GA3BB0-CQK) |
OS (SSD) | Samsung SSD 980 500GB, M.2 2280, PCIe 3.0 x4 (MZ-V8V500BW) |
Data (HDD) | [6-8x] WD Ultrastar DC HC560 20TB, 512e, SATA (WUH722020BLE6L4 / 0F38785) |
HBA | Broadcom SAS 9300-8i, PCIe 3.0 x8 (H5-25573-00/LSI00344) |
PSU | SilverStone SFX Series ST45SF-G (Rev. 2.0), 450W, SFX (SST-ST45SF-G v2.0) |
Case | Jonsbo N3, Mini-ITX (N3 Black) |
A few remarks:
- I need to look for the proper SAS-to-SATA cables. At a quick glance, I should need two breakout cables, each with an SFF-8643 connector on one end and four SATA connectors on the other. Is this correct?
- The listed Jonsbo case is definitely preferred. Unfortunately, it's hard to get (at least here in Germany). Therefore, the SilverStone Case Storage DS380 (SST-DS380B/71062) might be an alternative, though it seems it won't fit the HBA together with all eight drive bays (since some kind of bracket on the drive cage needs to be replaced to accommodate the length of the PCIe/HBA card). Can anyone who owns this case confirm this?
- If so, do you have a suggestion for either a suitable (and still affordable) half-length HBA card, or for another case of that type (i.e. as small as possible for eight drives)? Hot-swap would be nice, but I could go without it if necessary.
Last but not least, I would like to hear your thoughts on the pool layout planned for this system:
- Performance is quite important (notably but not only due to the iSCSI workload).
- Of course I don't want to sacrifice availability, so RAID0 is not an option.
- I've been thinking back and forth and have come to the conclusion that instead of a RAIDZ2, I'm going for (in hopes of using the proper terminology) a pool of four two-way mirror vdevs.
- In other words: I would create four mirrored pairs (in RAID terms, four RAID1 groups) and stripe across those four pairs, basically building a classic RAID10 array.
- How did I come to that conclusion? Because from what I understand I would
- eliminate the additional parity calculations (RAIDZ1/2), hence getting more performance out of the pool,
- still have single-drive fault tolerance (up to one failed drive per mirror pair, so as many as four in the best case)
- and have faster rebuild/resilver times (a simple copy instead of parity reconstruction).
- Is there anything wrong with that or something I need to consider?
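For reference, the layout described above would look roughly like this on the command line (a sketch only: the pool name and the `/dev/sdX`/`nvme*` device names are placeholders, and on TrueNAS SCALE the pool would normally be built through the web UI, which references disks by partition UUID rather than raw device names):

```shell
# Hypothetical sketch: four two-way mirror vdevs striped together
# (RAID10-style), with the two Optane P1600X drives as a mirrored SLOG.
# Device names are placeholders for illustration.
zpool create tank \
  mirror sda sdb \
  mirror sdc sdd \
  mirror sde sdf \
  mirror sdg sdh \
  log mirror nvme0n1 nvme1n1

# Verify the resulting vdev layout:
zpool status tank
```

With eight 20 TB drives this yields about 80 TB of raw usable capacity (half of the 160 TB total), and ZFS stripes writes across all four mirror vdevs.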
Thanks for reading so far! I'm really looking forward to your input.