Check out my TrueNAS Scale Build

Ebedorian

Dabbler
Joined
May 5, 2023
Messages
10
So, I've been on here since May 2023, and I've had varying success with several iterations of the TrueNAS Scale build I've been working on. I've had issues with fan speed control (see forum post here) and drive errors (posted here), but I haven't gotten much help; I've mostly just been doing a lot of reading.

I could use some guidance on this. This build is intended to provide archive storage for large files and run a few virtual machines. Most of the gear I used is linked in case anyone is interested in using any of it. If most of it is garbage, I'd still like to know, as this has been a learning process for me.


My Current Build includes the following:

Supermicro X11SSL-F
Intel Xeon E3-1270 v5 @ 3.6 GHz
SST-NT07-115X-USA (CPU Cooler)
64GB Micron Crucial DDR4 (4X16GB) 2133MHz UDIMM RAM
Noctua NF-A6x25 PWM x 2
Rosewill RSV-Z3100U 3U Server Chassis
Silicon Power 128GB SSD 3D NAND A55 SLC x 2 (Boot Pool)
SAMSUNG 845DC 800GB PRO MLC x 4 (Storage Pool)
WL4000GSA6472E (4 x OOS4000G and 1 x ST4000NM0033) (Storage Pool)
MCX311A-XCAT 10GB
9200-8I IT (HBA)
DUAL NVMe to PCIe
Intel OPTANE SSD P1600X Series 118GB x 2 (SLOG)
WD 1TB WD Red SA500 NAS 3D NAND Internal SSD (Cache)

My current design is shown below. The support VDEVs are intended for the spinning disks, not the SSDs, and I'm not even sure a separate ZIL is necessary. Running CrystalDiskMark against an iSCSI drive gave putrid results, which is why I added the support VDEVs. This is where I could really use some advice.

[Attached screenshot: pool layout]


Thanks again for any assistance given.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
...and drive errors (posted here), but I haven't gotten much help.
You just posted your log to an old thread; you will usually receive good help here if you provide good information.

I didn't check everything in detail; this is just a first glance to give you a starting point.
WL4000GSA6472E (4 x OOS4000G and 1 x ST4000NM0033) (Storage Pool)
MCX311A-XCAT 10GB
For the hard drives it isn't easy to come up with information. I may be totally off track here, but the OOS4000G appears to be a relabeled Seagate Barracuda, and it seems those may be SMR drives.
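If you want to confirm what is actually behind those white-label drives, smartctl prints the underlying model and serial. A minimal sketch (the device name /dev/sda is just an example, adjust to your system):

  # Show identity info (model, serial, firmware) for one disk
  smartctl -i /dev/sda

The reported model can then be checked against Seagate's published SMR drive lists.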
I wouldn't pin high hopes on that 10G NIC either; see the networking primer.

Running Crystal Disk Mark on an iSCSI drive had putrid results,
Can you elaborate on that please?

Could be the NIC, could be SMR drives, could be false expectations, ...
hence the reason for adding the support VDEVS. But this is where I need the advice.
For pure archival storage, what performance do you expect?

See SLOG use case: you can determine for yourself whether you need a SLOG device. The ZIL exists on your pool anyway; it is not a separate device. This also looks useful. I have no experience with iSCSI, but for pure archival storage I do not think a SLOG is needed.
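A quick way to check whether a SLOG would even matter: temporarily disable sync writes on the dataset backing the iSCSI share and re-run your benchmark. If the numbers barely move, the workload isn't sync-bound and a SLOG won't help. A sketch, assuming a dataset named tank/iscsi (for testing only; don't leave sync disabled on data you care about):

  # Check the current sync setting
  zfs get sync tank/iscsi

  # Disable sync writes for the benchmark run
  zfs set sync=disabled tank/iscsi

  # ... re-run the benchmark, then restore the default
  zfs set sync=standard tank/iscsi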

From the ZFS primer:
As a general rule, L2ARC should not be added to a system with less than 64 GB of RAM, and the size of an L2ARC should not exceed five times the amount of RAM. In some cases, it may be more efficient to have two separate pools: one on SSDs for active data, and another on hard drives for rarely used content.
You would not benefit from a cache vdev in my opinion.
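To put numbers on that rule of thumb: with 64 GB of RAM, the suggested L2ARC ceiling is 5 × 64 GB = 320 GB, so the 1 TB SA500 is well beyond it, and the headers for every L2ARC entry eat into the RAM that would otherwise serve as ARC.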

What I would consider depends on what you are planning to do with your VMs. If you go for striped mirrors on that SSD pool, you can increase performance there (if needed). Three mirror vdevs cost "only" 800 GB more in redundancy than your current RAIDZ2 setup. If you are content with your performance, though, don't touch it; mirrors may be less resilient than RAIDZ2 on top of needing more parity.
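To make the capacity math concrete (assuming the SSD pool is grown to six 800 GB drives, which is what three mirror vdevs imply): RAIDZ2 over six drives yields roughly 4 × 800 GB = 3.2 TB usable with 1.6 TB spent on parity, while three 2-way mirrors yield 3 × 800 GB = 2.4 TB usable with 2.4 TB spent on redundancy, hence the 800 GB difference.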
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
  • Your use case (archive storage for large files) doesn't appear to justify an L2ARC (what you called "cache") VDEV.
  • The SLOG VDEV might be useful depending on the type of VM you are using (iSCSI/block storage).
  • For iSCSI and block storage you want to use MIRROR VDEVs.
  • You do not want to use those HDDs: you want NAS/enterprise drives designed to run 24/7; examples are WD's Red Plus/Pro and Seagate's IronWolf/IronWolf Pro models. If you want to take the risk in home use, look for refurbished drives, but DO NOT buy USED DRIVES.
  • You do not require a boot pool in a mirrored VDEV: slap in just one drive, then back up your config regularly... if something happens, you install TN on a new drive, import your pool and your config, and you are done; if you require HA, please read the following resource.
  • SSDs are more reliable than HDDs, so you do not require the same amount of redundancy... a RAIDZ1 would serve just fine, IF that's the proper config for your needs. Please read the following resource.
  • It appears to me that the fan speed issue has been resolved... it's a pretty common issue, and there is a resource about it.
  • Your desired M.2 adapter card is really overpriced, you can find similar ones at 1/3 of the price.
  • If you tell us more about your use case and your expected performance, we can help you. You should test your pool's performance using fio (see the sketch after this list), and you can use jgreco's burn-in script to get a sample of your drives' maximum performance.
  • I strongly suggest the use of joe's multi_report script for SMART monitoring and automated config backup.
  • A reading of the following resource about 10G is also suggested.
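As mentioned above, a minimal fio sketch for a first pass at pool performance (assumptions: the pool is named tank and /mnt/tank/fio-test is a scratch dataset; adjust paths to your system):

  # Sequential 1M writes; end_fsync ensures buffered data is flushed
  fio --name=seqwrite --directory=/mnt/tank/fio-test \
      --rw=write --bs=1M --size=16G --numjobs=1 \
      --ioengine=libaio --end_fsync=1 --group_reporting

  # Random 4K reads, closer to VM/iSCSI behaviour; note that the ARC
  # will inflate read numbers unless the working set exceeds RAM
  fio --name=randread --directory=/mnt/tank/fio-test \
      --rw=randread --bs=4k --size=16G --numjobs=4 \
      --ioengine=libaio --group_reporting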
 

Ebedorian

Dabbler
Joined
May 5, 2023
Messages
10
Wow, these responses are great. Thanks for the feedback and the links to other resources. I'll try to share some more useful information once I've got TrueNAS Scale up and running again.

I guess I did something else wrong:

[Attached screenshot: error message]


I can't make heads or tails of this.
 


Ebedorian

Dabbler
Joined
May 5, 2023
Messages
10
Thanks, Davvo. I've reinstalled TrueNAS Scale on an older Crucial MX100 256GB SSD. It doesn't appear that either of the Silicon Power 128GB SSD 3D NAND A55 SLC drives I had been using is any good. Thanks for the help, guys. I'll post updates on this build here.

[Attached screenshot]
 

Attachments

  • CrystalDiskMark_20240218132845.png
  • CrystalDiskMark_20240218140211.png

Ebedorian

Dabbler
Joined
May 5, 2023
Messages
10
So, I am somewhat shocked by the SMB share performance. My current build now includes the following:

Supermicro X11SSL-F
Intel Xeon E3-1270 v5 @ 3.6 GHz
SST-NT07-115X-USA (CPU Cooler)
64GB Micron Crucial DDR4 (4X16GB) 2133MHz UDIMM RAM
Noctua NF-A6x25 PWM x 2
Rosewill RSV-Z3100U 3U Server Chassis
Crucial MX100 256GB SSD (boot drive, with 16GB swap)
SAMSUNG 845DC 800GB PRO MLC x 4 (SSD-Pool)
WL4000GSA6472E (4 x OOS4000G and 1 x ST4000NM0033) (OOS-Pool)
MCX311A-XCAT 10GB
9200-8I IT (HBA)

Is the performance due to the type of share, the 10G NIC, or the fact that it was a short file transfer? I'd love to understand this a little better. Let me know if you need additional information to furnish an explanation.

P.S. The PC I ran this from is also connected to my network via a 10G NIC.
I am definitely not using swap.
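One thing I can try to separate the network from the disks is an iperf3 run between the two 10G NICs; if raw throughput looks fine, the bottleneck is elsewhere. A sketch (192.168.1.10 stands in for the server's address):

  # On the TrueNAS box: start a server
  iperf3 -s

  # On the PC: 30-second test with 4 parallel streams
  iperf3 -c 192.168.1.10 -P 4 -t 30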

[Attached screenshot: SMB share transfer results]
 

Attachments

  • CrystalDiskMark_20240218132845.png
  • CrystalDiskMark_20240218140211.png