Need Guidance on TrueNAS iSCSI SAN (Hardware & Configuration)

Joined
May 6, 2023
Messages
6
I have been looking to build centralized storage for my two virtualization hosts, which currently run Proxmox (I recently switched over from ESXi/vCenter). I want to get a solid foundation in place before I start putting VMs on this setup. Before I dive deep into everything, there are a few things I want to point out:
  • I have no real previous experience with TrueNAS, ZFS, or iSCSI
  • I have read the block storage guide posted on these forums
  • I know I am running with consumer-grade SSDs
TrueNAS Hardware
Server & Motherboard:
SV-6028U-TR4T+ / X10DRU-i+
Processor(s): (x2) Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
RAM: 128 GB DDR4 ECC RAM (M393A2G40DB0-CPB)
HBA: Supermicro AOC-S3008L-L8E SAS 8-Port 12Gb/s PCIe
Storage NIC: AOC-STGN-i2S Rev2.1 (10 gbit)
Storage Drives:
  • BOOT (mirrored)
    • (x2) Supermicro SSD-DM064-64GB SATADOM
  • VM Storage (mirrored vdevs w/ mirrored SLOG - iSCSI)
    • (x4) Samsung 870 EVO 2TB SSD
    • (x2) Intel Optane M.2 SSD P1600X 58GB SLOG
  • Media (RAIDZ2 - NFS)
    • (x8) Seagate Exos 4TB (4kn 12Gbps SAS) (7E8 ST4000NM0095)
I currently have the latest stable TrueNAS CORE installed on both of the SATADOMs. Below are a few questions and concerns I have with this build in general; however, I am open to any advice or guidance:
  • Would NFS be more performant than iSCSI for my use case and hardware?
  • Do I really need a mirrored SLOG for my use-case?
  • Write Amplification
    • Configuring ashift, block size, recordsize, etc.
    • Should I deviate from default values?
  • Sparse volume / TRIM / UNMAP support?
    • Ideally, I would like to manage storage space within the hypervisor itself (set alarms, etc.)
    • I know VMFS 6 supported this on ESXi, but I'm not sure about Proxmox
  • Is there anything I am missing or failed to take into consideration?

I seriously appreciate anyone's help. Forgive me if I have forgotten anything or placed this in the wrong topic.

Thanks!
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
  • I have no real previous experience with TrueNAS, ZFS, or iSCSI

  • I know I am running with consumer-grade SSDs
It could work, but in my experience consumer-grade SSDs are a poor fit for hypervisor storage. Once their write cache runs out under intense sustained I/O (PBS backups, or sometimes even VM installations), they can end up slower than spinning rust and cause 30%+ I/O delays.
  • Would NFS be more performant than iSCSI for my use case and hardware?
Given your hardware, I think performance isn't really the deciding factor whether you go with NFS or SMB; it's probably more about your use cases.
SMB and NFS both have pros and cons, and which one you'd use depends on your preferences. I'll highlight their differences:
  • ACLs are easier to manage on SMB than on NFS. You can do the same with NFS, but it requires NFSv4 and Kerberos, which aren't trivial to set up.
  • NFSv3 and below has no real concept of authentication. It simply maps whoever is connecting to the matching local user, which could easily be a privileged user (like root). This may be fine if the only user is you, but if you plan on having a multi-user environment, it's not really secure.
  • NFS mounts do not cross filesystem boundaries. Every dataset you want shared needs its own export, and clients have to mount each one separately (see the sketch below). You can't just export one root share and get access to everything underneath it, which can be really tedious. SMB, on the other hand, has no problem with this.
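To illustrate that last point, here is a minimal sketch from a Linux client, assuming hypothetical datasets tank/media and tank/media/movies are both exported from a host named truenas:

    # Mounting the parent export does NOT expose the child dataset's contents;
    # each exported dataset has to be mounted on its own.
    mount -t nfs truenas:/mnt/tank/media /mnt/media
    mount -t nfs truenas:/mnt/tank/media/movies /mnt/media/movies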
 
Joined
May 6, 2023
Messages
6
@Whattteva, thanks for the response.

It could work, but in my experience consumer-grade SSDs are a poor fit for hypervisor storage. Once their write cache runs out under intense sustained I/O (PBS backups, or sometimes even VM installations), they can end up slower than spinning rust and cause 30%+ I/O delays.

For my current use case, I would only have around 10-15 VMs running, and they're mostly idle. That being said, I don't think this should be an issue; I understand I'm just going to have to run some tests and see what actually happens.
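If it helps my own testing, one way to see that sustained-write behavior is a simple fio run against the VM datastore. This is just a sketch; the file path and size are placeholders and would need to be adapted:

    # Sequential writes large enough to exhaust the SSDs' write cache.
    # Watch whether throughput collapses partway through the run.
    fio --name=sustained-write --filename=/mnt/pve/vmstore/fio-test.bin \
        --rw=write --bs=1M --size=64G --direct=1 --ioengine=libaio --numjobs=1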

Given your hardware, I think performance isn't really the deciding factor whether you go with NFS or SMB; it's probably more about your use cases.

To clarify a bit: I would be running iSCSI for the VM pool (unless there's some improvement to be had from NFS), while using NFS for my Media pool. Both would be attached to my hypervisors as datastores. The Media pool uses NFS simply because that's how I previously had it configured and how I add/remove files.
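For reference, attaching both to Proxmox could look roughly like the snippet below. The storage IDs, addresses, and target name are made up, and the iSCSI LUN would still need something like LVM layered on top before Proxmox can carve VM disks out of it:

    # NFS share for media (hypothetical server address and export path)
    pvesm add nfs media-nfs --server 10.0.0.10 --export /mnt/tank/media --content backup,iso
    # iSCSI target for the VM pool (hypothetical portal and IQN)
    pvesm add iscsi vm-iscsi --portal 10.0.0.10 --target iqn.2005-10.org.freenas.ctl:vmstore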

Again, I appreciate the help.
 

Lix

Dabbler
Joined
Apr 20, 2015
Messages
27
I find NFS simpler to manage for VM storage with Proxmox. How heavy is your VM workload, with regard to those consumer SSDs?

For media sharing, use whatever meets your goals; the trade-offs are laid out well in the post above.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Be sure to read the block storage guide.

 
Joined
May 6, 2023
Messages
6
I find NFS simpler to manage for VM storage with Proxmox. How heavy is your VM workload, with regard to those consumer SSDs?

For media sharing, use whatever meets your goals; the trade-offs are laid out well in the post above.

I don't believe my VM workload is really heavy. I've got Docker containers for things like DNS, Bitwarden, a few game servers, Home Assistant, etc. I will be running Plex, but it's mostly read-only and will use the Media pool for storage (just the media files). Maybe I am overestimating it?
 
Joined
May 6, 2023
Messages
6
Be sure to read the block storage guide.


Yes, I have. I mentioned it in my original post, but had a typo. It is a really good post and I appreciate the thought that went into it. Thanks!

 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
A few things in no particular order (others have already given you great advice):
  • I don't think you need mirrored SLOGs, but it depends more on performance than on reliability.
  • At the very least, for your media pool you should totally deviate from the default values (set recordsize to 1M, assuming large video files; see the example after this list)... ashift should be handled correctly by ZFS without user input.
  • Trimming your SSDs too aggressively will chip away at their endurance; consider disabling auto-trim and instead running recurring trims (via sysctl or a cron job; also shown in the example below).
  • Do NOT even consider using dedup until fast dedup comes out.
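A rough sketch of the recordsize and trim points from the shell, with placeholder pool/dataset names:

    # 1M records suit a dataset holding large media files that are read sequentially
    zfs set recordsize=1M tank/media
    # Disable automatic TRIM on the SSD pool...
    zpool set autotrim=off vmpool
    # ...and run a manual TRIM on a schedule instead (e.g. from a weekly cron job)
    zpool trim vmpool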
 
Joined
May 6, 2023
Messages
6
I don't think you need mirrored SLOGs.

Great! I thought as much, but wanted to make sure I wasn't missing anything.

At the very least, for your media pool you should totally deviate from the default values (set recordsize to 1M, assuming large video files)... ashift should be handled correctly by ZFS without user input.

This is good to know. I am more concerned about the iSCSI pool, but this should help!

Trimming your SSDs too aggressively will chip away at their endurance; consider disabling auto-trim and instead running recurring trims (via sysctl or a cron job).

I am pretty sure auto-trim is disabled by default (at least it is for me), but I will make sure to check this.
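For what it's worth, checking it is a one-liner (the pool name here is a placeholder):

    # Shows whether automatic TRIM is enabled on the pool; it defaults to off
    zpool get autotrim vmpool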

Do NOT even consider using dedup until fast dedup comes out.

I've been seeing a lot of issues reported with dedup and, honestly, I never really intended to use it anyway, as disk space is fairly cheap.

Thanks,

I appreciate the response!
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
This is good to know. I am more concerned about the iSCSI pool, but this should help!
The defaults should be good, but you can read the following thread: it answers a few of your questions.
 