What is the bottleneck of gigabit?

dgrab

Dabbler
Joined
Apr 21, 2023
Messages
26
I understand a 10GbE NIC + Switch is strongly recommended for getting the best out of TrueNAS, but assuming one were to only connect over gigabit, are there any features people shouldn't bother with? If the maximum throughput TrueNAS can provide is bottlenecked to around 100MB/s, do features like L2ARC become redundant?
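(For reference, the rough math behind that figure: 1 Gbit/s ÷ 8 = 125 MB/s of raw line rate, and after Ethernet/IP/SMB overhead real-world transfers usually top out somewhere around 110-117 MB/s, so ~100 MB/s is a conservative working number.)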
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
L2ARC become redundant
L2ARC is redundant for most people anyway; network speed doesn't change that.
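if you want to check for yourself before buying cache drives, look at your ARC hit ratio first (this assumes a reasonably recent TrueNAS where arc_summary is available, which it normally is):

arc_summary    # prints ARC size and hit/miss stats; a hit ratio already in the high 90s means an L2ARC device would mostly sit idle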

live video editing won't work well on gigabit. that's... about it, really.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I understand a 10GbE NIC + Switch is strongly recommended for getting the best out of TrueNAS
Huh? I mean, sure, if GbE is a bottleneck, 10 GbE is the next logical step (don't waste your time, money, or patience with 2.5G), but where's the general recommendation to use 10 GbE?
 

dgrab

Dabbler
Joined
Apr 21, 2023
Messages
26
Huh? I mean, sure, if GbE is a bottleneck, 10 GbE is the next logical step (don't waste your time, money, or patience with 2.5G), but where's the general recommendation to use 10 GbE?
I would have thought common sense dictates if you want to use a high-performance filesystem over the network, the best place to start would be a network setup that can take advantage of those speeds. A bit like with gaming PC builds; the first component you should be thinking about is the monitor, not the GPU/CPU/RAM. No point in buying an expensive RTX 4090 if your monitor can only output 1080p@60Hz.

I will not be upgrading to 10GbE any time soon, so I'm wondering if I should even bother with a lot of the performance features. Like for example, should I even bother with striping? I notice from a lot of signatures on these forums that people tend to prefer simple mirrors to striped mirrors anyway.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
should I even bother with striping?
all data in a zfs pool is striped across every vdev in the pool. this is core to how ZFS works. there is no way to "not bother with striping".
with a single mirror vdev, the data is still striped across all vdevs, there just happens to be only one vdev.
same with a single "Stripe" vdev, though these are very rarely used since you lose many of the advantages of zfs.
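for example (pool and disk names made up):

zpool create tank mirror da0 da1 mirror da2 da3    # one pool, two mirror vdevs; zfs stripes data across both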

there is no reason NOT to future-proof with 10GbE, in general, but the features of a NAS function the same at 10GbE or 1GbE. one is just faster.

there is much reading in your future, padawan of unknown age!
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
high-performance filesystem
ZFS isn't fundamentally a high-performance (if "performance" means "speed") filesystem; it's much more concerned about robustness than speed. That's not to say it performs poorly, but performance isn't foremost among its design criteria.
I'm wondering if I should even bother with a lot of the performance features. Like for example, should I even bother with striping?
I think you're very confused about what "the performance features" are. Striping multiple vdevs isn't primarily a performance feature, although it has that effect particularly with respect to IOPS. Its more-significant purpose is that it's one of only two ways to expand a pool (the other being replacing disks in a vdev, one by one, with larger ones).
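In zpool terms (pool and disk names below are just placeholders), the two options look roughly like:

zpool add tank mirror ada4 ada5    # grow the pool by adding another vdev
zpool replace tank ada0 ada6       # swap a disk for a larger one; once every disk in the vdev is replaced (and autoexpand is on), the vdev grows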
I notice from a lot of signatures on these forums that people tend to prefer simple mirrors to striped mirrors anyway.
Different use cases call for different pool layouts. My main server (in my signature) has two pools of spinners, one with four, six-disk RAIDZ2 vdevs for general storage, and one with three, two-disk mirrors that I'd been using for VM image storage. A second server I built for my parents has a single two-disk mirror; if I need to expand that, I'll add another pair of disks. But in most cases, the reason for striping the vdevs has nothing to do with performance.

For general storage with spinners, GbE isn't going to be much of a bottleneck. Now, if you set up an all-NVMe pool, that's a different story.
 

dgrab

Dabbler
Joined
Apr 21, 2023
Messages
26
all data in a zfs pool is striped across every vdev in the pool. this is core to how ZFS works. there is no way to "not bother with striping".
with a single mirror vdev, the data is still striped across all vdevs, there just happens to be only one vdev.
same with a single "Stripe" vdev, though these are very rarely used since you lose many of the advantages of zfs.

there is no reason NOT to future-proof with 10GbE, in general, but the features of a NAS function the same at 10GbE or 1GbE. one is just faster.

there is much reading in your future, padawan of unknown age!
Yeah uh, I kind of forgot that it worked that way haha. I still confuse zfs stuff with regular raid. Taking that into account, I'm guessing setting up multiple 1+1 mirror vdevs in a single pool isn't much different to RAID10?

As much as I'd like 10GbE, I just cannot justify it right now. My (consumer) motherboard only has two PCIe x16 slots; one is occupied by an HBA card passed through to TrueNAS and the other has a quad-port NIC passed through to pfSense. I don't even have room for a dGPU for transcoding and VMs. My main priority is data preservation and storage, and I don't need maximum speed right now.

I know zfs has all kinds of read/write enhancing features and I was just wondering if gigabit bottlenecked them.
 

dgrab

Dabbler
Joined
Apr 21, 2023
Messages
26
ZFS isn't fundamentally a high-performance (if "performance" means "speed") filesystem; it's much more concerned about robustness than speed. That's not to say it performs poorly, but performance isn't foremost among its design criteria.

I think you're very confused about what "the performance features" are. Striping multiple vdevs isn't primarily a performance feature, although it has that effect particularly with respect to IOPS. Its more-significant purpose is that it's one of only two ways to expand a pool (the other being replacing disks in a vdev, one by one, with larger ones).

Different use cases call for different pool layouts. My main server (in my signature) has two pools of spinners, one with four, six-disk RAIDZ2 vdevs for general storage, and one with three, two-disk mirrors that I'd been using for VM image storage. A second server I built for my parents has a single two-disk mirror; if I need to expand that, I'll add another pair of disks. But in most cases, the reason for striping the vdevs has nothing to do with performance.

For general storage with spinners, GbE isn't going to be much of a bottleneck. Now, if you set up an all-NVMe pool, that's a different story.
You'd be right. I was just getting mixed up between zfs and RAID10 again.
Different use cases call for different pool layouts. My main server (in my signature) has two pools of spinners, one with four, six-disk RAIDZ2 vdevs for general storage, and one with three, two-disk mirrors that I'd been using for VM image storage. A second server I built for my parents has a single two-disk mirror; if I need to expand that, I'll add another pair of disks. But in most cases, the reason for striping the vdevs has nothing to do with performance.
Hmm, you store VM images on spinners? Don't most people prefer SSDs for that?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Hmm, you store VM images on spinners? Don't most people prefer SSDs for that?
Yes, and so do I. But there were a few low-utilization (and large-size) images that it was easier to store on spinners for a while.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
1+1 mirror vdevs
this sounds like RAID terminology. ZFS uses different terminology, and it's often easier to leave RAID terms behind rather than trying to translate between them, particularly because a ZFS mirror and a RAID1 are still fairly different. one big difference is that a RAID rebuild clones the whole drive, empty space and all, while ZFS knows where all its data is and only resilvers actual data.

that said, you are correct in that a multi-vdev mirror pool is similar to RAID10, just as raidz1/2/3 are similar to RAID5/6 (RAID7 doesn't exist).
multi-vdev raidz would be closest to RAID50/60, if you want a RAID name for it, but i dunno. I went almost straight to ZFS.

usually you would just say 2/3/4-way mirror. all mirror vdevs function the same way; all that changes is the number of redundant drives.
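purely as an illustration (disk names made up again):

zpool create tank mirror da0 da1 da2                 # a 3-way mirror: one vdev, any two drives can fail
zpool create tank raidz2 da0 da1 da2 da3 da4 da5     # raidz2, the rough equivalent of RAID6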
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
ZFS terminology is imho much simpler.

Suggested readings for the opener (in this order):
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
Another consideration on the 1 GbE vs 10 GbE decision... Do you actually need more than 1 GbE of bandwidth? There are other ways to go faster, but they're often workload/topology related, or require specialized networking gear. I have 10GbE on my NAS, with a spinner mirror pool for VMs, a RAIDZ2 for the bulk of my stuff, and a tiny SSD pool for local jails because I had the drives lying around and a SLOG didn't seem to help my workloads enough. The SSD pool might hit 500 MB/sec, but has no network shares. The other two pools are doing well to hit 250 MB/sec on reads... Between the DDR3 RAM, decade-old CPU, and pool layouts... not a chance in the world my NAS is going to fill the pipe available.

So why? Because I wanted a flat but robust, uncomplicated network in my home office, and used 10GbE kit is dirt cheap compared to the upcoming/unproven 2.5/5GbE retail solutions. The technology & drivers have 10+ years of soak & test in enterprise/telco space. My NAS may not use all of it, but my other workloads get a boost too. I get crisp, fast, usable RDP, Proxmox VE can migrate VMs from host to host at 600+ MB/sec, etc... Sometimes the fat network pipe is about the fat network pipe, not individual node speed.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
10GbE basically means I know my network is never the limiter. added bonus: now I have extra fiber!
 

MrGuvernment

Patron
Joined
Jun 15, 2017
Messages
268
..My main priority is data preservation..

Then do not virtualize TrueNAS; move to a physical system as soon as possible, or at least keep backups of your data elsewhere and use ECC RAM...
 
Joined
Dec 29, 2014
Messages
1,135
Nah, I'm happy virtualizing. Should be fine with the HBA Card stubbed.
You are doing two different things that are at odds with your stated main priority of data preservation. I think the way a number of people here would put it is "should be fine until it isn't", and the "isn't" could well be catastrophic.
 
Joined
Jun 15, 2022
Messages
674
You are doing two different things that are at odds with your stated main priority of data preservation. I think the way a number of people here would put it is "should be fine until it isn't", and the "isn't" could well be catastrophic.
It's like an Australian visiting Canada in winter; a local gives a friendly heads-up on driving conditions and the response is, "I've never been in a car crash, mate," not knowing about black ice and not giving two snots.
 

dgrab

Dabbler
Joined
Apr 21, 2023
Messages
26
You are doing two different things that are at odds with your stated main priority of data preservation. I think the way a number of people here would put it is "should be fine until it itsn't", and the "isn't" could likely be catastrophic.
There is a distinction to be made between casual, home-user level data preservation, and "pretending to be an important business with mission critical datastores" preservation.

Sure, building an enterprise-grade server with a $500+ Supermicro motherboard, four redundant premium PSUs, ten $5000+ clustered UPSes and sextuple mirrors + PLP for every single SSD I dare to put in the server might be better for data preservation, but I just don't care enough for that.
For me, TrueNAS is just another way for me to store files over the network, except safer and more robust than my hypervisor, even when virtualized.
 