NIC Selection

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Hello,

I'm deciding between the T580-CR and the T520-CR. In a while, I plan to move to 40GbE, so I think the T580 would be the one. I have a couple of questions.

- Is there any newer T6 card for 40GbE?
- Is it really safe/fine to use a used NIC?
- Is there a major specs/performance difference between the T580-CR and the T580-LP-CR, or is it just the form factor? The product brief itself calls the T580-CR an "Ultra High Performance, Dual Port 40GbE Unified Wire Adapter", whereas it calls the T580-LP-CR a "High Performance, Dual Port 40GbE Unified Wire Adapter".
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
QSFP+ is a dead standard; SFP28 (25 GbE) has superseded SFP+. So, no, there are no newer 40 GbE cards, but T6 cards with QSFP28 should be able to do 40 GbE.

Nothing wrong with a used NIC. Drives and PSUs are the parts that wear out.

As for the difference between "High Performance" and "Ultra High Performance", your guess is as good as mine… In any case, you'd be hard pressed to get anything close to 40 Gb/s out of your NAS (many drives in many vdevs, much preferably SSDs; wizardry with tunables; and, most importantly, many clients).
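To put rough numbers on that, here is a back-of-envelope sketch in Python (my own illustrative assumptions, not benchmarks: ~10% protocol overhead and ~200 MB/s of sequential throughput per HDD, with ideal striping):

    # Rough sense of scale: Ethernet line rate vs. what a pool of HDDs might feed it.
    for gbit in (10, 25, 40):
        line_rate_gbs = gbit / 8                  # GB/s before protocol overhead
        usable_gbs = line_rate_gbs * 0.9          # assume ~10% TCP/SMB overhead
        hdd_seq_gbs = 0.2                         # assume ~200 MB/s sequential per HDD
        drives_needed = usable_gbs / hdd_seq_gbs  # data drives, ideal striping
        print(f"{gbit:2d} GbE ~ {usable_gbs:.2f} GB/s usable "
              f"~ {drives_needed:.0f} HDDs' worth of sequential reads")

That prints roughly 6 drives' worth of sequential reads for 10 GbE, 14 for 25 GbE, and 22 for 40 GbE, before counting parity or random I/O.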
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
QSFP+ is a dead standard; SFP28 (25 GbE) has superseded SFP+
Is that for real? But isn't it a fact that SFP+ can only do 10GbE, or maybe 16GbE as I read somewhere?

So, no, there are no newer 40 GbE cards, but T6 cards with QSFP28 should be able to do 40 GbE.
Oh, I see. What about any new variants for SFP+? The most common one I can find from Chelsio is the T520.

Nothing wrong with a used NIC. Drives and PSUs are the parts that wear out.
Hmm. Are you sure? What about the case of a used SAS HBA card?

As for the difference between "High Performance" and "Ultra High Performance", your guess is as good as mine…
I guess I'm learning ;)

In any case, you'd be hard pressed to get anything close to 40 Gb/s out of your NAS (many drives in many vdevs, much preferably SSDs; wizardry with tunables; and, most importantly, many clients).
Of course, I know that, but I want clear knowledge for a better understanding. Do those two cards really differ in performance and specs, or are they just two different form factors?

Also, I've heard a lot about tunables. What are they, and do they really help in achieving such high speeds, assuming the hardware supports them?
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
If this card is for the same server as the HDDs we are talking about here, this machine will not even remotely be able to saturate anything in that ballpark. You need to tell us more about your overall setup and use case if you want the best possible support.

16 Gbps sounds more like Fibre Channel than Ethernet to me.
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
If this card is for the same server as the HDDs we are talking about here, this machine will not even remotely be able to saturate anything in that ballpark. You need to tell us more about your overall setup and use case if you want the best possible support.

16 Gbps sounds more like Fibre Channel than Ethernet to me.
Oh, no, that is a different machine. This one has 14x 10TB Seagate Exos drives, and I'll be upgrading from Base-T to SFP+ or maybe QSFP+.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Well, that is a bit better than 6 HDDs, but it still carries a large risk of not being fast enough. Again, I can only encourage you to be more specific about what you want to do with this machine and the performance requirements you have derived from that use case.
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Well, that is a bit better than 6 HDDs, but it still carries a large risk of not being fast enough. Again, I can only encourage you to be more specific about what you want to do with this machine and the performance requirements you have derived from that use case.
On this machine, two people will constantly edit footage in Resolve and export it back, and I also want it for Time Machine backups. Other than that, just normal copying of data at regular intervals. I'm expecting a speed of at least 2GB/s, which should be enough. Since I would be limited to about 1GB/s on 10GbE, I'm planning to move to a 40GbE setup. At some point, I want to implement a cache, but I guess I'll create a separate thread about that. Please note I'm still new to this whole NAS thing and still experimenting.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Well, you can totally forget about achieving this performance with 14 HDDs. Video editing means random I/O, which is hugely more demanding on the disks than sequential operations. So an HDD that achieves around 200+ MByte/s on sequential reads will likely manage less than 20 MByte/s for random access. But even with sequential access, your HDDs will not be able to reach 2 GByte/s in general. This is not even specific to ZFS or TrueNAS.
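A quick sketch makes the gap concrete (my own illustrative per-drive figures, ignoring parity, vdev layout, and real-world overhead):

    drives = 14
    seq_per_drive_mbs = 200    # optimistic sequential read per HDD
    rand_per_drive_mbs = 20    # typical random-access throughput per HDD

    print(f"sequential, on paper: {drives * seq_per_drive_mbs} MB/s")   # 2800 MB/s
    print(f"random editing load:  {drives * rand_per_drive_mbs} MB/s")  # 280 MB/s

Only the unrealistic paper number clears 2 GByte/s; under the random load that editing generates, you are an order of magnitude short.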

You need SSDs and a sufficient number of PCIe lanes. Please, as per the forum rules, specify your system in detail. I am sure it is not intentional from your end, but you are not exactly overloading us with information here :wink:.
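For a sense of the lane budget involved, a rough sketch (illustrative numbers only: PCIe 3.0 at roughly 1 GB/s per lane, x4 per NVMe drive, x8 for a typical 40 GbE NIC):

    per_lane_gbs = 0.985            # PCIe 3.0, approx. GB/s per lane
    ssds, lanes_per_ssd = 4, 4      # assume four NVMe SSDs at x4 each
    nic_lanes = 8                   # a typical 40 GbE NIC wants x8

    total_lanes = ssds * lanes_per_ssd + nic_lanes
    ssd_bw_gbs = ssds * lanes_per_ssd * per_lane_gbs
    print(f"{ssds} NVMe SSDs + one NIC need {total_lanes} lanes "
          f"and can move ~{ssd_bw_gbs:.1f} GB/s")   # 24 lanes, ~15.8 GB/s

Add the HBA and boot devices on top of that, and the lane count becomes a real constraint on consumer platforms.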
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Well, you can totally forget about achieving this performance with 14 HDDs. Video editing means random I/O, which is hugely more demanding on the disks than sequential operations. So an HDD that achieves around 200+ MByte/s on sequential reads will likely manage less than 20 MByte/s for random access. But even with sequential access, your HDDs will not be able to reach 2 GByte/s in general. This is not even specific to ZFS or TrueNAS.
Oh, damn. So that would not be possible even if I implement a cache?

You need SSDs and a sufficient number of PCIe lanes.
Approximately how many SSDs?

Please, as per the forum rules, specify your system in detail. I am sure it is not intentional from your end, but you are not exactly overloading us with information here :wink:.
OMG. This was not intentional for sure.

Here are the specs for this server

Motherboard: GIGABYTE X299 WU8
CPU: Intel Core i9 10980XE
RAM: Corsair 8x16GB DDR4 3200MHz
Storage: Seagate 14x 10TB Exos (SATA 6Gb/s onboard + LSI 9207-8i)
Network: ASUS 10GbE Base-T, with the same adapter on the client computer (connected via Cat6 cable)
Switch: MikroTik Cloud Router Switch (CRS312-4C+8XG-RM)
PSU: Antec HCP 1300W

I hope this helps you now ;)
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Oh, I see. What about any new variants for SFP+? The most common one I can find from Chelsio is the T520.
There are no new variants for SFP+ because it has reached the end of the road.
Now it's SFP28, QSFP28, QSFP-DD… and then it gets exotic. New cards are 25 GbE (Intel 800 series, for instance), and this is now the standard on embedded boards (Xeon D-1700/2700: Supermicro X12SDV series).

For only two clients, and notwithstanding that you will NOT achieve this speed on the pool, you might be better off going for 25 GbE (Intel XXV710 or Chelsio T6225) rather than 40 GbE. At the least, SFP28 uses the same multi-mode LC patch cables as SFP+, while QSFP+ pushes you either to MPO cables, to single-mode fibre with more expensive modules, or to even more expensive modules for MMF LC.
Of course, 10 GbE is already fast enough.

Oh, damn. So that would not be possible even if I implement a cache?
What do you mean by "cache"?
L2ARC may help for reads, but nothing will help for writes. A SLOG is not a write cache, and for your workload the best setting is 'sync=disabled' and no SLOG. Still, after about 10 s, write speed will always drop to the sustained write speed of the pool.
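To illustrate where the "about 10 s" comes from, a rough sketch (assuming OpenZFS's default of capping in-flight dirty data around 10% of RAM; the exact knob is the zfs_dirty_data_max tunable and varies by platform):

    ram_gb = 128                   # this build's RAM
    dirty_max_gb = 0.10 * ram_gb   # assume dirty-data cap of ~10% of RAM
    ingest_gbs = 2.0               # hoped-for client write speed, GB/s
    pool_gbs = 1.0                 # assumed sustained pool write speed, GB/s

    burst_s = dirty_max_gb / (ingest_gbs - pool_gbs)
    print(f"writes land at {ingest_gbs} GB/s for ~{burst_s:.0f} s, "
          f"then throttle to ~{pool_gbs} GB/s")   # ~13 s with these numbers

Once that buffer fills, ZFS throttles incoming writes to whatever the disks can sustain.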
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Thanks for the details. :smile:

If at all possible from a workflow perspective, I would suggest forgetting about editing off the NAS directly. Instead, give the editing workstation a nice big NVMe SSD and a 10 Gbps NIC for transferring the raw material and the finished result.

A similar approach has worked nicely for me when it comes to VMs. I run them locally on a single SSD and have regular backups and snapshots to my NAS. The backups end up on a RAIDZ2 pool, which gives enough protection for my requirements.
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
There are no new variants for SFP+ because it has reached the end of the road.
Now it's SFP28, QSFP28, QSFP-DD… and then it gets exotic. New cards are 25 GbE (Intel 800 series, for instance), and this is now the standard on embedded boards (Xeon D-1700/2700: Supermicro X12SDV series).
Oh, wow. I looked at that motherboard; it definitely looks exotic ;)

For only two clients, and notwithstanding that you will NOT achieve this speed on the pool, you might be better off going for 25 GbE (Intel XXV710 or Chelsio T6225) rather than 40 GbE. At the least, SFP28 uses the same multi-mode LC patch cables as SFP+, while QSFP+ pushes you either to MPO cables, to single-mode fibre with more expensive modules, or to even more expensive modules for MMF LC.
Umm, any reason to go for 25GbE instead of 40GbE QSFP+?

What do you mean by "cache"?
L2ARC may help for reads, but nothing will help for writes. A SLOG is not a write cache, and for your workload the best setting is 'sync=disabled' and no SLOG. Still, after about 10 s, write speed will always drop to the sustained write speed of the pool.
Damn. So will my NAS never be that fast? I thought adding a cache drive like an Intel Optane might help me achieve it. I feel heavy now ;(
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Umm, any reason to go for 25GbE instead of 40GbE QSFP+?
SFP28 is a single 25 GbE link, and cages are supposed to be backwards compatible with SFP+ modules. SFP28 modules use the same OM4 patch cables as a typical 10 GbE installation, so 25 GbE is a relatively easy upgrade from 10 GbE: replace the NICs and modules, without tearing down the walls to pull new wires.
QSFP+ is a package of four 10 GbE links, in a larger form factor than SFP+/SFP28. Please spend some time browsing FS.com for QSFP+ modules using (a) MMF/LC, (b) MMF/MPO, and (c) SMF/LC to understand the options, and the associated degrees of financial pain.

Damn. So will my NAS never be that fast? I thought adding a cache drive like an Intel Optane might help me achieve it. I feel heavy now ;(
ZFS can work wonders, but miracles are not on offer.

There are many threads here about "video editing builds", some of which have input from actual users. Those who actually edit on the NAS typically have an all-NVMe EPYC build (or a QNAP NAS), because EPYC has more lanes than Xeon Scalable. For just two users and 140 TB worth of drives (in an unknown layout), the economics are unlikely to work out for you. But @ChrisRJ 's suggestion to have some large and fast NVMe storage for local editing on the clients, plus 10 GbE links (you already have the copper infrastructure) to the NAS for archiving the rushes and the finished work, may fit.
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Guys, I'm planning to build a mini NAS with 8 bays. Working on the parts now.

I'm looking at the Mellanox ConnectX-5 and the Intel XL710-QDA2. Which one should I go for? I haven't used either of these before.

Thanks
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
The advice for 10 GbE is: Chelsio, Intel, or good old Solarflare.
The first two should apply to faster speeds as well.
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
The advice for 10 GbE is: Chelsio, Intel, or good old Solarflare.
The first two should apply to faster speeds as well.
I understand that, and I even bought a Chelsio T520-CR, but the thing is it only works on Catalina, and only on one port at that. The driver doesn't work on Big Sur and newer. However, Mellanox and Intel drivers are present on newer macOS systems, so I need a new NIC. So, back to the question: Mellanox ConnectX-3/X4/X5 or Intel X710?
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
I understand that, and I even bought a Chelsio T520-CR, but the thing is it only works on Catalina, and only on one port at that. The driver doesn't work on Big Sur and newer.
Hackintosh user? I've found that the Chelsio driver keeps working in Big Sur and later.

However, Mellanox and Intel drivers are present on newer macOS systems, so I need a new NIC. So, back to the question: Mellanox ConnectX-3/X4/X5 or Intel X710?
Ha, that's a different question! Client-side, use whatever your desktop OS supports. Server-side, use whatever your server OS supports—which, for TrueNAS Core, should boil down to Chelsio and Intel above anything else.
You don't need to have matching NICs on both ends, only matching optics in each NIC (independently on each end).
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Hackintosh user? I've found that the Chelsio driver keeps working in Big Sur and later.
Nope. I have a real MacPro7,1.

Ha, that's a different question! Client-side, use whatever your desktop OS supports. Server-side, use whatever your server OS supports—which, for TrueNAS Core, should boil down to Chelsio and Intel above anything else.
Oh, i see.

You don't need to have matching NICs on both ends, only matching optics in each NIC (independently on each end).
Are you sure? Wouldn't it be really good to use the same NIC at both ends? Or does it not matter at all?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
the thing is it only works on Catalina
Why would you ask here about NICs for your Mac?
Wouldn't it be really good to use the same NIC at both ends?
I don't see any reason at all to expect so, particularly since you wouldn't ordinarily have a NIC at both ends of an Ethernet connection: you'd have a NIC at one end and a switch at the other. And it'd really be quite unusual for both of those to be from the same manufacturer.
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Why would you ask here about NICs for your Mac?
Is that not allowed? I mean, my Mac has PCIe slots, and I'm upgrading my entire network to SFP+ this year. I didn't know it was against the rules to talk about the client side.

I don't see any reason at all to expect so, particularly since you wouldn't ordinarily have a NIC at both ends of an Ethernet connection: you'd have a NIC at one end and a switch at the other. And it'd really be quite unusual for both of those to be from the same manufacturer.
What if someone uses a direct connection without involving a switch? Does the same still apply?
 