10GbE NIC for FreeNAS: Intel XXV710-DA2 or Chelsio T520-SO-CR

Joined
Dec 29, 2014
Messages
1,135
Yes, it's freaking RIDICULOUS! lol. Why the hell!?
Initially I only had two hosts (my FreeNAS boxes) that had 40G NICs, which means any testing involves both of them. Yes, the switch is involved, but it isn't capable of running an iperf test (at least not to my knowledge). When I couldn't get past 25G on iperf, I thought it might be a hardware limitation of some sort. I'd have to look up my old posts on that, but it was kind of strange, and one of the hosts had less memory than the other. There also used to be a difference in the CPUs. When I upgraded all my hosts earlier this year (vacations cancelled, so money available for upgrades!) and was still getting odd results, I decided to dig further.

One of the T580 cards was only kind of working. It wouldn't always generate errors in iperf, or I would have suspected hardware much earlier. I finally decided to bite the bullet and replace one of the T580's, and POOF! 39.6G throughput with iperf.

My live-fire testing is doing vMotion between FreeNAS and local drives in the machines. I also got stymied by the local drives I was using. I tried doing mirrors, but was still getting worse results than the old machines. When I used smaller drives in a RAID-Z2 (like the one old machine that had any usable local disk space), POOF again. I can actually see reasonably sustained traffic at ~28G from FreeNAS to the local host drive. Writes are 4-6G, but my spinning drives aren't all that fast. The Optane SLOG made a world of difference there. I am still intermittently playing with tunables to try and bring that up, but it meets my needs right now.
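For anyone who wants to reproduce that kind of host-to-host test, here's a minimal sketch of driving iperf3 from Python and pulling the number out of its JSON output. The hostname and stream count are placeholders, not the actual setup described above:

import json
import subprocess

# Assumes iperf3 is installed on both ends and "iperf3 -s" is already
# running on the server side. SERVER is a hypothetical hostname.
SERVER = "freenas-a.local"
STREAMS = 4        # parallel TCP streams
SECONDS = 10       # test duration

result = subprocess.run(
    ["iperf3", "-c", SERVER, "-P", str(STREAMS), "-t", str(SECONDS), "-J"],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)
bps = report["end"]["sum_received"]["bits_per_second"]
print(f"Throughput: {bps / 1e9:.1f} Gbit/s")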
 

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
...moving to a 10GbE data pipe will likely uncover the next slowest link in the chain [between] the NAS [array] & your computer.

The lower latency of 10GbE is noticeable for tasks like rsync, where L2ARC can boost metadata throughput & significantly shorten the time required to back up. This metadata exchanges at pretty low rates (~10's of MB/s, if that), so it's not a question of pipe width. Perhaps due to metadata requests?

With the advent of FreeNAS 12.x, special VDEVs have the potential to significantly boost system responsiveness, making 10GbE attractive even in a single-VDEV pool. Ditto persistent L2ARCs. Granted, as usual the use case is important and not every application will benefit from special VDEVs.


What kinds of throughput are you expecting to be the upper range for 8x good SAS-2 7200rpm drives in a RAIDZ2 array..?
(Large files (video, ISO, sparse, etc.), one task at a time)


To me?? It seems ridiculous to buy, say, 3x 6-8TB drives to TRY to get better IOPS and throughput... that's almost what I find deals on GOOD enterprise 4TB NVMe SSDs for... given that I stay on the hunt for them. The math just does NOT work out.

Granted, as a backup of your backup..? Of COURSE use cheap TBs... but for performance..? The math just isn't there.

Say you pay what 6TB HGST drives actually cost, new: $150 each... but you mirror them to improve some performance... okay, let's say a 2-way mirror. You're talking $300 per 6TB, or $50 per TB.

I occasionally get NEW 4TB NVMe drives for $400 (I've paid $370 even)... as IF there's only a factor of 2 difference! You know..? Granted, the HBA is more, and, as someone in data recovery, I do get that the reliability of the drives isn't as well known yet. For that matter, I'd really like to have an LTO drive... in fact, if I could find a few people to work out a deal where we all go in on one together and mail it to the next person every month...? So, with 4 people, you do differentials every 3 months...? That'd be AWESOME. :D And I expect I could make a business out of that, here, in LA. People def. want cheap ways of doing backups to something impervious to physical drops -- where the mechanism is independent from the data.
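Putting the cost math above in one place (using only the prices quoted in this post), a quick back-of-the-envelope sketch:

# Rough cost-per-TB comparison with the prices quoted above.
hdd_price, hdd_tb = 150, 6      # new 6TB HGST
nvme_price, nvme_tb = 400, 4    # enterprise 4TB NVMe deal

mirrored_hdd_per_tb = (2 * hdd_price) / hdd_tb   # 2-way mirror -> $50/TB
nvme_per_tb = nvme_price / nvme_tb               # -> $100/TB

print(f"Mirrored HDD: ${mirrored_hdd_per_tb:.0f}/TB")
print(f"NVMe SSD:     ${nvme_per_tb:.0f}/TB  (~{nvme_per_tb / mirrored_hdd_per_tb:.0f}x the mirror)")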

Thoughts...?

Also, what do I need to get a FreeNAS unit working with NVMe...? Just an HBA that sees the drives that FreeBSD has in the HCL..?
 

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
Initially I only had two hosts (my FreeNAS boxes) that had 40G NICs, which means any testing involves both of them. Yes, the switch is involved, but it isn't capable of running an iperf test (at least not to my knowledge). When I couldn't get past 25G on iperf, I thought it might be a hardware limitation of some sort. I'd have to look up my old posts on that, but it was kind of strange, and one of the hosts had less memory than the other. There also used to be a difference in the CPUs. When I upgraded all my hosts earlier this year (vacations cancelled, so money available for upgrades!) and was still getting odd results, I decided to dig further. One of the T580 cards was only kind of working. It wouldn't always generate errors in iperf, or I would have suspected hardware much earlier. I finally decided to bite the bullet and replace one of the T580's, and POOF! 39.6G throughput with iperf. My live-fire testing is doing vMotion between FreeNAS and local drives in the machines. I also got stymied by the local drives I was using. I tried doing mirrors, but was still getting worse results than the old machines. When I used smaller drives in a RAID-Z2 (like the one old machine that had any usable local disk space), POOF again. I can actually see reasonably sustained traffic at ~28G from FreeNAS to the local host drive. Writes are 4-6G, but my spinning drives aren't all that fast. The Optane SLOG made a world of difference there. I am still intermittently playing with tunables to try and bring that up, but it meets my needs right now.

I'm assuming you're talking about an enterprise environment for the 40G stuff..? If not, what did it take (in hardware) to get the throughput..? Like, a series of disk shelves..? Or NVMe..?

Did the T580 ever work right..?
Things like Vmotion I don't get, sorry. :-o

...sustained traffic at ~28G from FreeNAS ... (was this on SFP28...? Or do you mean after overhead from the QSFP+..?)

... Writes are 4-6G, but my spinning drives aren't all that fast.

4-6Gb I presume..? As in, 500-750MB/s..? Or, to NVMe..? Or in iperf (synthetic, if it's correct to call it synthetic)..?

And if you really do mean 4-6GB/s ... from SPINNING drives, then what the hell do I have to worry about!? :) lol I'm good with 4GB/s :-D



(OFF Topic)
I just WISH I had someone like you nearby to look at my shit for a couple of hours one day. It's a home "lab" (esp. right now) ... based on physical location ... and being a very small "network" (in the strictest sense) ... but paying someone $200 ... just seems ridiculous. I'm always conflicted about the "rates" people charge. In some respects they deserve it; they are experts, after all. But it cost me REAL money to have excellent data recovery hardware, and I'm always hooking people up... not to mention it takes some time to learn it. Maybe the real statement is that I focused on the wrong knowledge! lol.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
What kinds of throughput are you expecting to be the upper range for 8x good SAS-2 7200rpm drives in a RAIDZ2 array..?
I run a single Z3 VDEV here. Large files come across to my computer at about 170MB/s on average (range: 120-190MB/s), though short bursts can be faster. Writing to the array can be faster thanks to the Optane: 300MB/s is possible, at least until that cache is full (10+GB?). Then it slows down significantly, to less than 100MB/s. My host computer is using a reasonably fast SSD, so that's unlikely to be the cause of the slowdown. The ancient Myricom PCIe card, on the other hand (PCIe 2.0 x4), may be a bigger issue. :)
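As a rough upper bound on the 8x SAS-2 RAIDZ2 question: a common rule of thumb puts streaming throughput at roughly (drives minus parity) times the per-disk sequential rate, and real pools usually land well below that. A quick sketch, using typical 7200rpm figures rather than measurements:

# Rule-of-thumb streaming ceiling for a RAIDZ vdev:
# (number of drives - parity) * per-disk sequential rate.
# Real-world results are usually well below this (recordsize, fragmentation,
# CPU, and protocol overhead all take a cut).
drives, parity = 8, 2                  # 8-wide RAIDZ2
per_disk_mb_s = (120, 200)             # assumed SAS-2 7200rpm sequential range

low = (drives - parity) * per_disk_mb_s[0]
high = (drives - parity) * per_disk_mb_s[1]
print(f"Theoretical streaming ceiling: ~{low}-{high} MB/s")   # ~720-1200 MB/s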

My main application is bit-rot-proof archival storage. While I appreciate performance, I balance that with my desire to keep the rig under 100W. So a single Z3 VDEV with 10TB drives it is. You can always set up a fast mirror with the NVMe drives as scratch, though using them in a direct-attach config usually makes more sense. As for LTO, you're talking to someone who used to do reel-to-reel backups on a VAX 750 / PDP-11. I'm not going back! :smile:

What backup strategy makes the most sense is very use-specific. If the business depends on said backups, then something faster like off-site disk-based arrays or even off-site FreeNAS backup may be more interesting than LTO. For example, you could get together with a limited number of folk who all agree to set aside 2-10TB on their machines for another in the ring to use as an encrypted backup for critical data. Cost is minimal and data is available pretty quickly (sneakernet). Not the same as Backblaze re: resiliency, but likely faster recovery (unless your backup is also wiped out).
 

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
Large files at ~170MB/s (range: 120-190MB/s),
My host has a decent SSD (unlikely culprit). The ancient Myricom, on the other hand (PCIe 2.0 x4), may be a bigger issue. :)

EXACTLY why I'm upgrading my NIC! I've been getting similar speeds. (where's the vomit emoji..?)

bit-rot-proof archival storage that's under 100W = single Z3 VDEV with 10TB drives it is.
You can always set up a fast mirror with the NVMe drives as scratch, though using them in a direct-attach config usually makes more sense. As for LTO, you're talking to someone who used to do reel-to-reel backups on a VAX 750 / PDP-11. I'm not going back! :smile:

Dude, I know you know what LTO is. Come ON. We're talking 30TB on a SINGLE TAPE! Not 4 MEGABYTES, or whatever DEC was doing back in 1977. :tongue:



What backup strategy makes the most sense is very use-specific.

From the reference to using ... actually USING a VAX 750 ... I'm assuming you'll get the reference to Dr. Demento ... That statement about 'backup strategy makes the most sense' makes me think: Copies, copies, copies, copies... (Not poppies, poppies, poppies)...


Off-site won't work for me. As I said, I do data recovery, so my system has to not be a fiasco when it's time to create an array of images hosted on a local server to emulate however many disks are in the array being recovered. If it's 8 drives... I may set up my system in a T320 I have (I have units not in use right now)... if it's more than 8, then some will just HAVE to be dd images or something... because you cannot have precariously strewn drives situated dangerously for... how long? Exactly. How long? I don't know. I can't know. However long it takes! You know...?

If I get a 32 drive array,
#1, it's REAL MONEY.
#2, it gets cumbersome. And if it's 50TB or something... (or... 150TB?) it'd take a while even if it doesn't have to recover from a degraded state. Let alone the operations it has to do while degraded.

Nah, for as long as I work on and in this business ... I need local solutions.



Saturating 10GbE is the goal for now... knowing how to fix these issues (or at least knowing how to figure out how to fix them, as I can do with Macs) would be a huge help.

After I figure out the 10GbE ... I need to get an SFP28 FreeNAS system working well with NVMe drives. Then, regardless of what I'm working on, I at least have the infrastructure as a platform.
 

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
I'd say I prettied that picture up...

But 3TB per HOUR can't help but be sexy. I mean...dayum.
[Attached image: LTO is BAD. ASS!!.png]
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Allow me to quibble a little. I wholly agree that moving to a 10GbE data pipe will likely uncover the next slowest link in the chain from the data on the NAS disk to your computer.

However, in my experience, the lower latency associated with 10GbE is noticeable for tasks like rsync, where a hot L2ARC can significantly boost metadata throughput and hence significantly shorten the time it takes for my NAS content to get backed up. This metadata exchanges at pretty low rates (in the 10's of MB/s, if that), so it's not a question of pipe width. Perhaps rsync is very inefficient re: how much metadata it requests?

With the advent of FreeNAS 12.x, special VDEVs have the potential to significantly boost system responsiveness, making 10GbE attractive even in a single-VDEV pool. Ditto persistent L2ARCs. Granted, as usual the use case is important and not every application will benefit from special VDEVs.
I didn't mean to imply that setting up 10G networking isn't worthwhile, because it most definitely is! :smile:

I get SMB transfers of 400-600MB/s on my 10G network. Which is great, but nowhere near line rates. The fact is, none of the real-world applications I use come close to taxing my 10G network. If only I had an all-flash pool...

You mention rsync, which I use quite a bit, too. I think you're right; a lot of what slows it down so much is all of the metadata analysis it does to determine which data have changed and therefore need to be transmitted. And it usually uses ssh, which has its own set of issues; encryption in particular. After experimenting and tweaking, I get about 800-1000Mb/s with standard rsync and roughly 2Gb/s when using modules. It's just not as fast as I'd like it to be.
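For anyone who hasn't used rsync "modules": it just means talking to an rsync daemon directly instead of tunneling over ssh, which skips the encryption overhead. A minimal sketch, with a hypothetical module name and paths (FreeNAS exposes the same settings through its Rsync service configuration):

# /etc/rsyncd.conf on the destination -- hypothetical module
[backups]
    path = /mnt/tank/backups
    read only = no
    uid = nobody
    gid = nobody

# client side: the double colon selects the daemon/module path instead of ssh
rsync -av --delete /mnt/pool/dataset/ nas.local::backups/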

I recently updated my "Github repository for FreeNAS scripts, including disk burnin and rsync support" resource, referring there to a new rsync Github repository I recently created, with two utility scripts:
 

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
I get SMB transfers of 400-600MB/s on my 10G network...If only I had an all-flash pool...

Exactly part of why I want to do it. :)

You mention rsync, which I use quite a bit, too. I think you're right; a lot of what slows it down so much is all of the metadata analysis it does to determine which data have changed and therefore need to be transmitted. And it usually uses ssh, which has its own set of issues; encryption in particular. After experimenting and tweaking, I get about 800-1000Mb/s with standard rsync and roughly 2Gb/s when using modules. It's just not as fast as I'd like it to be.

I'd still assume, because rsync can't get the full performance of the array in the first place, that it's either constrained by IOPS (which NVMe largely solves) or CPU..? If it is a matter of calculating... is there a way you can retain a changelog for rsync to use for subsequent differential or incremental runs..?

Also, where you say 800-1000Mb/s ... is that really Mb or MB..? And 2Gb/s or 2GB/s..?

I've wondered if there're copy or backup apps which use a hash to compare files ... so even if the name or date opened changes, it knows that it's truly the same.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
I'd still assume, because rsync can't get the full performance of the array in the first place, that it's either constrained by IOPS (which NVMe largely solves) or CPU..? If it is a matter of calculating... is there a way you can retain a changelog for rsync to use for subsequent differential or incremental runs..?
Not that I'm aware of.

Also, where you say 800-1000Mb/s ... is that really Mb or MB..? And 2Gb/s or 2GB/s..?
'b' is bits, 'B' is bytes. I get 800-1000 megabits/sec with direct rsync calls and 2 gigabits/sec using modules. I wish it were bytes instead of bits!
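In byte terms, those figures work out roughly as follows (divide by 8):

# Convert the rates above from bits to bytes.
for label, mbit_s in [("rsync over ssh (low)", 800),
                      ("rsync over ssh (high)", 1000),
                      ("rsync via modules", 2000)]:
    print(f"{label}: {mbit_s} Mb/s = {mbit_s / 8:.0f} MB/s")
# 800-1000 Mb/s is ~100-125 MB/s; 2 Gb/s is ~250 MB/s.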

I've wondered if there're copy or backup apps which use a hash to compare files ... so even if the name or date opened changes, it knows that it's truly the same.
There probably are... but you still have to calculate the hash.
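For what it's worth, the hash idea is simple to sketch; the catch, as noted, is that every byte still has to be read to compute it. The paths and chunk size below are arbitrary, not a recommendation of any particular tool:

import hashlib
from pathlib import Path

def file_digest(path: Path, chunk: int = 1024 * 1024) -> str:
    """Hash file contents so a rename or date change doesn't matter."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def changed_files(src: Path, dst: Path):
    """Yield files under src whose content differs from (or is missing in) dst."""
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        twin = dst / f.relative_to(src)
        if not twin.is_file() or file_digest(f) != file_digest(twin):
            yield f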
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
EXACTLY why I'm upgrading my NIC! I've been getting similar speeds. (where's the vomit emoji..?)
That kind of performance is to be expected on a single-VDEV system.... want more speed... go with multiple VDEVs or a faster pool. My NAS CPU is still whiling away the days drinking tea and eating crumpets. I have yet to see it go above 10% utilization.

I doubt a faster NIC on this end will change that much, and the available selection in Mac-land is limited. Also, even though the existing Myricom card in my external enclosure operates at PCIe 2.0 x4, I have yet to reach anywhere near those limits (2GB/s). The best my array has managed thus far is about 600MB/s.

If I were to buy a new NIC now, I'd likely go for the Sonnet Thunderbolt 3 system, which allows SFP+ transceiver use. The main downside over my current external PCIe enclosure is that the Thunderbolt cable is fixed-length, not detachable, etc.

Dude, I know you know what LTO is. Come ON. We're talking 30TB on a SINGLE TAPE! Not 4 MEGABYTES, or whatever DEC was doing back in 1977. :tongue:
IIRC we had something like a DEC TU45 @ 40MB per tape... to back up the ~340MB HDD that allegedly needed 50A @ 460VAC 3ph to start up (!!!). We needed multiple tapes per backup and that drive was intimidating. Took a while.

My main issue with tape is speed. For long-term archival it's fine but I imagine in many business uses something faster is needed to keep up with snapshots, etc. The other thing is, if your business depends on it, you really need two tape drives - one local and one off-site. Having a tape and no drive is a bit like a smoker with a cigarette and no lighter. So close... and yet so far. Constantly transporting tape drives is also unlikely to be helpful re: its life.

This is why I rely on Mobius 5 arrays from Oyen Digital for external backup. Inexpensive, only 1 has ever let me down, and they are easy to transport because even the PS is in that enclosure. With multiple interfaces (USB 3, eSATA, FireWire 800 :smile:), these arrays are easy to attach to just about anything. I carry mine in a padded HPRC case. They even offer a non-RAID USB-C version now, but I prefer the older enclosure due to its hardware RAID option. (@jgreco likely will wag his fingers at my use of RAID5 with a JMicron SATA multiplier / hardware RAID controller... but it has worked flawlessly, as I periodically do health checks / comparisons between the NAS and the backup arrays)

Mobius 5 array rebuilds upon drive replacement happen silently and "just work". I wish there were better tools to monitor said rebuilding but between the status lights blinking and array activity, it's pretty easy to figure out when the array has finished.

Crucially, the Mobius DAS arrays offer file-level access in a DAS box that my Mac can natively interact with. No special drivers, no SOFTRAID, etc. to get in the way between me and my data. That in turn makes for faster backups as you can target what matters most first, followed by everything else. Even better, should a drive fail, it is tool-less-ly changeable. Plus the enclosure has a buzzer to tell you when a drive has failed SMART and a button to snooze said buzzer as you replace a drive.

If I get a 32 drive array,
#1, it's REAL MONEY.
#2, it gets cumbersome. And if it's 50TB or something... (or... 150TB?) it'd take a while even if it doesn't have to recover from a degraded state. Let alone the operations it has to do while degraded.
The hardware to host large arrays is cheap on eBay, the drives not so much. Those 8xx-series chassis from Supermicro are inexpensive, offer bullet-proof, redundant power supplies, etc. However, such a system will make your electric company very happy, may stress your home / business AC system, and may drive your spouse up the wall (with noise and industrial appearance) should you host it in a common space at home.

I would consider going used on the drives or shucking external enclosures whose innards haven't been converted to SMR yet. I like goharddrive.com, delivering NAS-grade drives with a good warranty (backed by Goharddrive) and a price point that made sense to me. Add a spare or two, and the probability of the pool becoming terminally degraded becomes somewhat remote. Helium-filled drives are quiet, consume little power, and likely will last a very long time.

I use the same drive capacity in both the NAS and the backup arrays. The drive capacity in the arrays is matched to the usable capacity in the NAS (i.e. 80% of pool capacity). So an 8-drive Z3 is backed up to a 5-drive RAID5. That limits the spares I have to keep.
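A quick check on that sizing rule with the 10TB drives mentioned earlier (just the arithmetic; ZFS metadata overhead is ignored):

# Backup sizing: RAID5 backup capacity vs. 80% of the Z3 pool's data capacity.
drive_tb = 10
z3_usable = (8 - 3) * drive_tb * 0.8     # 8-drive RAIDZ3 at the 80% fill rule
raid5_capacity = (5 - 1) * drive_tb      # 5-drive RAID5 backup array
print(z3_usable, raid5_capacity)         # 40.0 vs 40 -> same drive size works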
 
Joined
Dec 29, 2014
Messages
1,135
I'm assuming you're talking about an enterprise environment for the 40G stuff..? If not, what did it take (in hardware) to get the throughput..? Like, a series of disk shelves..? Or NVMe..?
Take a look at my signature. It details all the stuff in my lab. I work in IT, but it is also a hobby/passion. That is why my home lab is better than a number of companies where I have worked. It keeps me off the street and out of bars during the day, and my wife tells me that is a good thing!
Did the T580 ever work right..?
I chucked the cranky one and replaced it. The replacement works great. Iperf between the two hosts is 39.4-39.6G.
Things like Vmotion I don't get, sorry
ESXi/vCenter thing. Moving the storage of the virtual machines while they are running. There are probably more scientific/better benchmarks, but it is a good real-world test case for me. While I'd love to get my write speed over 10G, it really doesn't matter that much. I can vMotion the 4 VMs I run all the time in 10-12 minutes, which is more than fast enough for me. Under normal circumstances the RAM in FreeNAS buffers most of the writes, so it performs great for me.
 

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
'b' is bits, 'B' is bytes. I get 800-1000 megabits/sec with direct rsync calls & 2 gigabits/sec using modules. I wish it were bytes...!

If I didn't know the difference, I probably wouldn't have asked. :tongue:

People can be carefree with their choice of case ... even experts when it's the exact topic discussed.
Given the topic's 10GbE ... 800-1000 is definitely ambiguous.
Had you said 8-10Gb/s ... I'd have known what you meant irrespective.

I'm really surprised at the performance though; less than Gig-E often ...? What do you attribute that to..?
(Or is that only RSync... and on average you get ~500MB/s..? I'm waiting for coffee to kick in soon! ... I hope!)
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
People can be carefree with their choice of case ... even experts when it's the exact topic discussed.
Given the topic's 10GbE ... 800-1000 is definitely ambiguous.
Had you said 8-10Gb/s ... I'd have known what you meant irrespective.
True, it's easy to get the case mixed up with these units. :wink:

I thought I was very careful to use the correct case and units to convey the information unambiguously. Mb/s = Megabits per second, Gb/s = Gigabits per second, MB/s = Megabytes per second, etc. Did I err somewhere?

I could have said 0.8-1Gb/s instead of 800-1000Mb/s. But either is correct.

I'm really surprised at the performance though; less than Gig-E often ...? What do you attribute that to..?
(Or is that only RSync... and on average you get ~500MB/s..? I'm waiting for coffee to kick in soon! ... I hope!)
Quite a bit of discussion about this upthread: disk I/O becomes the bottleneck; rsync has to do quite a bit of reading and calculating before it transmits anything; etc.
 

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
That kind of performance is to be expected on a single-VDEV system

Believe me, the whole reason I'm working on this project is because people claim to get 800+ MB/s with 8x HD ... via single VDevs.

... or a faster pool.

If by faster pool you mean more drives..? (Versus SATA-SSD or NVMe-SSD) ...
in which case you'd have said faster media vs. pool, which I'd think means the kind of config, i.e., mirrored VDevs... which I do intend to do, but not without first figuring out the max performance I can get from a RAIDZ2 array comprised of 8x SAS-2 7200rpm drives ...


I have yet to see it go above 10% utilization.

I've seen posts on benchmarking where saturating 10GbE via SMB & perhaps NFS on multiple threads maxed ALL of an Atom's cores.
(I'll look for the thread: I'd NEVER have expected that, but it seems a reality. Vs. an internal HD via a local protocol, which is NOTHING to a CPU.)

Limited selection in Mac-land: Currently using a Myricom PCIe card in an external enclosure (at PCIe 2.0 x4) which gets ~2GB/s

THIS is my wheelhouse. I owned a Mac repair // retail store for 7 years. :)
I have several ATTO ThunderLink boxes. Are you near LA..? Anyway, if you'd like, and after I resolve my FreeNAS issues... I can give you results. Thus far, peer-to-peer with my 2019 16" MBPr (the 4TB!! :) SSD gets 2800MB/s) ... over the ATTO with SFP+ through my D-Link switch (not enterprise level gear, obviously) to my i7-8700 Win10 machine which also has NVMe SSD (Evo 970 2TB, WD Black 1TB ... tested writes to both) ... with another ATTO PCI card (older also) ... I got about 500-600MB/s.

When I get my Chelsio card it's going in that machine ... I also just purchased another MB+CPU which has 10GBase-T integrated.
In fact, it was the ONLY motherboard I could find which didn't have a huge water block (not putting water near my $17,000 Data Recovery PCI cards)... which has: 10GbE built-in, 2x TB3 built-in, WiFi built-in, USB 3.2 built-in, and ... uses Intel CPUs which have 44-48 PCIe lanes!!

Gigabyte X299X - $600 !!
i7-9800x (44 PCIe lanes) - $350 ... (The cheapest 48 lane Proc was $700! Obviously, those last 4-lanes must wait until they're less than $87.50 each!)

But, I wanted slots & lanes! Now, I can use the 2x SSD7120 x16 HBAs via FreeNAS which'll support 8x NVMe x4 U.2 SSDs! :-D


Once I receive my Chelsio, I'll give you a report on performance ... I'm assuming you're using a laptop..?
You can also just BUY a PCIe -- TB3 enclosure and throw any OS X compatible SFP+ card in it. You don't HAVE to use pre-fabbed stuff...

I've seen great deals on older gear, which you'd be much better off 'shucking' the Gen-1 SFP+ card perhaps than with HDDs! :) ... as ATTO wants over a THOUSAND (in fact, they want close to $2k for a dual TB3 to QSFP+ and like $1300 for the TB3 to SFP28 if I'm not mistaken).

The reason I'd 'shuck' an enclosure were I you, is because you can't find low-profile PCI --> TB3 enclosures for our purposes; they're all oriented to E-GPUs ...

While yes, Myricom is OS X compatible and on the down-low at that ... I've still purchased 10GbE ATTO PCI cards on the cheap. Remember, while the products are valuable, if someone auctions them or expects quick money ... they'll accept far less than a patient person would get who posts it for Buy it Now and waits their turn.

If you'd like my "insider tips" on how to get the best possible deals, we'll need to talk via PM or SMS or email... But I would say I have a much higher-rate of getting things for virtually the lowest transaction-prices ON eBay ... or being within 10% of it ... even without waiting for blind luck.


The best my array has managed thus far is about 600MB/s.

Under what circumstance..? You mean from FreeNAS over ...? NFS..? SMB...? I'd assume it was exclusively reading or writing large files ..?


Sonnet Thunderbolt 3 system (SFP+ transceiver) -- the Thunderbolt cable is fixed-length, not detachable, etc.

Again, NO REASON to do this. Pictures of some Sonnet products below, on which I'm constantly swapping M.2 SSDs and cables.

Point being..? I would bet the Sonnet external PCIe-to-TB3 device they have uses interchangeable cables... also!

(I get that I'm talking about TB SSDs and we were talking about TB --> SFP+ Cards, but, companies tend to be consistent)...

The TB2 version I purchased in 2016 with a 256GB SSD; I paid the big bucks at the time for a 2TB Evo 970, which Sonnet claimed would NOT work...
The TB3 version I purchased 18 months ago with a 1TB (they don't offer it with 2TB) and upgraded that as well...

More importantly, I swap M.2 SSDs in and out of these things like they're the prostitutes they are. :D

I'm not sure if you know this ... but you aren't supposed to even be ABLE to connect the TB3 model to TB2. NO SUCH ADAPTER EXISTS!!!
But, by disassembling it, you can certainly use the adapter ... which if the cable were "internal" or "permanently connected" would be impossible.

I went nuts trying to take 'good' (clear, leaving no question unanswered) pictures... as I couldn't find this kind of info prior to purchasing, and I searched everywhere. Granted, it was 2016, but buying a 256GB SSD for $370 was still ridiculous. But at the time there was NO WAY of getting a portable NVMe SSD. Certainly not one with a very nice heatsink. Had it, in fact, been incompatible with anything larger than the 256GB SSD it came with... it would've been a total waste. Sonnet "engineers" said that "It can't supply the necessary voltage of a larger SSD"... (pure bullshit).

BOTH the TB2 & TB3 ver. have replaceable cables & SSD drives & have worked with EVERY NVMe SSD I've tried with them.

You can 'adapt' the TB3 version to work with a TB2 (only) computer. (which is every pre-2016 MBP, MBA, etc.)
TB3 devices with integrated cables (soldered, aka, truly integrated) ... are impossible to use with TB2 computers.
There are NO adapters with a TB3 female port to connect a TB2 cable.




I hope this info is useful to someone - it literally took hours to figure out how to get crisp images of little PCBs.
[Attached images: TB2-2. Sonnett TB2 - Opened.jpg; TB2-3a. Sonnett TB2 - disassembled.jpg; TB2-3b. Sonnett TB2 - disassembled.jpg]

Above images are: Sonnet PCIe Thunderbolt 2 NVMe SSD (also very large images)


Below images are: Sonnet PCIe Thunderbolt 3 NVMe SSD (also very large images)

[Attached images: TB3-1s. Sonnett TB3 - TEXT COLOR Corrected.png; TB3-2s. Sonnett TB3 - All items.png; TB3-3s. Sonnett TB3 + HP SSD.jpg; TB3-1bs. Sonnett TB3 - TB2 + SSD.jpg]





This is why I rely on Mobius 5 arrays from Oyen Digital for external backup. Inexpensive, only 1 has ever let me down

I'd suggest looking into Cinema RAID... as they have an integrated controller if you want to just mirror the drives and use the USB 3.0 port on a router... I personally think Oyen is over-priced for what they offer... though, I do have a couple of their 2-bay tiny RAID devices. One is for external use; the other is a dual M.2 (M.2 SATA) which fits in a 2.5" slot for a laptop... :) So someone with a 2012 who wants it for some reason (I get those) will use this...

I'm sure you know that it has virtually nothing to do with those controllers ... as they're all made by a few companies or at least use very few controllers. What it comes down to is whichever media you put in it.

jgreco will likely wag his fingers at my use of RAID5 via Hardware RAID controller... but it has worked flawlessly.

You actually use RAID-5..? Not Mirrored for these use-cases...? That is surprising.


I'd consider used or shucking non-SMR HDs.

The helium drives really are a HUGE liability. There're only a couple of data recovery businesses in the WORLD who can disassemble those units to repair them. Even more importantly, even IF they can, there's the matter of fixing the Service Area (think of it like the firmware, but rather than being on an EEPROM it's on the outermost tracks of the platters; manufacturers use this region for drive initialization and for the firmware data that varies from drive to drive). Every drive that completes manufacturing is scanned at ultra-high speed for surface defects, whereupon the manufacturer writes what's called the P-List (not the G-List), which is the permanent list of defects. We haven't had to enter the bad blocks manually since the 90s... because we switched to ...... logical block addressing! :smile: Right...?

Ergo, the sequence of all the blocks' identities will be WRONG without the P-List, as they are not physical locations but programmatically defined.

Repairing this region of a hard drive with the tools that allow it is what sometimes must be done in data recovery. This is impossible on helium drives... as is the ability to select which heads are enabled // disabled, etc.

And I have seen a LOT of helium drives relative to the amount of business I get ... and how new the drives are in the first place.

Basically, as a general rule, I'd recommend avoiding helium drives when you can... or, if they're in, say, a double-parity vdev, just know that the recovery options are limited. (No procrastination) ...
 

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
True, it's easy to get the case mixed up with these units. :wink:

Indeed ... but I hope I didn't come off condescending or arrogant; everyone does that sometimes, by taking for granted it's obvious to others.
But no, you didn't make an error, I was just confirming because the numbers seemed 'disappointing' ... lol. :) Sorry.



I could have said 0.8-1Gb/s instead of 800-1000Mb/s. But either is correct.

Indeed again: I was just saying... basically, manufacturers use Gb... because they're trying to scam consumers into thinking the protocol or cable speed is what they're going to get. Amongst us friends..? We can use bytes... lol.


Disk I/O becomes the bottleneck; rsync has to do quite a bit of reading and calculating before it transmits anything; etc.

I thought we were talking generally about FreeNAS and partially about rsync. But, re: rsync and IOPS, that does seem like the most likely reason for the ridiculous transfer rates... and, again, why, as I'm able, I will be working on an NVMe array... I'm already halfway there... by next year I intend to have an 8x 8TB NVMe array (Samsung PM983 drives)... Right now I have 5x 4TB PM983 and 2x SSD7120. I'm going to pick up a few more 4TB drives and start with an 8x 4TB array... After that, every time I see a great deal on an 8TB NVMe (7.68TB) I'll snag it, clone the 3.78TB to it, replace it as the previous 3.78TB VDev, and sell the 3.78TB SSD... :)

The 8x 10TB array is what I want to use for cheap TBs and to cut my teeth on figuring out what causes my slowdowns for my particular uses...
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
Believe me, the whole reason I'm working on this project is because people claim to get 800+ MB/s with 8x HD ... via single VDevs.... If by faster pool you mean more drives..? (Versus SATA-SSD or NVMe-SSD) ...
My understanding is that 800+MB/s performance is only possible through the use of an all-flash pool if sync writing is on. Pool speed will be a function of the underlying media, the motherboard and what's installed on it, software config, etc.

I've seen posts on benchmarking when saturating 10GbE via SMB & perhaps NFS on multiple threads maxed ALL of an Atom's cores.
My understanding is that SMB and AFP are single-threaded processes and as such do not benefit from more cores as much as higher clock speed. That's why the X10SDV-2C-7TP4F would likely have been a better choice for my use case than the X10SDV-7TP4F I ended up buying.

I like ATTO but I don't like their price point. :smile: I just bought a Sonnet Solo SFP+ network adapter and will see how it works. I prefer PCIe approaches (I currently use a Myricom PCIe SFP+ adapter in a Highpoint 6661A external enclosure which is PCIe 3.0 x4 electrical). However, since the Myricom is limited to PCIe 2.0 x8, the most I can expect between the enclosure and the card is PCIe 2.0 x4 = roughly 2GB/s. The Sonnet will allow me to tuck it under the monitor and bus-power it off the CalDigit dock for my MBPro (late 2016).
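As a sanity check on the enclosure math, using nominal PCIe numbers (line encoding only, ignoring other protocol overhead):

# PCIe 2.0: 5 GT/s per lane with 8b/10b encoding -> ~500 MB/s usable per lane.
per_lane_mb_s = 5e9 * (8 / 10) / 8 / 1e6     # 500 MB/s
lanes = 4
link_mb_s = per_lane_mb_s * lanes            # ~2000 MB/s for an x4 link
ten_gbe_mb_s = 10e9 / 8 / 1e6                # ~1250 MB/s for line-rate 10GbE
print(f"PCIe 2.0 x{lanes}: ~{link_mb_s:.0f} MB/s vs 10GbE line rate ~{ten_gbe_mb_s:.0f} MB/s")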

I still use AFP here because it works better for me - like supporting long file name paths, a wider set of allowable letters in file names, etc. vs SMB. Thankfully, Catalina still supports AFP even if it dropped support for 32 bit applications. Kinda weird to have Parallels here to be able to run Snow Leopard, eventually Mojave, and Windows 10.

As far as Sonnet is concerned, they are in the business of making money off the limited Mac market that remains. I fully anticipate them rebadging other people's gear but I'm really buying support, not hardware. That's why I dropped Chelsio after I discovered their awful Mac support, market segmentation (allowing newer cards to tunnel over Thunderbolt, but not older ones), etc.

I also understand why Sonnet is building those externals the way they do - it's fiddly getting a cable to connect to a PCB, so using a socket makes for a much quicker certification, etc. experience. It also makes it easier to adopt a carrier board or whatever you want to call it for multiple applications. Plus, passive Thunderbolt 3 cables are now relatively inexpensive.

As for Oyen Digital, the Mobius 5 used to sell for under $230 new, which is pretty price-competitive for a 5-bay enclosure with an integrated RAID controller. Interestingly, every external enclosure OEM seems to be moving to USB-C JBOD-only solutions, which I'm not a fan of. SoftRAID and the like are great solutions for some folk, but I prefer an enclosure that will always work out of the box, presenting itself to the OS as a single volume to make working with it easy. Also, I had some bad experiences with SoftRAID.

I have no intention of ever recovering a helium drive - multiple backups etc. should make all that unnecessary - but I like the lower power consumption and associated heat - it keeps all my spinning disks below 35°C. They're also sold at a price point where repair doesn't make sense - the cost is way higher than a replacement drive. But you raise a very good point for folk who aren't taking precautions re: data, backups, and so on.
 
Joined
Dec 29, 2014
Messages
1,135
My understanding is that SMB and AFP are single-threaded processes and as such do not benefit from more cores as much as higher clock speed.
That is my understanding as well. That is why I have stayed with E5-2637 CPUs in my FreeNAS units. Not a lot of cores (4), but just about the highest clock speed in the family.
 