
The path to success for block storage

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
13,996
No, RAIDZ will not work better, and it isn't an unusual use case anyway. RAIDZ brings a lot of baggage with it in the form of block size selection and parity overhead.

If you don't care about data integrity other than maybe to notice something amiss, just stripe all the disks.
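If it helps to see the space tradeoff in rough numbers, here's a quick sketch (the per-drive size is just a placeholder, and the RAIDZ figure ignores the padding/allocation overhead that goes with the baggage mentioned above):

```python
# Rough usable-capacity comparison for 14 identical drives in different layouts.
# TB-per-drive is a placeholder; the RAIDZ figure ignores padding and partial-stripe
# overhead, which is exactly the block-size/parity baggage mentioned above.

DRIVES = 14
TB = 8  # per-drive capacity, placeholder value

layouts = {
    "stripe, no redundancy":      DRIVES * TB,
    "7 x 2-way mirrors":          (DRIVES // 2) * TB,
    "2 x 7-wide RAIDZ2 (approx)": 2 * (7 - 2) * TB,
}

for name, usable in layouts.items():
    print(f"{name:28s} ~{usable} TB usable")
```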

100gigE isn't likely to be a useful thing for you. The fastest modern disks can sustain an average of about 200Mbytes/sec doing sequential access, which is ballpark 2Gbits/sec, so you have a theoretical max of 14 x 2Gbps = 28Gbps. You won't be seeing that most of the time, but certainly 10GbE would be fine and 40GbE seems justifiable.
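To sanity-check that arithmetic (the 200MB/sec per-disk figure is a ballpark, not a promise):

```python
# Back-of-envelope: aggregate sequential throughput of 14 spinning disks vs. NIC line rate.
# Assumes ~200 MB/s per disk (sequential best case) and perfect scaling across the pool.

DISKS = 14
PER_DISK_MB_S = 200

pool_mb_s = DISKS * PER_DISK_MB_S      # 2800 MB/s
pool_gbit_s = pool_mb_s * 8 / 1000     # ~22.4 Gbit/s (28 Gbit/s if each disk is rounded up to 2 Gbit/s)

for nic_gbit in (10, 40, 100):
    print(f"{nic_gbit:>3} GbE: the pool can fill at most {min(pool_gbit_s / nic_gbit, 1.0):.0%} of the link")
```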
 

Sirius

Member
Joined
Mar 1, 2018
Messages
41
Hmm, fair enough. I'll stick with mirrors then, as I can afford the wasted space for a bit of integrity.

I would have thought 100gigE would take advantage of an Optane-based L2ARC, or even regular ARC, assuming a system with RAM/ARC that's fast enough.

One Optane 900p will do 2500MB/s, so four together (assuming perfect scaling, which won't happen) would be 10000MB/sec. With eight of those Optanes in a striped L2ARC, that'd be a theoretical 20000MB/sec (again assuming perfect scaling), which is above the ~10GB/sec a 100gigE connection can realistically manage.

Of course, that depends on all that data fitting into L2ARC, but eight striped 280GB Optanes would be about 2.2TB of L2ARC (assuming no overprovisioning), and there's no way the working set would fill all of that. The system also has 128GB of 1866MHz ECC memory in quad channel, so regular ARC should hold quite a bit of hot data too.
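Putting my own numbers in one place, all assuming perfect scaling that I know I won't actually get:

```python
# L2ARC back-of-envelope: striped Optane 900p throughput and capacity vs. a 100 GbE link.
# Perfect-scaling assumptions throughout, so these are ceilings, not expectations.

OPTANES = 8
READ_MB_S = 2500      # 900p sequential read, roughly
SIZE_GB = 280         # per device, no overprovisioning

l2arc_mb_s = OPTANES * READ_MB_S       # 20000 MB/s theoretical
l2arc_tb = OPTANES * SIZE_GB / 1000    # ~2.24 TB of L2ARC

wire_mb_s = 100 * 1000 / 8             # 12500 MB/s raw; closer to ~10000 MB/s after protocol overhead

print(f"L2ARC ceiling: {l2arc_mb_s} MB/s across {l2arc_tb:.2f} TB")
print(f"100 GbE wire:  {wire_mb_s:.0f} MB/s raw, so the link is the cap before the Optanes are")
```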

The only reason I'm not just going all-flash is that it's simply too far out of my budget: ~14 7200rpm 8TB drives aren't too bad to purchase, but ~14 (or ~7 if I stripe them, assuming enterprise drives) 8TB flash drives? I hate to imagine how much those would cost, even off eBay.

As for my current network, I use Mellanox ConnectX-3 cards with the 56gigE firmware tweak applied, so I've seen sequential speeds as high as 5GB/sec. I'm easily breaking past 10gig performance.

Thank you so much for your reply, I appreciate it! :)
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
13,996
Speaking strictly from a speed point of view, yes, ARC and L2ARC could potentially hit significantly higher than 40GbE speeds.

However, you're also talking about iSCSI here, so there's a practical limit to the number of transactions per second you can manage across the wire on a TCP circuit. It isn't just a matter of "hey, iperf3 says I got fifty gigabits"; all of your traffic has to traverse a complex stack of components and software.

https://www.truenas.com/community/threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/

I go into latency a bit in the latter half of that post. It's a little bit bull*** for various mitigating reasons, but it's still very much worth contemplating why there are practical limits to how fast a NAS/SAN can go.
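As a crude illustration of why per-transaction latency, not link speed, ends up being the cap for a single initiator (the figures below are made-up round numbers, not benchmarks):

```python
# Crude model: a single synchronous iSCSI stream is gated by per-I/O round-trip latency.
# The figures below are illustrative round numbers, not measurements.

RTT_US = 100        # wire + NIC + TCP + iSCSI + ZFS per request, say ~100 microseconds
BLOCK_KB = 128      # I/O size per request
QUEUE_DEPTH = 1     # one outstanding request at a time (worst case)

ios_per_sec = QUEUE_DEPTH * 1_000_000 / RTT_US      # 10,000 IOPS
mb_per_sec = ios_per_sec * BLOCK_KB / 1024          # ~1250 MB/s

print(f"{ios_per_sec:.0f} IOPS x {BLOCK_KB} KB = {mb_per_sec:.0f} MB/s "
      f"(~{mb_per_sec * 8 / 1000:.0f} Gbit/s), no matter how fast the wire is")
```

Deeper queues or more outstanding requests raise that ceiling, which is part of why multiple clients help.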

If you had multiple iSCSI clients, I'd bet they would do a much better job of utilizing your resources.
 

Sirius

Member
Joined
Mar 1, 2018
Messages
41
Interesting article. My current SLOG is 2 x Optane 900p 280GB, although I've considered using a P4800X or 905p; then again, write performance isn't as important to me as read.

My thought was that if I went with, say, Chelsio T62100-CR cards on both the target and initiator ends, I could use either iSCSI Offload or iSER to reduce the amount of overhead between the two systems. I wouldn't really try to run 100gigE with a non-RDMA, non-offload solution, as I imagine that'd suck up CPU like crazy - not just on the Target but also on the Initiator - which is why I thought iSER or iSCSI Offload could help.

According to the manual for those cards, they support iSCSI Offload on both Windows 10 Pro for Workstations and FreeBSD, and if I were OK with running Linux on the Target side I could use iSER on both ends. I considered Mellanox cards, but they dropped iSER support on Windows and I don't think they've ever done iSCSI Offload.
 