iSCSI Multipath ESXi: no outbound data on ix1

velocity08

Dabbler
Joined
Nov 29, 2019
Messages
33
Hi Team

Here's an interesting observation. I'm wondering if anyone has seen this behaviour before and/or could explain it.

New Supermicro FreeNAS 11.3-U2 box
32 threads
128 GB RAM
Dual 10 Gb SFP+ NICs
iSCSI multipath
ESXi datastore
VMware Round Robin set on the iSCSI adapter in ESXi.

When performing I/O load testing (reads, writes, and mixed loads) we see in and out data on one of the 10 Gb NICs (ix0) at all times, which is expected.

On the second 10 Gb NIC (ix1) we only ever see in data; zero out data back to ESXi.
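One thing worth checking (a sketch only, and the device identifier below is a placeholder): ESXi's Round Robin policy by default only switches paths after 1000 I/Os, so under some workloads most traffic can sit on one path; it's also worth confirming both paths show as Active (I/O) rather than one being standby. Something like:

```shell
# On the ESXi host: list devices/paths and confirm both iSCSI paths are active
esxcli storage nmp device list

# Show the current Round Robin config for the LUN
# (naa.xxxxxxxx is a placeholder -- substitute your device's identifier)
esxcli storage nmp psp roundrobin deviceconfig get --device=naa.xxxxxxxx

# Optionally switch paths after every I/O instead of every 1000,
# which tends to spread load across both NICs
esxcli storage nmp psp roundrobin deviceconfig set \
    --device=naa.xxxxxxxx --type=iops --iops=1
```

That wouldn't by itself explain zero outbound bytes on ix1, but it narrows down whether ESXi is actually issuing I/O down both paths.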

See screenshot for an example.

Any ideas or theories?

Cheers
G

Screenshot from 2020-04-12 11-15-04.png
 

velocity08

Dabbler
Joined
Nov 29, 2019
Messages
33
Can we get a little bump here?

Just in case this was a UI issue, I took the stats from the CLI.

See screenshot.
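For anyone wanting to reproduce the CLI check (assuming a stock FreeNAS/FreeBSD shell), per-interface byte counters and live rates can be read with:

```shell
# Cumulative in/out byte counters for the ix1 interface
netstat -ibn -I ix1

# Live per-interface throughput, refreshed every second (press q to quit)
systat -ifstat 1
```

If the Obytes counter on ix1 never moves while Ibytes climbs, the asymmetry is real and not a reporting artifact in the UI.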

Any ideas would be greatly appreciated.

Cheers
G

Screenshot from 2020-04-15 22-36-06.png
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Also, sorry, looking back my first message sounded snippy, wasn't meant that way. Our 3D printer is requiring attention every 1.5h as I run COVID-19 facemasks at a furious pace and I am definitely short on sleep.
 

velocity08

Dabbler
Joined
Nov 29, 2019
Messages
33
jgreco said:
You didn't describe your system in sufficient detail, so you might also get some good insights from this:

https://www.ixsystems.com/community/threads/the-path-to-success-for-block-storage.81165/

Hard to know what you've done, but at 128 GB RAM there's a good chance of success; don't forget to look at your working set size and see if you can squeeze in some L2ARC.

No need to apologise, I appreciate you taking the time to reply :)

I've been seeing some conflicting metrics between RAIDZ and mirrors, and I'm guessing that the testing performed may not have been extensive enough, i.e. not full data, just single-stream test data.

I have been asking over on the Proxmox forum about best-practice recommendations for ZFS on a Proxmox server and have been getting a mixed bag of responses. Does ZoL differ greatly from ZFS on FreeBSD in its function and design?

I have noticed they have the ability to add a special device to the pool, which helps with metadata writes and reads if I'm not mistaken, greatly increasing the performance of a RAIDZ pool.

ZFS Special Device
Since version 0.8.0 ZFS supports special devices. A special device in a pool is used to store metadata, deduplication tables, and optionally small file blocks.
A special device can improve the speed of a pool consisting of slow spinning hard disks with a lot of metadata changes. For example, workloads that involve creating, updating, or deleting a large number of files will benefit from the presence of a special device. ZFS datasets can also be configured to store whole small files on the special device, which can further improve performance. Use fast SSDs for the special device.
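For illustration (pool and dataset names here are hypothetical), adding a special vdev and routing small blocks to it looks something like the following. Note the special vdev should be mirrored, because losing it means losing the whole pool:

```shell
# Add a mirrored special vdev on two SSDs to a hypothetical pool "tank".
# Mirror it -- if the special vdev is lost, the entire pool is lost.
zpool add tank special mirror ada4 ada5

# Optionally store small file blocks (here <= 32K) on the special vdev
# for a given dataset; the threshold must be below the dataset's recordsize.
zfs set special_small_blocks=32K tank/vms
```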

Any thoughts?

Cheers
G
 