High Latency on SSD SLOG


CombatChris
Formatting Error with Java. See below.

I did finally manage to get it posted, so please see a few posts below.
 

MrToddsFriends
Having difficulties with the forum?
 

CombatChris
Having difficulties with the forum?

YES! I keep getting a popup that's preventing me from posting anything meaningful. JavaScript error.


Edit, here is the exact error:
"The following error occurred: The server responded with an error. The error message is in the JavaScript console."
 

MrToddsFriends
Admittedly, I've only ever used Firefox to access this forum. JavaScript is not Java, by the way. Hopefully someone else can help.
 

CombatChris
Nope, not even Firefox is letting me post it. This is nuts.

OK, let's see if I can attach a TXT file with the message. I think I've killed off the viewership, though, which is a shame.

NOPE. It says 7 KB is too large a file to post.
 

CombatChris
Hello, all.

I've been a lurker for a while; this is my first build and my first post.

NAS Setup:
FreeNAS 11.0-U2
HP ProLiant DL360p Gen8
2x E5-2620 @ 2.00 GHz
192 GB DDR3
HBA: Dell H310 6 Gbps SAS (LSI 9211-8i, P20 IT mode)
NIC: Dell/Intel CPU-E15729 MY-0RN219 10 Gbit
6x 600 GB HGST HUC109060CSS600, 2.5" 10k SAS, 512-byte sectors
2x 100 GB HGST HUSSL4010BSS600, 2.5" SAS SSD, 512e (I think)
 

CombatChris
The volume I'm using was created with the GUI: a pool of mirrors, three vdevs of two disks each, with ~1.6 TiB of total usable space. I figured that would be the fastest layout in terms of IOPS. The SSDs were set up as a mirrored SLOG. LZ4 compression, no dedup.

My two hosts are ESXi 6.0, same model and specs as the NAS, except they each have 300 GB SAS drives in RAID 1 to host the OS. No, I'm not doing network boot for them. These local disks house none of the VMs.

Everything's connected through a 10 Gb/s Brocade 8000b. iSCSI is in use and working, MTU at default; no jumbo frames or other tuning done.

Sync=Always

da0-da5 are the 600 GB 10k disks, da6-da7 are the SSD SLOG, and da8 is the USB drive hosting the OS.
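
(For anyone wanting to verify a layout like this from the shell, here is a minimal sketch; the pool name tank and the exact disk pairing are assumptions, everything else follows the description above.)

Code:
# show the vdev layout: the data mirrors plus the mirrored log (SLOG) under "logs"
zpool status tank

# confirm the sync setting on the pool or on the zvol backing the iSCSI extent
zfs get sync tank

# rough CLI equivalent of the layout described above, had it been built by hand
# (disk pairing assumed):
# zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 log mirror da6 da7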
 

CombatChris
Code:
Geom name: da6
Providers:
1. Name: da6
   Mediasize: 100030242816 (93G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e3
   descr: HITACHI HUSRL401 CLAR100
   lunid: 5000cca0132c8aec
   ident: XSVTHDJA
   rotationrate: 0
   fwsectors: 63
   fwheads: 255

Geom name: da0
Providers:
1. Name: da0
   Mediasize: 600000000000 (559G)
   Sectorsize: 512
   Mode: r1w1e3
   descr: HITACHI HUC10906 NEO600
   lunid: 5000cca070257460
   ident: W7GNLKBX
   rotationrate: 10020
   fwsectors: 63
   fwheads: 255
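
(For reference, listings like the above typically come from the geom tool on the FreeNAS shell; a guess at the invocation that produces this output:)

Code:
# list the disk geoms for the two devices shown above
geom disk list da6 da0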
 

CombatChris
Those are just two of the disks, as examples.

Now, here's my 'issue': da6 and da7 are ALWAYS showing high latency in the GUI reporting.
https://photos.app.goo.gl/eqoPKTMpk7eaIGjz1

Is the latency REALLY that high, or am I getting confused because it's measuring microseconds and not milliseconds?
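
(One way to cross-check the GUI graphs from the FreeNAS shell is gstat, which shows per-device latency in milliseconds in its ms/r and ms/w columns; a minimal sketch, filtering for the SLOG devices named earlier:)

Code:
# refresh every second, show only da6 and da7
gstat -I 1s -f '^da[67]$'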

Here's a CrystalDiskMark run with all the default settings.

Sync=Always
Code:
   Sequential Read (Q= 32,T= 1) :   322.537 MB/s
  Sequential Write (Q= 32,T= 1) :   330.355 MB/s
  Random Read 4KiB (Q=  8,T= 8) :   233.414 MB/s [  56985.8 IOPS]
 Random Write 4KiB (Q=  8,T= 8) :	65.455 MB/s [  15980.2 IOPS]
  Random Read 4KiB (Q= 32,T= 1) :   134.048 MB/s [  32726.6 IOPS]
 Random Write 4KiB (Q= 32,T= 1) :	65.787 MB/s [  16061.3 IOPS]
  Random Read 4KiB (Q=  1,T= 1) :	 9.333 MB/s [   2278.6 IOPS]
 Random Write 4KiB (Q=  1,T= 1) :	 5.518 MB/s [   1347.2 IOPS]

  Test : 1024 MiB [C: 21.5% (17.1/79.7 GiB)] (x3)  [Interval=5 sec]
  Date : 2018/04/03 15:24:01
	OS : Windows Server 2012 R2 Datacenter (Full installation) [6.3 Build 9600] (x64)
 


Sync=Disabled (only for comparison's sake!!!)
Code:
   Sequential Read (Q= 32,T= 1) :   329.801 MB/s
  Sequential Write (Q= 32,T= 1) :  1111.748 MB/s
  Random Read 4KiB (Q=  8,T= 8) :   234.951 MB/s [  57361.1 IOPS]
 Random Write 4KiB (Q=  8,T= 8) :   244.795 MB/s [  59764.4 IOPS]
  Random Read 4KiB (Q= 32,T= 1) :   134.783 MB/s [  32906.0 IOPS]
 Random Write 4KiB (Q= 32,T= 1) :   185.825 MB/s [  45367.4 IOPS]
  Random Read 4KiB (Q=  1,T= 1) :	 8.929 MB/s [   2179.9 IOPS]
 Random Write 4KiB (Q=  1,T= 1) :	12.192 MB/s [   2976.6 IOPS]

  Test : 1024 MiB [C: 21.5% (17.1/79.7 GiB)] (x3)  [Interval=5 sec]
  Date : 2018/04/03 15:36:33


The screenshot of all the disks linked above was taken while running the benchmark with sync=always. I checked again with sync=disabled, and you could tell the SLOGs weren't being tasked with anything.

Is this right? I know that sync=always is a performance hit, but a pair of high-performance SSDs as SLOG should keep it from tanking, no? How can I remove them (without destroying my data!) and test sync=always without them? What commands do you need me to run to provide better output?

Thanks, all!
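
(Regarding removing the SLOG to test sync=always without it: ZFS allows log devices to be removed from a live pool without touching the data vdevs. A minimal sketch follows; the pool name tank and the vdev label mirror-3 are assumptions, so read the real names from zpool status first.)

Code:
# find the name of the log vdev (listed under "logs")
zpool status tank

# remove the mirrored SLOG; pool data stays intact
zpool remove tank mirror-3

# ...re-run the sync=always benchmark, then re-attach the SLOG afterwards
zpool add tank log mirror da6 da7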
 

MrToddsFriends
Is the latency REALLY that high, or am I getting confused because it's measuring microseconds and not milliseconds?

Those values are microseconds, i.e. thousandths of a millisecond, in this case.
 

CombatChris
Those values are microseconds, i.e. thousandths of a millisecond, in this case.

Thanks! Man, that's really confusing. So if I'm not even cracking 1 ms of latency, then I guess this is about as good as it gets, aside from any other iSCSI tuning I can do or VMware settings I can change, such as queue depths and threads.
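
(If jumbo frames become one of the tunings to try, every hop has to agree on the MTU: the FreeNAS interface, the Brocade ports, and the ESXi vSwitch/vmkernel ports. A minimal sketch for a quick test on the FreeNAS side only; the interface name ix0 is an assumption, and the change should ultimately be made persistent through the GUI network settings.)

Code:
# temporary, non-persistent MTU change for testing
ifconfig ix0 mtu 9000

# verify the path passes 9000-byte frames without fragmentation
# (8972 = 9000 minus 20 bytes IP header minus 8 bytes ICMP header)
ping -D -s 8972 <ESXi iSCSI IP>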
 