SOLVED Swap Cache SSD, then move boot to Old SSD

mpyusko

Dabbler
Joined
Jul 5, 2019
Messages
49
[Screenshot: benchmark results for drives E, G and S]

E = iSCSI
G = NFS
S = SMB

I ran all three at the same time, going top-down with about a one-second delay between starting each test. iSCSI was the first to start; NFS was the first to finish.

[Screenshot: NIC utilization during the test]

All four NICs are iSCSI multipath.
bce0 = SMB
bce1 = NFS
You can see that NFS seemed to take I/O priority, with SMB next and iSCSI last. This may have more to do with drive activity than with the NICs, but five of the mechanical drives peaked at about 60% busy and the sixth hit 80%, so there was still plenty of headroom, and the NVMe drives were barely utilized.
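(For anyone who wants to reproduce the utilization numbers, here is a minimal sketch of how per-vdev activity could be sampled from the TrueNAS shell while the three benchmarks run. The pool name "tank" and the log file name are placeholders, not my actual setup.)

[CODE]
# Minimal sketch: sample per-vdev activity while the benchmarks run.
# "tank" is a placeholder pool name; adjust interval/sample count to taste.
import subprocess
import time

POOL = "tank"
INTERVAL = 5      # seconds between samples
SAMPLES = 60      # ~5 minutes of data

with open("zpool_iostat.log", "w") as log:
    for _ in range(SAMPLES):
        # "zpool iostat -v POOL 1 2" prints two reports; the second one
        # reflects activity over the last second rather than since boot.
        out = subprocess.run(
            ["zpool", "iostat", "-v", POOL, "1", "2"],
            capture_output=True, text=True, check=True,
        ).stdout
        log.write(time.strftime("%H:%M:%S") + "\n" + out + "\n")
        time.sleep(INTERVAL)
[/CODE]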
 

mpyusko

Dabbler
Joined
Jul 5, 2019
Messages
49
Another test: loading the system and then using it.

ALL Start: 15:30

NFS End: 15:54
iSCSI End: 16:12
SMB End: 14:15

Notes: Drives C, D and E are all iSCSI. I increased the test to 64 GB because I wanted to ensure the workloads were duly affecting each other. I then proceeded to launch GIMP (installed on Drive C), since starting it is I/O intensive. It loaded slowly (about HDD speed) and prompted for an update. I opened Firefox, downloaded the update (to Drive D) and installed it to Drive C again. Malwarebytes was also running a full system scan (Drives C, D and E, as G was just created for the benchmark). Malwarebytes is still scanning..... LOL. No errors were presented during the process.

[Screenshot: benchmark results during the mixed-use test]
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
I appreciate the indulgence of my academic curiosity.

iSCSI seems to have taken the worst of it, and that's where your "running VM disks" live, so you'll "feel" it the most. The back end is still RAIDZ2, though, so your disks are probably very busy trying to keep up.
 

mpyusko

Dabbler
Joined
Jul 5, 2019
Messages
49
HoneyBadger said: "I appreciate the indulgence of my academic curiosity."
NP. I had the base 'control' data, so why not experiment?
HoneyBadger said: "iSCSI seems to have taken the worst of it, and that's where your 'running VM disks' live, so you'll 'feel' it the most. The back end is still RAIDZ2, though, so your disks are probably very busy trying to keep up."
Yeah... it was a bit of a dog, but the mechanicals didn't seem to exceed 80%. This array is six WD Red 3 TB drives (CMR, 5400 RPM), so the performance I'm getting is genuinely respectable.

The biggest takeaway is that, while there was substantial load on the system across the various data protocols, there were no actual timeouts or errors. This might be related to sync=always and iSCSI's 'self-healing' nature.
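(For context, sync behaviour is a per-dataset/zvol ZFS property. Here is a rough sketch of how it could be checked or forced from the shell; the zvol path "tank/iscsi-vm" below is just a placeholder, not my actual layout.)

[CODE]
# Sketch: verify and (optionally) force synchronous writes on an iSCSI zvol.
# "tank/iscsi-vm" is a placeholder path; substitute your own zvol.
import subprocess

ZVOL = "tank/iscsi-vm"

# Show the current sync setting and where it is inherited from.
subprocess.run(["zfs", "get", "sync", ZVOL], check=True)

# Force every write to be committed stably (hitting the SLOG, if present)
# before it is acknowledged to the initiator.
subprocess.run(["zfs", "set", "sync=always", ZVOL], check=True)
[/CODE]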
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
Good to see it holding up under the combined workload stress, although I don't know if I have the patience to deal with loading programs at HDD speed these days.

Personally, I'm still hesitant to mix LAN and SAN traffic without having something in the middle (either hardware-level virtual NICs, or software a la VMware NIOC) to provide some form of traffic control. Generally I'm willing to lower the ceiling (max theoretical throughput) if it means raising the floor (minimum guaranteed throughput), and that trade-off has served me well in the past.
 

mpyusko

Dabbler
Joined
Jul 5, 2019
Messages
49
So after a couple of years of running, I had just about burnt through the lifespan of my 64 GB Optane M10. That left me in the market for a replacement. I do have one already on hand, but given the low endurance, I felt it would be a waste to swap it in. In approximately the same price category, I ran across the 118 GB P1600X. It is much faster across the board and has a substantially higher endurance rating (1,292 TBW vs. 365 TBW).
[Screenshot: drive spec comparison]


Of course I had to benchmark it. (System specs are in my profile.) The left column is without the SLOG, the right column is with it.
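(For anyone repeating the with/without comparison: the SLOG can be attached and detached between runs without recreating the pool. A rough sketch, assuming the Optane shows up as "nvd0" and the pool is "tank"; both names are placeholders.)

[CODE]
# Sketch: attach/detach the Optane as a dedicated SLOG between benchmark runs.
# Pool name "tank" and device "nvd0" are placeholders for illustration.
import subprocess

POOL, SLOG_DEV = "tank", "nvd0"

def add_slog():
    # Adds the device as a dedicated log vdev.
    subprocess.run(["zpool", "add", POOL, "log", SLOG_DEV], check=True)

def remove_slog():
    # Log vdevs can be removed live; pool data is untouched.
    subprocess.run(["zpool", "remove", POOL, SLOG_DEV], check=True)
[/CODE]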
First, a bhyve VM running natively on TrueNAS:

[Screenshot: "Native on Cygnus.png" (bhyve benchmark results)]



Second, an XCP-ng VM running over quad-link GigE iSCSI:
[Screenshot: "Xen on Vincent.png" (XCP-ng benchmark results)]


From a VM-over-iSCSI standpoint, the performance appears nearly identical to the previous M10. The major difference would show up with a 10 Gig link: the P1600X has a peak write speed theoretically capable of keeping up with the sustained writes of a 10 Gig link.
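(Rough arithmetic behind that: a 10 GbE link tops out at 1.25 GB/s of raw line rate, and somewhat less after protocol overhead. The ~10% overhead figure below is an assumption, and the drive's rated sequential write isn't quoted here, so compare against Intel's spec sheet.)

[CODE]
# Back-of-envelope: usable payload throughput of a single 10 GbE link,
# to compare against the SLOG's rated sequential write speed.
link_bits_per_s = 10e9
raw_bytes_per_s = link_bits_per_s / 8      # 1.25 GB/s line rate
overhead = 0.10                            # assumed TCP/IP + iSCSI framing overhead
usable = raw_bytes_per_s * (1 - overhead)

print(f"~{usable / 1e9:.2f} GB/s of sync writes to absorb at full line rate")
[/CODE]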
 