Performance of 4x 3TB Seagate 7200RPM in RAID 5 with iSCSI passthrough

Status: Not open for further replies.

wreedps

Patron
Joined
Jul 22, 2015
Messages
225
Can you point me to some links that talk about using SSDs to speed up ZFS? I am interested in that also.
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
You were already told exactly what you should read.

You also failed to follow the rules of the forum. You have provided us with zero information about your hardware or configuration, and you want us to help you 'speed' things up.

If you are connected over a gigabit network, you will saturate your network before ever crunching those hard drives.
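A quick way to confirm that is to measure the raw network path before touching the disks at all. A rough sketch, assuming iperf3 is installed on both ends (the IP below is a placeholder):

iperf3 -s                      # run on the FreeNAS box
iperf3 -c 192.168.1.50 -t 30   # run on a client; substitute your FreeNAS IP
# a healthy gigabit link tops out around 940 Mbit/s, i.e. ~112 MB/s

If you see numbers near that ceiling, the wire, not the array, is your limit.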
 

wreedps

Patron
Joined
Jul 22, 2015
Messages
225
If you are connected over a gigabit network, you will saturate your network before ever crunching those hard drives.

Got it. Yeah I am seeing that.

Setup is currently:
Asus mobo, Core i3, 8GB ECC <-- will be more memory in the new box.
2x Intel 1GbE NICs in an LACP bond to a Cisco 3750
1x LSI 9260 with 4x 3TB Seagate 7200RPM in hardware RAID 5, passed through to VMware (no ZFS)
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Couple points here:

1. You're going to need way more memory than 8GB, and probably want more than 16GB, if you want more than "homelab" level performance in VMware.

2. VMFS + parity RAID = poor performance. You want to use mirrored vdevs or your write latencies will suck.

3. LACP and iSCSI don't mix; use MPIO instead.

You'll want to look at SSDs for SLOG ("write cache") only at this stage, especially with that little RAM. Check the stickied thread at the top of this subforum about "insights into SLOG/ZIL".
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
For more iSCSI performance, you will need lots of RAM, and you should set your disks up in mirrors.
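For illustration, a pool of striped mirrors built from four disks looks something like this (the pool name "tank" and device names da0-da3 are placeholders, not from this thread):

zpool create tank mirror da0 da1 mirror da2 da3
zpool status tank    # should show two mirror vdevs striped together

Random-write IOPS scale with the number of vdevs, which is why two mirrors will beat a single RAIDZ vdev for VM storage.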
 

wreedps

Patron
Joined
Jul 22, 2015
Messages
225
Couple points here:

1. You're going to need way more memory than 8GB, and probably want more than 16GB, if you want more than "homelab" level performance in VMware.

2. VMFS + parity RAID = poor performance. You want to use mirrored vdevs or your write latencies will suck.

3. LACP and iSCSI don't mix; use MPIO instead.

You'll want to look at SSDs for SLOG ("write cache") only at this stage, especially with that little RAM. Check the stickied thread at the top of this subforum about "insights into SLOG/ZIL".

I am using LACP and MPIO. My VMware hosts each have 2 iSCSI NICs and the VMFS is set to Round Robin.
Should I still get rid of LACP?

I will probably put 32GB of RAM in the production box.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I am using LACP and MPIO. My VMware hosts each have 2 iSCSI NICs and the VMFS is set to Round Robin.
Should I still get rid of LACP?

Yes, get rid of it. Link aggregation will make those two interfaces share a single IP address, meaning the iSCSI traffic will only ever go across one path. Separate them so they can't route to each other (subnets < VLANs < two physical switches) and add both IPs to the portal.

See section 10.5 of the docs, under iSCSI sharing, for more details on this.
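Roughly, the setup looks like this; a sketch only, with the interface, adapter, and device names (igb0/igb1, vmhba33, vmk1/vmk2, naa.012345) standing in for whatever your hardware actually reports:

# FreeNAS side: one IP per iSCSI NIC, each in its own subnet, e.g.
#   igb0 -> 10.10.10.10/24 and igb1 -> 10.10.20.10/24,
# then list both IPs in the iSCSI portal.
# ESXi side: bind one vmkernel port per physical NIC to the software
# iSCSI adapter, then set the LUN's path policy to round robin:
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2
esxcli storage nmp device set -d naa.012345 -P VMW_PSP_RR

With both paths active and round robin set, traffic spreads across the NICs without any link aggregation involved.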

I will probably put 32GB of RAM in the production box.

Much better; what kind of workload will the VMs have and how many of them are you planning to run?
 

wreedps

Patron
Joined
Jul 22, 2015
Messages
225
Yes, get rid of it. Link aggregation will make those two interfaces share a single IP address, meaning the iSCSI traffic will only ever go across one path. Separate them so they can't route to each other (subnets < VLANs < two physical switches) and add both IPs to the portal.



Much better; what kind of workload will the VMs have and how many of them are you planning to run?

These are running my test lab at home and a small 15-user Exchange server. The workloads will be random, I believe. Most likely I will have 30 or so Windows 2012 VMs, sitting idle most of the time, until I start to hammer on them.
 

wreedps

Patron
Joined
Jul 22, 2015
Messages
225
The SLOG should be an SSD, correct? If so, I have several SSDs ready for deployment. How do I size it? I'll read up if you point me to articles.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
The SLOG should be an SSD, correct? If so, I have several SSDs ready for deployment. How do I size it? I'll read up if you point me to articles.

It absolutely has to be an SSD, although not every one is a good one. Check the stickied thread in this subforum about whether or not the SSD is power-fail safe (generally, if it's not an Intel, it isn't), or post your list here.

Regarding sizing, it depends on the SSD. The optimal approach is setting a cap at the drive firmware level so that it reports itself as smaller and uses its own overprovisioning. Next best is a BIOS/controller-imposed limit, and finally carving out a smaller partition via the command line in FreeNAS.
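If you go the partition route, it looks roughly like this from the FreeNAS command line (da4, the 16G size, and the pool name "tank" are placeholders; a SLOG only holds a few seconds of in-flight writes between transaction group commits, so small is fine, especially on gigabit links):

gpart create -s gpt da4                        # fresh GPT on the SSD
gpart add -t freebsd-zfs -s 16G -l slog0 da4   # small labeled partition
zpool add tank log gpt/slog0                   # attach as a dedicated log device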
 

wreedps

Patron
Joined
Jul 22, 2015
Messages
225
PirateGhost,

You should be happy now :)

[screenshot attached]
 

wreedps

Patron
Joined
Jul 22, 2015
Messages
225
I also just built 2 brand new Supermicro 24-bay servers loaded with 4TB HGSTs. NO HARDWARE RAID!

Running like a champ! I am using LACP until I test the other methods. All these boxes are production.
 

wreedps

Patron
Joined
Jul 22, 2015
Messages
225
[screenshot: switch port status]


I think I need to experiment...
Gig0/0/19 and 20 are the NICs to FreeNAS. Gig0/0/2 is the uplink to my home's ASA5520.
 

wreedps

Patron
Joined
Jul 22, 2015
Messages
225
10GbE NICs will be here Friday.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
10GbE NICs will be here Friday.

You still want to ditch LACP in favor of MPIO if you're using iSCSI though.

I mean, 10Gbps is nice, but wouldn't you rather have 20Gbps without the aggregation overhead? ;)
 

wreedps

Patron
Joined
Jul 22, 2015
Messages
225
LACP will be ditched when the 10GbE NICs arrive.
 

wreedps

Patron
Joined
Jul 22, 2015
Messages
225
Any other tips you can provide me?

How do I put my drives in mirrors but still present them as one to ESX?
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
Any other tips you can provide me?

How do I put my drives in mirrors but still present them as one to ESX?
You mirror them in FreeNAS.

That has nothing to do with sharing them out via a protocol like iSCSI.
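As a rough sketch (pool and zvol names are examples, not from this thread): once the pool of mirrors exists, carve a zvol out of it and back an iSCSI extent with that zvol; ESX then sees a single LUN no matter how many vdevs are underneath.

# assuming the pool of mirrors already exists (see earlier sketch):
zfs create -s -V 4T tank/vmware1   # sparse zvol as the LUN's backing store
# then in the GUI: Sharing -> Block (iSCSI) -> add a device extent
# backed by tank/vmware1 and attach it to a target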
 