Ditch my LSI 3Ware 9650SE-8LPML for M1015?

Status
Not open for further replies.

M@TT

Dabbler
Joined
Mar 22, 2016
Messages
16
My build is in my signature... I'm just trying to figure out where my bottleneck is and what max speeds I should expect on this hardware.

So far I am getting about 150MB/s transfer speeds using QL2460s in target mode via iSCSI. I have played with the settings on my 9650 for the 8 drives going through it (created as individual exports, not JBOD) and have only seen performance go down from 150MB/s, not up. Right now I have two zvols set up at 1TB each and both are below 50% full.

I have dug and dug through searches to find the aha! setting or tweak for what I have, but it leaves me wondering if I should just get two M1015s and switch to RAID10 with 16 drives?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Using RAID controllers never ends well, so M1015s/LSI SAS 9211/9207/etc. are the safe option.

That said, iSCSI is a very tricky workload for ZFS, so adjust your expectations accordingly. Tons of resources are needed to get something really good.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
I'm trying to figure out how you are getting 150MegaBytes/Second. Are you on 10GbE?
 
Joined
Apr 9, 2015
Messages
1,258
I'm trying to figure out how you are getting 150MegaBytes/Second. Are you on 10GbE?
Yeah, that seems a little high. Normally I top out at around 130MB/s to or from my FreeNAS, but that is only with a computer that has an SSD.

Though with iSCSI the speed will drop off the more it is used, from what I have read. Correct me if I am wrong, but it should be treated much like VM storage, with as many vdevs as possible rather than a single RAIDZ2, for better performance?
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
120MB/s is pretty much the limit of 1 Gigabit per second Ethernet.

So either the OP has 10Gig links, and is getting horrible performance, or something else is going on.
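For reference, the arithmetic behind that ceiling, assuming roughly 94% payload efficiency after TCP/IP and Ethernet framing overhead (the exact figure varies with frame size):

echo "1000000000 / 8 * 0.94 / 1000000" | bc -l   # 125 MB/s raw line rate * ~0.94 = ~117 MB/s usable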
 

M@TT

Dabbler
Joined
Mar 22, 2016
Messages
16
Hell no. You want to switch to mirror vdevs.
I thought mirrored vdevs were the FreeNAS way to do RAID10? I guess if I were doing RAID10 on the 9650, that is what you are making sure I am not indicating here? I will keep an eye out for a deal on M1015s on eBay and throw the 9650 in another box to replicate backups to ;) It just seems like the further I get into building and tweaking this beast, the less I want to look back at my receipts :eek:
 

M@TT

Dabbler
Joined
Mar 22, 2016
Messages
16
120MB/s is pretty much the limit of 1 Gigabit per second Ethernet.

So either the OP has 10Gig links, and is getting horrible performance, or something else is going on.
I am using the "FreeNAS 9.3 FC Fibre Channel Target Mode DAS & SAN" guide to set up DAS using a 4Gb QLogic 2460 between my FreeNAS box and my workstation. I am using iSCSI right now, testing with Hyper-V on my workstation, but will later transition to a fibre fabric instead of DAS to connect a couple of blade centers for testing.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I thought mirrored vdevs were the FreeNAS way to do RAID10? I guess if I were doing RAID10 on the 9650, that is what you are making sure I am not indicating here? I will keep an eye out for a deal on M1015s on eBay and throw the 9650 in another box to replicate backups to ;) It just seems like the further I get into building and tweaking this beast, the less I want to look back at my receipts :eek:

"RAID10" implies ... RAID10. Mirrored vdevs are a ZFS thing and aren't exactly RAID10. Say what you mean to avoid confusion. https://forums.freenas.org/index.php?threads/terminology-and-abbreviations-primer.28174/

It's particularly problematic because controllers like the M1015 are perfectly capable of RAID10, and we have a steady stream of people who are trying to use RAID controllers for ZFS, or using M1015's in mfi mode, or whatever. You wouldn't refer to your fibre channel as ethernet, just because it's similar in function, would you?
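To make the distinction concrete, here is a sketch of what "striped mirrors" look like when defined in ZFS itself rather than on a controller (the pool name and da* device names are placeholders; in FreeNAS this would normally be built through the volume manager GUI):

zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5   # three 2-way mirror vdevs, striped
zpool status tank                                                # confirm the vdev layout

ZFS stripes writes across the mirror vdevs, which is similar in shape to RAID10 but managed entirely by ZFS, so checksumming, self-healing and per-vdev resilvering still apply.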
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
So to summarize, and maybe I got this wrong, but I think you are running iSCSI over Fibre Channel to a Hyper-V VM, and the underlying storage is hardware RAID.
I've got nothing, sorry. Start with the storage and work your way out. What will the pool provide locally, performance-wise? What can you push over the iSCSI network path (use iperf, if that's even possible)?
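A sketch of that network-path test, assuming there is an IP path to test at all (FreeNAS 9.x ships iperf; the address is a placeholder):

iperf -s                        # on the FreeNAS box
iperf -c 192.168.1.100 -t 30    # on the client, pointed at the FreeNAS box's IP

Over a pure Fibre Channel point-to-point link there may be no IP path at all, which is why iperf might not be possible here.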
 

M@TT

Dabbler
Joined
Mar 22, 2016
Messages
16
"RAID10" implies ... RAID10. Mirrored vdevs are a ZFS thing and aren't exactly RAID10. Say what you mean to avoid confusion. https://forums.freenas.org/index.php?threads/terminology-and-abbreviations-primer.28174/

It's particularly problematic because controllers like the M1015 are perfectly capable of RAID10, and we have a steady stream of people who are trying to use RAID controllers for ZFS, or using M1015's in mfi mode, or whatever. You wouldn't refer to your fibre channel as ethernet, just because it's similar in function, would you?
Good thread and I am a great example of a poster describing something incorrectly :( I will try to catch up with the lingo to make sure I am not incorrectly describing things going forward and appreciate the correction here.
 

M@TT

Dabbler
Joined
Mar 22, 2016
Messages
16
So to summarize, and maybe I got this wrong, but I think you are running iSCSI over Fibre Channel to a Hyper-V VM, and the underlying storage is hardware RAID.
I've got nothing, sorry. Start with the storage and work your way out. What will the pool provide locally, performance-wise? What can you push over the iSCSI network path (use iperf, if that's even possible)?
I will try to run some tests locally after researching the best way to do this and how to test the network path with this configuration.

I am unclear on whether I would call it hardware RAID or not, but that is correct... I had the 9650 export the disks as JBOD, which killed performance... then I went back to creating individual disks to present to FreeNAS and created a RAIDZ2 volume before creating the ZFS volumes I am presenting over iSCSI.

Since I am using Hyper-V on a Windows 10 box, I have to create a normal NTFS disk and then drop my vhdx files on it. When I am testing speeds I am copying files from an SSD on my workstation to the NTFS disk, which is my ZFS volume presented via "FC Fibre Channel Target Mode" using iSCSI. Hope I am not describing this incorrectly or unclearly...
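For the local test, a rough sequential baseline from the FreeNAS shell would look something like this (the pool path and sizes are hypothetical; with lz4 compression enabled, writing zeros is meaningless, so use a dataset with compression off or a more realistic data source):

dd if=/dev/zero of=/mnt/tank/testfile bs=1M count=8192   # sequential write
dd if=/mnt/tank/testfile of=/dev/null bs=1M              # sequential read
rm /mnt/tank/testfile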
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Good thread and I am a great example of a poster describing something incorrectly :(

No harm done, the point is basically just to make sure we're accurately communicating what's actually going on. I used to spend a lot of time reading the tea leaves to figure out what people meant, but the problem with that is I can be thinking "A doesn't make any sense, so he must mean B" while the user actually does mean A. Plus a lot of the other readers will make different translations, or not make that translation at all, or just pass on the discussion.

The ones who don't take offense, and welcome to that crew, those are the guys who tend to be most successful on the forum, because they're best positioned to learn and pick up new and interesting things here. It's a great community but you kinda have to put some effort into it sometimes.
 

M@TT

Dabbler
Joined
Mar 22, 2016
Messages
16
I was able to get two 9240-8is (PCI-E 6Gb LSI RAID IBM M1015 46M0861) and flash them with the P20 (version 20) firmware in IT mode. Speeds I am getting now...

-----------------------------------------------------------------------
CrystalDiskMark 5.1.2 x64 (C) 2007-2016 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

Sequential Read (Q= 32,T= 1) : 174.393 MB/s
Sequential Write (Q= 32,T= 1) : 174.377 MB/s

Test : 1024 MiB [D: 71.5% (731.9/1023.9 GiB)] (x5) [Interval=5 sec]
Date : 2016/04/04 19:28:32
OS : Windows 10 Enterprise [10.0 Build 10586] (x64)

I will do some more testing to see where the bottleneck is, but maybe this is the max speed I will get? Oh... I also switched back to mirrored vdevs with 14 500GB 7200 RPM drives.
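For anyone following along, a crossflash to P20 IT-mode firmware is typically done from an EFI or DOS boot environment with something along these lines (a sketch only; the firmware and BIOS file names are placeholders, and most guides include extra erase/megarec steps that are omitted here):

sas2flash -listall                          # confirm the controller is visible
sas2flash -o -f 2118it.bin -b mptsas2.rom   # write the P20 IT firmware and boot ROM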
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Seems like you could do a lot better than that. However, tuning is usually required after a certain point.

Have you run the autotune stuff on the FreeNAS side, by any chance?
 

M@TT

Dabbler
Joined
Mar 22, 2016
Messages
16
I had not run autotune yet, simply because I had come across a few posts where people had been advised to disable it... I will kick the tires on enabling autotune and see how it goes ;D
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I wasn't actually implying that autotune was a good idea; we've seen it cause problems for people. But do feel free to try it and report back, and we can work forward from there.
 

M@TT

Dabbler
Joined
Mar 22, 2016
Messages
16
OK... so enabling autotune and rebooting is in fact giving me the same speeds (174MB/s). Here is a capture of what is in my tunables after enabling autotune. I will keep digging/testing...
[Attached screenshot: tunables created by autotune]
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Do me a favor and double the 2097152 values, and reboot (not strictly necessary, but the easiest way to propagate the changes). If that improves your speeds, it gives us a direction to look in.
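Assuming the 2097152 entries are the TCP buffer limits autotune commonly creates (kern.ipc.maxsockbuf, net.inet.tcp.sendbuf_max and net.inet.tcp.recvbuf_max; an assumption, since the exact list depends on the screenshot above), doubling them for a quick test from the shell would look like:

sysctl kern.ipc.maxsockbuf=4194304        # 2 x 2097152
sysctl net.inet.tcp.sendbuf_max=4194304
sysctl net.inet.tcp.recvbuf_max=4194304

Values set this way do not survive a reboot, so to follow the suggestion above, edit the GUI tunables and then reboot.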
 

M@TT

Dabbler
Joined
Mar 22, 2016
Messages
16
OK... so it looks like I am seeing the same speeds even with those changes and a reboot. The only thing I am curious about is how much of a performance hit having encryption on contributes here. I don't ever see heavy CPU usage during my tests, so I never thought this could be a bottleneck... but maybe I am wrong there.
-----------------------------------------------------------------------
CrystalDiskMark 5.1.2 x64 (C) 2007-2016 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

Sequential Read (Q= 32,T= 1) : 174.673 MB/s
Sequential Write (Q= 32,T= 1) : 175.919 MB/s


Test : 1024 MiB [F: 45.8% (366.4/799.9 GiB)] (x5) [Interval=5 sec]
Date : 2016/04/06 18:36:04
OS : Windows 10 Enterprise [10.0 Build 10586] (x64)

[Attached screenshot: CrystalDiskMark results]
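On the encryption question, a quick way to check whether GELI is using the CPU's AES-NI instructions rather than doing the crypto in software (commands assume the FreeNAS shell; the expected output is illustrative):

grep -i aesni /var/run/dmesg.boot   # look for a line like "aesni0: <AES-CBC,AES-XTS,...>"
kldstat | grep aesni                # confirm the aesni kernel module is loaded
openssl speed -evp aes-256-cbc      # rough single-core AES throughput for comparison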
 