Ditch my LSI 3Ware 9650SE-8LPML for M1015?

Status
Not open for further replies.

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
So, wait, this is RAIDZ2? If so, that's probably just a side effect of not having a good pool design. What does gstat report for disk activity when you're running this at full tilt?

The problem with NAS is that you really have to have every single duck lined up in a row or things will be suboptimal. Weakest link in the chain, and all that.
 

M@TT

Dabbler
Joined
Mar 22, 2016
Messages
16
So, wait, this is RAIDZ2? If so, that's probably just a side effect of not having a good pool design. What does gstat report for disk activity when you're running this at full tilt?

The problem with NAS is that you really have to have every single duck lined up in a row or things will be suboptimal. Weakest link in the chain, and all that.


I had RAIDZ2, then got the two M1015s and switched back to mirrored vdevs with more drives so I can stay under 80% used. I think I'm at 48% now.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Hmm. Okay, still, what's gstat say when under heavy load?
 

M@TT

Dabbler
Joined
Mar 22, 2016
Messages
16
Ah yes... I forgot to add that I won't be able to run that until this evening. I did run it during initial testing, but unfortunately I forget what I saw.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
And just to short-circuit the discussion: if that happens to be 80-90-100%, you may already be at the practical limit for your setup. If not, we can do some basic testing to see if there's a problem with the pool; otherwise this may be in the land of FC problems that no one here has any idea about.
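If you want to automate that check rather than eyeballing the curses display, here's a rough Python sketch that shells out to gstat in batch mode and flags any disk sitting in that 80-100% busy range. The gstat flags and the column positions are assumptions based on typical gstat batch output, so treat it as a starting point rather than a finished tool.

```python
# Rough sketch: flag disks that gstat reports as pegged near 80-100% busy.
# Assumes FreeBSD gstat with -b (batch, no curses), -p (physical providers
# only) and -I 1s (1-second interval); the %busy column position is an
# assumption based on typical gstat batch output.
import subprocess

BUSY_THRESHOLD = 80.0   # percent busy -- the range discussed above

proc = subprocess.Popen(["gstat", "-bp", "-I", "1s"],
                        stdout=subprocess.PIPE, text=True)

for line in proc.stdout:
    fields = line.split()
    if len(fields) < 2:
        continue
    busy, name = fields[-2], fields[-1]
    try:
        pct = float(busy)
    except ValueError:
        continue    # skip header/separator lines
    if pct >= BUSY_THRESHOLD:
        print(f"{name}: {pct:.0f}% busy -- at or near the practical limit")
```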
 

M@TT

Dabbler
Joined
Mar 22, 2016
Messages
16
[attachment: upload_2016-4-7_17-45-17.png]
[attachment: upload_2016-4-7_17-45-42.png]
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Okay, so, let me just put this out there: CrystalDiskMark isn't really all that great a test with ZFS. What happens if you try running two instances of it on two different VMs at the same time?
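If you later want something more controllable than CrystalDiskMark, here's a rough, hypothetical Python sketch of the same "two workloads at once" idea: two sequential write workloads run concurrently, with per-worker and aggregate throughput reported. The file paths and sizes are placeholders, not anything from this setup.

```python
# Crude stand-in for two benchmark instances running at the same time
# (this is NOT CrystalDiskMark): two threads each write 1 GiB sequentially
# and report individual plus aggregate throughput. Paths and sizes are
# placeholders -- point them at the datastore under test.
import os
import threading
import time

FILE_SIZE = 1024 * 1024 * 1024          # 1 GiB per worker
BLOCK = os.urandom(1024 * 1024)         # 1 MiB of random data, so ZFS compression doesn't skew results
PATHS = ["testfile_vm1.bin", "testfile_vm2.bin"]   # hypothetical target files

results = {}

def seq_write(path):
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(FILE_SIZE // len(BLOCK)):
            f.write(BLOCK)
        f.flush()
        os.fsync(f.fileno())        # make sure the data actually hit the pool
    results[path] = (FILE_SIZE / (1024 * 1024)) / (time.time() - start)

threads = [threading.Thread(target=seq_write, args=(p,)) for p in PATHS]
t0 = time.time()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - t0

for path, mbps in results.items():
    print(f"{path}: {mbps:.0f} MB/s sequential write")
print(f"aggregate: {len(PATHS) * FILE_SIZE / (1024 * 1024) / elapsed:.0f} MB/s")
```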
 

M@TT

Dabbler
Joined
Mar 22, 2016
Messages
16
So run four instances, or two instances total? Here it is running on two different VMs (two instances total); however, I noticed that even though I started both at the same time, they seem to have completed at different times... I may need to find a better way to benchmark this if CrystalDiskMark is not the best tool here.

[attachment: upload_2016-4-8_6-49-35.png]
[attachment: upload_2016-4-8_6-52-19.png]
[attachment: upload_2016-4-8_6-53-38.png]
[attachment: upload_2016-4-8_6-55-21.png]
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
So run four instances, or two instances total?

My patented, super-annoying "Yes" answer applies here. :smile:

Here it is running on two different VMs (two instances total); however, I noticed that even though I started both at the same time, they seem to have completed at different times... I may need to find a better way to benchmark this if CrystalDiskMark is not the best tool here.

View attachment 11254

Part of the problem is that in general, ZFS and benchmarks aren't really good bedfellows. ZFS is doing a ton of stuff that isn't particularly deterministic in the sorts of ways that make for a good benchmark.

If you look at that last round of testing, you came very close to doubling aggregate performance merely by running two copies at a time. I bet if you run three copies you'll start seeing some falloff, because the underlying disk pool is showing signs of being pretty busy. It isn't just the straight-line read speeds that are important with ZFS (your drives are probably ~100MB/sec), but also the seek speeds. Since ZFS is a CoW filesystem, the more free space there is on the pool, the less fragmentation there is and the faster writes will be. Reads are basically sped up through massive amounts of ARC and L2ARC, which you don't have in great supply on an E3 platform.

So my best guess is that if you try again with three VM's, you'll wind up with read speeds ~100-120MB/sec, write speeds marginally lower, and when you look at your disks with gstat they'll be pegged. You've probably found the limits here.

With seven 500GB vdevs, you have 3.5TB of storage. For maximum performance, don't use more than 1.5TB of it. Even at that, you're likely to notice over time that the numbers you get will drop somewhat. They'll get to a certain level and then stabilize. Look at my frequent discussions on fragmentation for help understanding this.
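To put rough numbers on that, here's a quick back-of-the-envelope sketch using only the figures from this thread: seven mirrored vdevs of 500GB drives, the general "stay under 80%" rule, and the tighter ~1.5TB suggestion above.

```python
# Back-of-the-envelope version of the capacity advice above.
VDEVS = 7                 # mirrored vdevs in the pool
VDEV_SIZE_TB = 0.5        # each mirror yields one 500GB drive's worth of space

pool_size = VDEVS * VDEV_SIZE_TB            # 3.5 TB of pool space
eighty_pct_ceiling = 0.80 * pool_size       # the general "stay under 80%" rule
performance_ceiling = 1.5                   # TB -- the tighter limit suggested above

for label, tb in [("pool size", pool_size),
                  ("80% rule ceiling", eighty_pct_ceiling),
                  ("suggested performance ceiling", performance_ceiling)]:
    print(f"{label}: {tb:.1f} TB ({tb / pool_size:.0%} of pool)")
```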
 