So you are going with "impossible" on an E3 for a single VM, cyber? Not even one in 1000, when all the experts around here can pull reasonable performance off?
I'm not an expert?
But seriously, the problem is that what *I* might be able to do with that hardware and what you are *capable* of doing in 6 months of dedicated work may not be the same thing (and almost certainly aren't the same thing). So yes, I do write off a lot of "remote possibilities" because it's totally unreasonable to think that you're going to dedicate months of your time to something that could simply be solved with $1500. It's like your ex-wife wanting you dead, but instead of hiring a hitman she's banking on that 1-in-a-trillion chance you'll happen to be killed by a falling meteorite from space. Not likely, and not something you should depend on.
In any case there has to be a MAX achievable speed on an expertly configured system. At least at that point there would be a reasonable response such as... a 32GB Haswell can get (x) MB/s throughput and (x) IOPS on 6 mirrored striped WD Reds, with a 300Mbps-or-better SLOG. There is no baseline, no reasonable means for someone to move forward.
The problem is that things like "IOPS" mean absolutely *NOTHING*. Not even trying to be funny. What is an IOP? Can you define it? Hint: you can't, because the definition requires you to say "IOPS of iSCSI" or "IOPS of a zpool". They are NOT the same thing, so you can't just throw out an IOPS number.
You also can't throw out throughput numbers, because they depend on factors that are unique to your situation. How full is your pool? How fragmented is your pool? Are you doing kernel-mode iSCSI or the current istgt? Are you doing file-based extents or zvols?
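Before you even compare numbers with someone else, you need to know what shape *your* pool is in. Here's a rough Python sketch (nothing official, and the pool name "tank" is just a placeholder you'd swap for your own) that pulls the two pool properties people always forget to mention when they quote throughput:

```python
#!/usr/bin/env python3
# Rough sketch: print the pool stats that make benchmark numbers
# non-transferable. Assumes a pool named "tank" (placeholder) and that
# the zpool command is in PATH; adjust for your own setup.
import subprocess

POOL = "tank"  # hypothetical pool name -- use your own

def pool_stats(pool):
    # "capacity" and "fragmentation" are real zpool properties;
    # -H gives script-friendly output, -o picks the columns.
    out = subprocess.check_output(
        ["zpool", "list", "-H", "-o", "name,capacity,fragmentation", pool],
        text=True,
    )
    name, cap, frag = out.split()
    return {"pool": name, "percent_full": cap, "fragmentation": frag}

if __name__ == "__main__":
    print(pool_stats(POOL))
```

Two pools that both supposedly "do 400MB/sec" can look completely different on those two columns, and that's before we even get to the iSCSI side of things.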
So no, I can't throw out an IOPS number and I can't throw out a throughput number. In fact, *nobody* can, except for that very specific hardware and that very specific setup.
So if you want quantifiable numbers it's easy...
A 32GB Haswell can do up to 1 billion IOPS and up to 1 billion TB/sec. Are those numbers unrealistically high? Not at all, because they mean exactly as much to you as whatever userX could claim. These "benchmark" units people use, like MB/sec and IOPS, are all a farce because they aren't easily transferable.
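If you want to see for yourself how slippery an "IOPS" figure is, here's a quick-and-dirty sketch (plain Python, nothing FreeNAS-specific, just appending writes to a temp file, so it's not even random I/O, which is yet another variable) that reports ops/sec for a few write sizes with and without fsync. Same disk, same machine, and the numbers swing by orders of magnitude:

```python
#!/usr/bin/env python3
# Sketch only: the same disk reports wildly different "IOPS" depending
# on write size and whether each write is synced to stable storage.
# Needs a few GB of free space in the temp directory.
import os
import tempfile
import time

def iops(block_size, count, do_fsync):
    buf = os.urandom(block_size)
    fd, path = tempfile.mkstemp()
    try:
        start = time.time()
        for _ in range(count):
            os.write(fd, buf)
            if do_fsync:
                os.fsync(fd)      # force each write to stable storage
        elapsed = time.time() - start
        return count / elapsed
    finally:
        os.close(fd)
        os.unlink(path)

if __name__ == "__main__":
    for bs in (4096, 131072, 1048576):        # 4K, 128K, 1M writes
        for sync in (False, True):
            rate = iops(bs, 2000 if not sync else 200, sync)
            print(f"{bs:>8} bytes, fsync={sync!s:5}: {rate:,.0f} 'IOPS'")
```

Every one of those lines is an "IOPS number" for the exact same hardware. Which one goes in the spec sheet?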
This is why people who do file servers for a living are professional file server guys. Their job is to handle all these little intricacies that I'm not about to teach and that you aren't going to learn in just a weekend of reading. No joke, I had started working on a FreeNAS book. It would easily run 500+ pages and would only begin to touch on many of the topics you might have to understand deeply to be able to "optimize" a given server.
I know you've tested and played with this. I can only conclude you decided it wasn't worth the fight? How fast is iSCSI on the Mini? Unbearable? I was hoping to learn something, and I love it when you speak up.
I didn't even try to do iSCSI. Why? Because of what I said above: "iSCSI performance is crap until you go >64GB of RAM." I didn't have 64GB of RAM, and my purpose wasn't to see just how crappy a Mini is for iSCSI. Not to mention that all those "intricacies" you are possibly unaware of are crucial. Just going from istgt with a file-based extent to kernel-mode iSCSI with a zvol can change performance anywhere from 20% to 250% or more, depending on a bunch of "intricacies" unique to your setup. Oh yeah, and even if you choose kernel-mode iSCSI and a zvol (which should be the fastest), you can screw up one setting in the WebGUI and make it so incredibly slow you'd swear you had a hardware failure.
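Here's a small sketch of what I mean by "one setting". It uses plain ZFS from the command line instead of the WebGUI, and the dataset name tank/synctest and its mountpoint are made up for the example (use a throwaway dataset you don't care about, and run it as root), but flipping the real "sync" property from standard to always and timing a burst of small writes shows you the kind of cliff I'm talking about:

```python
#!/usr/bin/env python3
# Sketch only: one ZFS property flipped the wrong way can make writes
# crawl. Assumes a throwaway dataset tank/synctest mounted at
# /mnt/tank/synctest (both names are made up); needs root.
import os
import subprocess
import time

DATASET = "tank/synctest"        # hypothetical throwaway dataset
MOUNT = "/mnt/tank/synctest"     # hypothetical mountpoint

def timed_writes(path, count=500, size=8192):
    buf = os.urandom(size)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(count):
            f.write(buf)
            f.flush()            # hand each write to the filesystem
    return time.time() - start

def run(sync_value):
    # "sync" is a real ZFS property; standard vs always is the cliff.
    subprocess.check_call(["zfs", "set", f"sync={sync_value}", DATASET])
    secs = timed_writes(os.path.join(MOUNT, "testfile"))
    print(f"sync={sync_value}: {secs:.2f}s for 500 x 8K writes")

if __name__ == "__main__":
    for value in ("standard", "always"):
        run(value)
    subprocess.check_call(["zfs", "inherit", "sync", DATASET])  # clean up
```

The iSCSI-side knobs (sync settings, block sizes, extent type) behave the same way: flip the wrong one and your "hardware failure" turns out to be a checkbox.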
So no, when I say things like "64GB of RAM or bust," I'm serious to about 99%. If you're stubborn enough to spend a year of your life trying to figure out how to make it work with 16GB of RAM, there's a non-zero probability you'll pull it off. Meanwhile, you could have spent $1500 on hardware that would do the job. Guess which one is the logical solution to the problem? ;)
The *real* question is "what is the fastest way to achieve the desired result?" Nobody wants to spend money they don't have to, and nobody wants to believe they couldn't find all of the answers in 2-3 months. Yet there are dozens of users who spent 4+ months before they gave up. I'm glad those people didn't work for me, because it would have been easier to buy the new hardware than to pay an employee to do "research" for 4+ months and still end up empty-handed when a solution was required.
So take my advice, or don't. The path of least resistance is very well documented on this forum by users who have spent absurd amounts of time trying to make it work with horribly inappropriate hardware. If you think you're going to beat the odds, feel free to continue down that path. Meanwhile I'll simply cite the graveyard of "my iscsi sucks balls and I want speed! Please help!" threads that have existed for 2+ years and are still unsolved because the person eventually gave up.
Feel free to reacquaint yourself with slides 49 and 50 of my presentation where I specifically discuss this exact topic. And the last two bullets sum up this thread nicely...
- You can expect that the issue will not be resolved quickly by just making a few changes and a reboot.
- Most users will find that they spend a month or more of intensive research and testing to resolve iSCSI performance issues if they have never tuned ZFS before.
- You can expect to have very high server hardware requirements if you use a lot of iSCSI devices.
Now if you'll excuse me, I will promptly unsubscribe from this thread, because this issue is not something I'm open to debating. The facts are the facts, whether you like them or not. And I discussed this topic plenty in my noobie guide so I wouldn't have to hash it out "yet again" on the forums.