1. You cannot remove the redundancy disks from the math. Whether you have 5x4TB in a RAID0, RAIDZ1, or RAIDZ3, you need the same amount of RAM per the rule of thumb (20GB; I'll ignore the conversion from binary TB to decimal TB for now). The parity data is cached too, because if the actual data is corrupted you'll want the parity data in RAM as well. Keep in mind that the rule of thumb is not a 100% certainty. Some setups need much, much more, and some need less. Some people don't care about performance, but people like you and I do.
Reading the best practices, I had no idea whether to include the redundancy disks or leave them out when planning RAM requirements, so that's something else I've learned here.
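For my own notes, here's a minimal sketch of that rule-of-thumb arithmetic as I now understand it. The 1GB-of-RAM-per-raw-TB figure is just the forum rule of thumb, not a guarantee, and the numbers are my own 5x4TB case:

```python
# Rule-of-thumb sketch: roughly 1 GB of RAM per TB of *raw* pool capacity.
# Parity disks count too, since the redundancy can't be removed from the math.
disks = 5
disk_tb = 4                   # raw size of each disk in TB
raw_tb = disks * disk_tb      # 20 TB raw, whether RAID0, RAIDZ1, or RAIDZ3
ram_gb = raw_tb * 1           # ~1 GB of RAM per raw TB as a starting point
print(f"{raw_tb} TB raw -> roughly {ram_gb} GB of RAM to start from")
```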
2. You're really splitting hairs trying to save approximately 7% of your RAM in the math. It doesn't matter what the math says; it only matters how much RAM gives you good performance. For me, with my 18x2TB RAIDZ3 and my typical use, I needed to go to 20GB of RAM. 12GB was horrible, 16GB was better (but not perfect), and 20GB worked for me. I also expect that if I add more disk space later I'd better buy another 8GB stick of RAM. Arguing over that 7% seems like a waste of time, since you can't argue your computer into believing it has enough RAM. I also know that when friends visit and we use my server a lot, performance takes a big dump. So if I ever plan to have a long-term roommate, I'd better get more RAM.
I wasn't trying to defend only having 12GB of RAM in this server; rather, I wanted to understand the process of deciding how much RAM one needs/should start off with and work from there. Obviously there is no straight answer, but there are always recommendations for starting points.
4. I understand your anger towards me. And believe me, you aren't the first to get upset with the answer of "add more RAM," and I'm sure you won't be the last. People don't like being told to spend money, especially when they're convinced that "I must have enough." But I have just a little experience with ZFS (looks at his 3000+ posts), and it seems logical that more RAM would help. I knew what your response would be, because every person who gets told to upgrade RAM hates the answer. I fought with my own server for almost 2 months before I gave up and bought more RAM ($70 for my stick). I'm a disabled vet, and money is tight right now thanks to the VA taking 18 months to process the average disabled veteran's paperwork. What really sucks is the people who built a FreeNAS server with a very low RAM limit, are already at that limit, and need more. This is why more RAM is mentioned numerous times in my presentation. Everyone tries to argue it, and I've seen so many people argue with me, some PM me to tell me to F-off, etc., that quite often when I see someone with an absurdly low amount of RAM I don't even bother to respond. If they couldn't figure out their problem via the manual, my guide, and searching the forum, why should I expect they'll listen to me? They've already drowned out the answer, and who likes being told what is already so plainly obvious in many places? I've seen so many threads with not enough RAM that it isn't funny.
This is where I get the most frustrated. I am here to learn first and build a good server second. The fact that I need more RAM doesn't bother me a bit. If I didn't want to spend the money, I wouldn't have ventured into a project like this. 70% of this project is educational for me and the other 30% is practical. The most likely case is that I will buy an entirely new setup once I can prove to myself that I can build one of these machines, though it will probably be on a company credit card this time. Of course I want this project to have great performance, because then when I'm building my next one I'll be more knowledgeable on the subject and won't have to keep sounding like an idiot to guys like you on the internet.
5. CIFS, NFS, FTP, and iSCSI don't work the same way. CIFS generally doesn't see a big performance improvement from a ZIL; NFS and iSCSI can and often do. The whole reason there are different protocols is that people had different ideas about what is most important and how it should work internally. Additionally, CIFS is single-threaded and NFS isn't, and there is no built-in support for NFS in the home versions of Windows. Each has pluses and minuses, and you have to weigh them. iSCSI just does "dumb reads and writes" and has no idea what a file system, or even a file, is. It just reads or writes the appropriate sectors and sends the data to the machine that needs it. So, not shockingly, they don't perform the same.
Both your and jgreco's explanations of the ZIL have led me to rethink my purchase of my SSDs. I am reviewing certain synchronous-write use cases that may interest me, but more than likely I will end up removing them and using them for something else.
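To make the sync-write point concrete for myself, I put together a rough sketch (the path is made up; adjust it to a dataset on the pool). My understanding is that synchronous writes are the ones that have to be committed through the ZIL before they're acknowledged, which is why NFS and iSCSI workloads can benefit from a dedicated ZIL device while typical CIFS traffic doesn't:

```python
import os
import time

PATH = "/mnt/tank/ziltest.bin"   # hypothetical dataset path; pick somewhere on the pool
BLOCK = b"\0" * 4096             # small 4 KiB writes, where sync overhead shows up most
COUNT = 2000

def timed_writes(sync_each_write):
    start = time.time()
    with open(PATH, "wb") as f:
        for _ in range(COUNT):
            f.write(BLOCK)
            if sync_each_write:
                os.fsync(f.fileno())  # force a synchronous commit, like NFS/iSCSI sync writes
    return time.time() - start

print(f"async (CIFS-like)    : {timed_writes(False):.2f}s")
print(f"sync (NFS/iSCSI-like): {timed_writes(True):.2f}s")
```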
6. I don't close topics as a matter of course. You hate my answer, fine. You want to politely tell me to screw off, fine. If you want to send me a PM and tell me how much you hate me, that's fine too. But following draconian rules where I shut up people who disagree with me doesn't do either of us any good. You may be the one with the problem, but you definitely wouldn't learn anything if you had posted and I'd responded with "another idiot... RTFM" and closed the thread. There's a reason the US Constitution includes freedom of speech, and a reason I chose to defend it. It would be a little idiotic to then deliberately silence someone else.
Good day to you. I do hope you get your performance issue figured out.
And so, thanks to your help, a few pointers, and some more reading, I believe I've tracked down my issue and resolved it, for now at least. I spent a lot of time last night doing performance tests on the box itself, getting write speeds of around 530MB/s; I forget my read speeds, but they weren't bad. Locally I was testing with 25GB files to avoid caching. I then started doing network tests between my two best test boxes and my NAS server and found a very big issue. Most of the time, iperf would cap out around 250Mb/s over the '1GbE' link; using a 64k window size helped a little and kicked it to around 400, maybe 500, but nothing close to what it should be. This morning (got into work early for this), I moved all three machines to a new rack. I didn't care about the external NICs, so I stuck them in whatever switch was available, but I put a brand new managed switch between the boxes with new cable on the storage NICs and did more network testing. I'm now receiving about 930Mb/s over each line. I am finally moving the data I want on this array over to one of my CIFS shares: about 4.32TB is transferring at a steady 110-114MB/s.
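As a sanity check on those numbers (rough arithmetic, ignoring exact protocol overhead):

```python
# Quick sanity check: what a clean 1GbE link should deliver in bytes.
link_mbps = 930                    # measured with iperf, in megabits per second
raw_mb_per_s = link_mbps / 8       # bits -> bytes: roughly 116 MB/s before overhead
print(f"{link_mbps} Mb/s is about {raw_mb_per_s:.0f} MB/s before protocol overhead")
# After TCP and CIFS overhead, a steady 110-114 MB/s is essentially line rate.
```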
The key here was a point that jgreco brought up. I had already known it but didn't think anything of it, and figured maybe I was bursting on the switch up to 2Gb/s. But the write speed for iSCSI was far too high and never dropped to regular 1GbE speeds. It turns out the client was caching the writes and slowly transferring the files over to the server. Watching the network graphs (on my old switch) while copying files, even after the copies were "finished" the storage NIC would remain around 40-50% utilized, and the iostat of the pool showed exactly what I suspected: the files were slowly being written out to the disks.
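If anyone wants to repeat the measurement without the client cache skewing the numbers, here's a rough sketch of the test I should have run in the first place. The path is made up, and the whole idea is just to keep the flush inside the timed window so cached-but-uncommitted data doesn't inflate the result:

```python
import os
import time

TARGET = "X:/throughput_test.bin"   # hypothetical path on the iSCSI-backed volume
SIZE = 2 * 1024**3                  # 2 GiB test file, big enough to defeat small caches
CHUNK = b"\0" * (1024**2)           # write in 1 MiB blocks

start = time.time()
with open(TARGET, "wb") as f:
    for _ in range(SIZE // len(CHUNK)):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())            # don't stop the clock until the data is actually committed
elapsed = time.time() - start
print(f"Committed throughput: {SIZE / elapsed / 1024**2:.1f} MiB/s over {elapsed:.1f}s")
```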
I'd like to get this data onto the array and start actively using it so I can measure how well it holds up under load; in any case, I've already ordered my RAM from Newegg and I'm picking it up from will call today after work. So thank you guys for your help. I'm sorry it turned out to be a networking issue from the start and that I should have been doing the proper tests from the beginning, but hey, it's all part of the learning process.