Hi Visseroth,
Any RAID system is going to have a practical limit on the number of drives you can have per set. The problem comes down to how much reading and writing has to hit the physical disks to service each request. Sun provides the following guideline: "The recommended number of disks per group is between 3 and 9. If you have more disks, use multiple groups" here:
http://www.solarisinternals.com/wik...onfiguration_Requirements_and_Recommendations
The nice thing about ZFS is that you can simply add a "vdev" (virtual device) to an existing pool. That's how you are able to make a monster ZFS pool out of tens, hundreds, or even thousands of disks. Let me step back and clear something up: FreeNAS does a good job of masking the vdev "layer" in the GUI, but when you created your 11-disk raidz you actually created an 11-disk raidz vdev and built the pool on top of it. There's no reason you couldn't create a 6-disk raidz vdev now, then later create a second 6-disk raidz vdev and add it to the same pool (there's a command-line sketch after the link below), as detailed here:
http://www.freenas.org/images/resources/freenas8.0.3/freenas8.0.3_guide.html
Look for section 6.3.4 "Adding to an Existing Volume".
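For reference, here's roughly what that looks like from the command line underneath the GUI. This is just a sketch; the pool name "tank" and the da* device names are placeholders, not your actual devices:

```
# Create a pool backed by one 6-disk raidz vdev
zpool create tank raidz da0 da1 da2 da3 da4 da5

# Later, grow the pool by striping in a second 6-disk raidz vdev
zpool add tank raidz da6 da7 da8 da9 da10 da11
```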
Doing that basically turns your ZFS pool into something like a RAID 50 or 60 volume in the hardware RAID world, because you are striping your data over multiple RAID-5- or RAID-6-like arrays.
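If you ran zpool status on a pool built that way, the two stripes are plain to see (trimmed, illustrative output):

```
  pool: tank
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            da0     ONLINE       0     0     0
            ...
          raidz1-1  ONLINE       0     0     0
            da6     ONLINE       0     0     0
            ...
```

ZFS stripes new writes across both raidz1-0 and raidz1-1, which is exactly the RAID 50 shape.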
It would be your choice whether to use raidz or raidz2 for each of the vdevs. With two 6-disk vdevs, raidz gives you the same usable capacity you have now (2 drives of parity across 12 disks), while raidz2 costs you 4 drives' worth of capacity. Either way you get the added benefit of potentially better performance, because there are 2 distinct vdevs the system can read from or write to in parallel.
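To put rough numbers on it (assuming twelve 2TB drives purely for illustration): two 6-disk raidz vdevs give you about 2 x (6-1) x 2TB = 20TB usable, the same 10 drives' worth as your current 11-disk raidz, while two 6-disk raidz2 vdevs give you about 2 x (6-2) x 2TB = 16TB usable.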
To be clear, if you did this you wouldn't have 2 separate pools, so you wouldn't have to worry about managing shares across 2 different pools. You would just have 2 vdevs under the hood... once you set your pool up, if you never looked you would never know.
So let's say you decided to go with the 48-drive server case in the future. You move your board & drives over and add in some drives. You could add 6 more drives (ideally of the same size & performance class), make another vdev, and add it to the pool. Want to add 12 drives? Split them into 2 more vdevs and add those. You want roughly the same performance characteristics among vdevs, so you wouldn't want to (for example) add a mirror of SSDs to a ZFS pool made up of 3 six-disk raidz2 vdevs. ZFS will let you attempt it, but just because you can doesn't mean you should! If you think about it, you can make some monstrous, ugly, bastardized pools out of a mix of all manner of vdevs that will virtually assure you pain, suffering, and failure, so don't get too fancy, and ask here first if you suspect "this is a bad idea".
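ZFS will at least grumble before letting you do the SSD-mirror-into-raidz-pool thing from the command line; zpool add checks the redundancy of the new vdev against the pool and makes you force a mismatch (a sketch, with invented device names):

```
zpool add tank mirror ada0 ada1
# invalid vdev specification
# use '-f' to override the following errors:
# mismatched replication level: pool uses raidz and new vdev is mirror
```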
Make sense so far?
Moving on to your CIFS question:
No, it's closer to the opposite in fact. Samba's smbd isn't multithreaded; it forks a separate single-threaded process for each client connection. So your 8 users will at least be spread across 8 smbd processes, but any one user's transfer is stuck grinding away on a single 1.6GHz core while the remaining cores sit mostly idle.
See this for more details:
http://www.samba.org/samba/docs/man/Samba-Developers-Guide/architecture.html
Look for "Multithreading and Samba".
Fortunately, socket 771 Xeon processors can be had on the used market (eBay) for next to nothing. A quick search for "771 Xeon" there produced this gem as the first result:
http://www.ebay.com/itm/1-Matched-P..._EN_Networking_Components&hash=item5d3267a971
I suspect you would spend more on proper heatsinks for the faster CPUs than on the used CPUs themselves.
The server itself does seem like a good deal, but the fact that you are here asking how to increase system performance tells me it might not be the ideal solution to your problem as-is. I'm sure that if you put another hundred (or couple hundred) bucks into it for faster CPUs & heatsinks (and maybe more RAM) it should perform just fine.
-Will