Hi matthewowen01,
Tx! for the input. I'll reply to the comments for other people's sake only; please note that this issue has now become academic for me, since my problem is solved. I also won't re-quote your comments, since that would get too confusing. Anyhoo, on with the task:
1 - Regarding zpool import switches: I have tried all of them, in every combination you can think of, following the zpool manual, the "official" Solaris manual, the "official" BSD manual and assorted other manuals, and, let me be absolutely clear on this point, *none* of them worked as expected. I tried them on FreeNAS 8.0 and FreeBSD 8.2. The *only* switch that actually did anything was the -X switch, which is currently unavailable in the above-mentioned OSes.
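For reference, these are roughly the documented recovery attempts I mean; "tank" is just a placeholder pool name, and none of them got my pool back:
zpool import -f tank                   # force the import of a pool that is marked as in use elsewhere
zpool import -fF tank                  # documented recovery mode: discard the last few transactions
zpool import -fFn tank                 # dry run: only report whether -F would make the pool importable
zpool import -f -o readonly=on tank    # last resort: try to get at the data read-only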
2 - The -X switch is an undocumented switch. It is not in the manuals. Not in any official manual, anywhere. I have a deep issue with a filesystem where the *only* tool that works is an undocumented and untested one. And for that matter, it does not even work well. Furthermore, if you check postings all over the net, you will start to see a pattern, and the pattern is always the same: when zpool import fails, people try -X. From what I have seen, it has a 50/50 chance of success. If it does not succeed, then you have to roll back manually. Again, check the posts on how to roll back manually. Or better, don't bother; let's just say that in order to do that, you will need to relinquish remote control of your system to a ZFS geek. It's the only way. So, am I impressed? Definitely not. It is not acceptable for such a sophisticated filesystem, on its 15th(!) version, not to have good, documented and automated tools for rolling back. It is simply something that one does not do. Not for a filesystem that is supposed to be "mainframe quality" or "datacenter quality".
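For anyone who lands here from a search, what people end up trying looks roughly like the lines below; pool name and device are placeholders, I'm reconstructing this from memory, and I am most definitely not vouching for any of it:
zpool import -fFX tank             # undocumented "extreme rewind": walk back through older txgs
zdb -lu /dev/ada0p1                # dump labels and uberblocks to hunt for an older, hopefully sane txg
zpool import -f -T <txg> tank      # undocumented: rewind the pool to that specific txg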
3 - Low-level corruption. Again, check the posts all over the net. It boils down to this: ZFS is incapable of detecting silent corruption as it happens, even though it was supposedly built to handle exactly this. Not only can it not do it, it is also quite difficult to troubleshoot a ZFS system with such issues. And I am not talking about low-end hardware, I am talking about high-end hardware. Now, some people say that ZFS is simply exposing underlying hardware issues that other filesystems don't see. Fair enough, but then why can it only detect this kind of corruption during a scrub? I mean, isn't ZFS supposed to be checksumming everything on the fly? And how is it possible that this kind of corruption does not happen with other high-end filesystems? I would like to point out that although I am not an IT geek, I have been around high-end IT systems for over 20 years and I can assure you that there are more ZFS-related problems out there than UFS-related ones, for example. Then why is the industry using ZFS? Simply because of its storage space management capabilities, not because of any inherent data safety; that part is entirely covered by other layers and by backups. To my knowledge there are no large IT departments or divisions that use naked ZFS systems for storage. There are always other layers; ZFS is being used *only* as a container provider. Again, I am not impressed.
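And to be clear, the corruption only shows up after you explicitly run something like this ("tank" again being a placeholder), not during normal operation:
zpool scrub tank          # kick off a full scrub of the pool
zpool status -v tank      # check progress and list any files with unrecoverable errors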
4 - RAM. Again, check posts all over the net; the same pattern appears. Low RAM is OK as long as you are not pushing the system; under such conditions, ZFS barely uses RAM. However, push the system hard and ZFS becomes a RAM hog. Check the data-migration-related posts, not the daily-utilization ones, and you will be surprised by what you find. In real life (not the theory of what the manual says), you need plenty of RAM if you want your data to be safe. However, the more RAM you have, the higher the chance of silent corruption. On a side note, it goes without saying that yes, the more performance you want, the more RAM you need. That is not the issue.
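If you want to see it for yourself on FreeBSD, watch the ARC while you push a big migration through; these are the sysctl names as I remember them, so double-check them on your version:
sysctl kstat.zfs.misc.arcstats.size     # how much RAM the ARC is using right now
sysctl kstat.zfs.misc.arcstats.c_max    # the ceiling it is allowed to grow to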
5 - Ease of use. Sorry, but theoretically speaking, yes, it was sold that way. I'll refer you to the horse's mouth, the original Sun announcement of ZFS:
http://web.archive.org/web/20060428092023/http://www.sun.com/2004-0914/feature/
where it stresses: "Simple administration: ZFS automates and consolidates complicated storage administration concepts, reducing administrative overhead by 80 percent." So much for "simple".
6 - Raidz1 and redundancy. There is plenty of redundancy in a standard zpool even without going to raidz1. I am not talking only about data redundancy (which a standard zpool doesn't have) but about the indexing and checksumming metadata. If we throw raidz1 on top of that, it makes for quite a bit of redundancy to check and double-check against. ZFS is not doing that (see silent corruption above). ZFS was supposed to be "self-healing" (again, check Sun's announcement). It is not. The -F option does quite little in reality (just Google it), and why did we have to wait until version 15(!) to get such a switch? And you still can't scrub a pool unless it is imported, and you can't import it because it is corrupted. Talk about the chicken and the egg! Heck, a lowly NTFS volume can be checked and salvaged offline *even when both copies of the MFT are corrupted*. ZFS cannot. No, I am definitely not impressed.
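The closest thing to an offline check that I know of is zdb, and even that is read-only diagnostics, not a repair tool; the pool name is a placeholder and I am going from memory on the flags:
zdb -e -bcsv tank     # walk an exported pool's block tree and verify checksums; it reports, it never repairs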
7 - Disk additions. Yes, you can expand storage by replacing disks with higher-capacity ones, and yes, you can add new top-level vdevs. However, you can't add a disk to any type of raidz vdev, since the block-pointer-rewrite functionality is not "yet" implemented. Yes, I know that people consider ZFS a "mainframe animal"; however, I'll point you back to the horse's mouth, where Sun stated: "ZFS meets the needs of a file system for everything from desktops to data centers." Sure, the statement is true, as long as you don't have any need that ZFS doesn't cover. It's like Ford said: people can have a Model T in any color they want, as long as it is black. So, no, again, I am not impressed.
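Concretely, what you *can* do looks something like the following (pool and device names are placeholders, and the autoexpand property may or may not exist in your particular ZFS version); what you cannot do is grow an existing raidz vdev by a single disk:
zpool set autoexpand=on tank            # let the pool grow once every disk in a vdev has been enlarged
zpool replace tank ada2 ada6            # swap a disk for a larger one, one disk at a time
zpool add tank raidz1 ada7 ada8 ada9    # add a whole new top-level raidz vdev
# there is simply no command to add one more disk *into* an existing raidz vdev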
8 - Regarding the 4K HDD issue. Again, sure, such drives do not advertise their real sector size. However, the Linux community has come across similar issues many times and has always found ways around them, by hard-coding, testing or probing. Basically, it is just Oracle's laziness that prevents ZFS from detecting 4K drives and acting accordingly. And this is not a trivial issue, because you have to manually create a zpool with an ashift of 12 to get decent performance; if you don't, your performance can take a hit of as much as 50%! Now, manual creation of such pools is not trivial, not well documented, not well understood, not well tested, and nowhere near a user-friendly procedure. The same goes for FreeNAS (7 or 8), FreeBSD and any other OS that uses ZFS, to my knowledge. So, no, I am most definitely not impressed.
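For what it's worth, the workaround I have seen on FreeBSD (and it is a workaround, not a feature) goes roughly like this; device and pool names are placeholders and you should verify every step against your own setup before trusting it:
gnop create -S 4096 ada0                       # fake a 4K-sector provider on top of the first disk
zpool create tank raidz1 ada0.nop ada1 ada2    # the pool picks up ashift=12 from the .nop device
zpool export tank
gnop destroy ada0.nop                          # remove the fake provider; the ashift sticks
zpool import tank
zdb -C tank | grep ashift                      # verify: it should report ashift: 12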
9 - RAM self-tuning. Yes, once its parameters have been set, it tunes itself marvelously. However, and here is the pickle, *without* initial parameters it does bupkus. Proof? Sure: just start using ZFS under any OS (except Solaris) and you will find out quite quickly, because you get the standard ZFS warning saying that you have less than 512 MB of kernel memory and should expect unstable behavior. I am talking about parameters such as:
#ZFS kernel tuning (e.g. in /boot/loader.conf on FreeBSD)
vm.kmem_size="512M"                  # kernel memory ceiling
vfs.zfs.arc_min="128M"               # minimum ARC size
vfs.zfs.arc_max="128M"               # maximum ARC size
vfs.zfs.prefetch_disable="0"         # leave prefetch enabled
vfs.zfs.zil_disable="0"              # leave the ZIL enabled
vfs.zfs.txg.timeout="30"             # seconds between transaction group commits
vfs.zfs.vdev.max_pending="35"        # max queued I/Os per vdev
vfs.zfs.vdev.min_pending="4"         # min queued I/Os per vdev
Can you imagine the users' outcry if the evil, evil MS had released its Windoze Datacentre edition *without* a RAM-tuning algorithm? Again, not impressed. (For what it's worth, a quick check of what the kernel actually picked up follows right below.)
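On FreeBSD, that check is just reading the values back (sysctl names as I remember them; double-check on your version):
sysctl vm.kmem_size vfs.zfs.arc_min vfs.zfs.arc_max    # confirm the loader tunables took effect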
10 - Sil 3114. It works for you in Solaris? Good for you! You are lucky! However, there is NO official support in Solaris Express 11 for such cards. Period. In order to try to get it to work, I had to manually load the si3124 driver, which is reported to "maybe" work with a SiI3114. Well, it does not, at least not with my card. Sure, the driver loads, and sure, the PCI card is visible in Express; however, no HDD is visible or accessible. Again, not impressed, and I actually like Solaris.
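For the record, what I tried was essentially forcing the 3114's PCI ID onto the si3124 driver, roughly like this; I am quoting it from memory, so treat it as a sketch rather than a recipe:
update_drv -a -i '"pci1095,3114"' si3124    # add the SiI3114 PCI ID as an alias for the si3124 driver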
11 - Openfiler. Yes, I looked into it. It's OK, but I discarded it precisely because it does not have ZFS. Originally, looking at ZFS, it seemed *so* superior that I had to try it out. In retrospect, that was a grave mistake. Now it's too late; I am in. I simply don't have the budget to buy a brand-new array of disks to re-migrate about 5 TB of data from a raidz1 onto a plain RAID 5. Is ZFS oversold in its capabilities? In my view, absolutely yes. It is not user-friendly, it is finicky, and it is most definitely unforgiving. This is not to say that people can't get lucky, particularly if they can afford good-quality hardware. However, for the everyday enthusiast, the DIY type of person, I would definitely discourage its use.
12 - Lastly, I am not saying that ZFS is not a wonderful idea. All I am saying is that it should be realistically treated for what it is: beta software still heavily under development.
Anyhoo, my 2c.
Again, I would like to reiterate that despite our differences in opinion, I certainly value very much indeed your willingness to help a fellow ZFS'er in need.
Tx! again!.