SubnetMask
Contributor
- Joined: Jul 27, 2017
- Messages: 129
I have been playing with FreeNAS, testing it and such. In a previous thread, I voiced my feelings about the lack of redundant controllers, and that feeling still stands, but despite that, I have had some thoughts of possibly 'throwing caution to the wind' and making a move to FreeNAS. Overall, it's been very stable, performance has really been great, and it supports VMware-related features that my old Promise doesn't even know exist. Plus, I'm sure FreeNAS is a bit less picky about which drives over 2TB I use - with the Promise, due to its age, I'm limited to a few SAS models over 2TB. Granted, I shouldn't need any more 4TB drives any time soon, and most of the drives are smaller 1 or 2 TB drives for the VMware VMs, but the idea of not being cornered into a few options is nice.
To be fair, one of my biggest reservations about using something like FreeNAS, which isn't as 'purpose built' as something like an EqualLogic, Compellent, Drobo, Promise, etc. (we can argue semantics all day long - yes, the FreeNAS SOFTWARE is truly purpose built, but the hardware it runs on generally isn't quite so much, although truthfully, the Compellents change that thought process), stems from way back: I had a NAS-type machine that worked well for quite some time, then all of a sudden just 'went stupid'. The floor fell out from under it performance-wise (file transfers fell to a few KB/sec at best), and NOTHING I did fixed it. RAM, NICs, controller, OS - I tried changing everything (pretty much everything except the motherboard), and nothing fixed it. The only thing I can figure is that something went wrong with the motherboard. That's when I switched to a Drobo, and I haven't used anything remotely similar to FreeNAS since - I typically shy away from it.
I'll admit, the ZFS benefits like data integrity are attractive, as is the fact that as long as I have my boot device (SanDisk USB), the drives, and a way to put it all together, the data can be brought back (such as my move from the 1U Supermicro box using a Supermicro UIO HBA to a 2U 12-bay box using an LSI 9217-8i HBA).
That being said, I did some more testing today to see how it handles a disk failure. Well, I'd have to say it didn't handle it well. Not well at all. When I pulled a drive from the RAIDZ1 volume, the system totally freaked out, crashed, and rebooted. That's BAD. VERY, VERY BAD. In doing some searching and reading, it appears the reason it did this is probably that there was swap space in use on the drive, and apparently swap space is non-redundant. Again, that's BAD. VERY, VERY BAD. Unless I'm missing something, this essentially means that no matter what you do, what kind of drives you use, or how you configure them, a single drive failure can (or will) bring down the entire system. Do I need to say it again? The only ways around this that I can see are to set up a small, dedicated hardware RAID just for swap space, or possibly to configure the system with no swap space at all. But I can't imagine that would be all that great either.
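For what it's worth, this is roughly how I've been checking where swap actually lives from the FreeNAS (FreeBSD) shell - the device name da0 is just an example from my box, yours will differ:

```
# List the devices currently backing swap; on my system these are
# small partitions carved off each pool member, not a redundant device.
swapinfo -h

# Show the partition layout of one pool member - FreeNAS appears to put
# a small freebsd-swap partition ahead of the ZFS data partition.
gpart show da0
```

If I understand it right, the per-drive swap size can be set to 0 under System -> Advanced before a pool is created, which is basically the 'no swap' idea above - I'm just not sure running with no swap is a good trade-off.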
So is there something that I missed in the setup that would prevent such an occurrence, or is that 'just the way it is'? A single drive crashing the system is BAD. It kinda defeats the purpose of redundancy from a reliability/uptime perspective.