trininox
Cadet
- Joined
- May 17, 2013
- Messages
- 2
Greetings,
I'm testing FreeNAS as a replacement for several NAS servers we have that previously ran another product (Open-E).
The hardware will remain intact except for a memory upgrade. However, one test has not gone as expected.
Hardware,
SuperMicro X7DBN
Dual Xeon 5130 @ 2 GHz
2 GB RAM - 4x 512 MB FB-DIMM DDR2 (currently; 16 GB is on order)
3ware 9650SE-16ML
FreeNAS-8.3.1-RELEASE
I have 16x 500 GB drives connected.
I created a pool of 5 data, 1 log, 1 cache, 1 spare.
I extended that pool with 5 data, 1 log, 1 cache, 1 spare.
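For reference, the layout above corresponds roughly to these commands (a sketch only: the pool name `tank` and the device ordering are assumptions on my part, since I actually built the pool through the FreeNAS GUI):

```shell
# Sketch: pool name "tank" and da0..da15 device assignments are assumed.
# First group: 5 data disks in raidz, plus log, cache, and spare.
zpool create tank raidz da0 da1 da2 da3 da4 \
    log da5 cache da6 spare da7

# Extend the pool with a matching second group.
zpool add tank raidz da8 da9 da10 da11 da12 \
    log da13 cache da14 spare da15
```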
I connected to Active Directory.
I setup a CIFS share or two.
So far, so good.
Let's try causing trouble.
I pulled drive #16 (da15) to see what would happen: nothing.
I couldn't get it to acknowledge the drive as failed or missing.
I could probe from the shell (smartctl, for example) and see that it was missing.
The GUI still listed the drive as online and the pool as healthy, and I could still change settings for the drive.
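For what it's worth, the shell probing I mentioned looked roughly like this (device names are from my box; treat them as examples):

```shell
# zpool still reported the pool as healthy even with da15 pulled:
zpool status tank

# smartctl against the pulled disk fails, confirming it's really gone:
smartctl -a /dev/da15

# camcontrol shows what the OS actually sees on the bus:
camcontrol devlist
```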
I did a reboot, and sure enough the drive was then reported missing and a spare was put into action (I think).
In the status output, one spare entry shows as da7p2; the other shows as a long string of numbers.
I've been searching around the forum and the web, but found nothing recent.
I did find this from a previous version (2011). Is this still an issue?
Removed-disk-does-not-cause-ZFS-to-degrade-without-restart
Solution? I saw somewhere that scrubs may aid in detecting the fault, but I also saw that they can be counterproductive to performance if run too often.
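For the scrub angle, the manual version would be something like the following (again a sketch; `tank` is an assumed pool name):

```shell
# Kick off a scrub by hand, then watch its progress and any
# errors it turns up in the status output:
zpool scrub tank
zpool status -v tank
```

In the FreeNAS GUI, scheduled scrubs are configured under Storage, so the interval can be tuned if frequent scrubs turn out to hurt performance.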