Timothy Montoya | Cadet | Joined: Nov 2, 2016 | Messages: 3
Hi all!
I work at a small production house based in California, and we have a fairly sizable FreeNAS server set up for everyone to work off of. I'm a relatively new hire, and I was recently given all of the login info to take a look at it because there has been no one maintaining it for the last several months or so. I've never dabbled in FreeNAS, but I have spent a fair amount of time setting up and maintaining Ubuntu web/email/DNS servers, and I have a fairly decent knowledge of storage setups, RAID, and formatting.
So after I was given all of the info, I poked around to see what the status of everything was, and I was greeted by a nice red flashing warning light for a status. I'll post all of the relevant information below, but my main question is: in its current state, how safe is this array/setup? Is there any fault tolerance left, or if a drive fails are we out the whole pool? I've already recommended backing the whole thing up, wiping it, testing all of the drives, and getting it back into a known good working state, but how mission critical is it at this point?
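In case it matters, this is roughly how I was planning to test each drive before trusting anything again (assuming smartctl behaves the same under FreeNAS as it does on my Ubuntu boxes, and that the disks show up as da0 through da23 -- those device names are just placeholders):

# quick overall health/attribute check on one disk (repeat per disk)
smartctl -a /dev/da0

# kick off a long self-test, then come back hours later and check the results
smartctl -t long /dev/da0
smartctl -l selftest /dev/da0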
The chassis is a 24-drive setup, each bay containing a WD 4TB Enterprise drive. I was told that it was originally set up with 3 RAIDZ1 vdevs pooled together, with the remaining 3 drives added as hot spares. When I first logged in, I found it resilvering RAIDZ1-2 (not entirely sure why), and that vdev now has 9 drives instead of the original 7. There are two drives not in use or not showing: drive 9 shows up but isn't in use and is available to be added to a pool, and drive 12 is missing entirely. I'm not sure if it's dead, but I cannot find it anywhere. I'm afraid to pull it because I don't want to risk anything.
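To try to track down the missing drive 12 without pulling anything, I was planning to run something along these lines from the shell (assuming the standard FreeBSD tools are on the box; again, the da0 device name is just a placeholder):

# list every disk the controller actually sees
camcontrol devlist

# map the gptid labels that zpool status shows back to da devices
glabel status

# grab a disk's serial number so I can match it to the bay/label on the chassis
smartctl -i /dev/da0 | grep -i serial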
Below are screenshots of everything that I believe is relevant to our setup.
When "zpool status -v" is run, this is the output: