New NAS (Recommendations)

Status: Not open for further replies.

michael230
Cadet
Joined: Sep 3, 2012
Messages: 8
Hey guys, is it normal to get "da0s1: geometry does not match label (16h,63s != 255h,63s)" on FreeNAS-8.2.0-RELEASE-p1-x64 (r11950)?

Everything boots fine and there are no alerts in the GUI.
 

Stephens
Patron
Joined: Jun 19, 2012
Messages: 496
Seems it's "normal" to get this warning when booting from flash drives.
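
If you want to double-check that the message is only about the boot stick and not one of your data disks, something like the command below will show which device and slice it refers to (da0 is usually the USB flash drive on a FreeNAS install, but that can vary with your hardware):

# gpart show da0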
 

michael230
Cadet
Joined: Sep 3, 2012
Messages: 8
Thanks Stephens for the reply. I was wondering, did I make a mistake creating only 1 vdev in my zpool? The reason I ask is that if my vdev fails, I will not be able to recover. I also have 2 controller cards; if one controller fails, could that impact my vdev's integrity? And what are the chances of my vdev failing?

# zpool status -v
  pool: noah
 state: ONLINE
 scrub: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        noah                                            ONLINE       0     0     0
          raidz2                                        ONLINE       0     0     0
            gptid/e76b5106-f707-11e1-9647-5404a6ed8a8e  ONLINE       0     0     0
            gptid/e8151989-f707-11e1-9647-5404a6ed8a8e  ONLINE       0     0     0
            gptid/e8bb496f-f707-11e1-9647-5404a6ed8a8e  ONLINE       0     0     0
            gptid/e9642aad-f707-11e1-9647-5404a6ed8a8e  ONLINE       0     0     0
            gptid/ea0c24fe-f707-11e1-9647-5404a6ed8a8e  ONLINE       0     0     0
            gptid/eab88105-f707-11e1-9647-5404a6ed8a8e  ONLINE       0     0     0
            gptid/eb6061f9-f707-11e1-9647-5404a6ed8a8e  ONLINE       0     0     0
            gptid/ec0c3fc2-f707-11e1-9647-5404a6ed8a8e  ONLINE       0     0     0
            gptid/ecb22617-f707-11e1-9647-5404a6ed8a8e  ONLINE       0     0     0
            gptid/ed57aea6-f707-11e1-9647-5404a6ed8a8e  ONLINE       0     0     0
            gptid/edf337d8-f707-11e1-9647-5404a6ed8a8e  ONLINE       0     0     0
            gptid/ee96552b-f707-11e1-9647-5404a6ed8a8e  ONLINE       0     0     0

errors: No known data errors

----------------

# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
noah  3.09T  13.2T  3.09T  /mnt/noah
----------------
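
By the way, the output above shows I haven't run a scrub yet ("scrub: none requested"). Am I right that I can kick one off manually with something like the command below, or schedule it from the GUI?

# zpool scrub noah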
 

cyberjock
Inactive Account
Joined: Mar 25, 2012
Messages: 19,526
If any vdev fails, you lose everything in the zpool. Read the guide in my signature.
 

paleoN
Wizard
Joined: Apr 22, 2012
Messages: 1,403
Thanks Stephens for the reply, I was wondering did I make a mistake creating only 1 vdev to my zpool?
With raidz2 I wouldn't go with any more than 10 disks; best practice is to keep vdevs under 9 disks. Given that you have 12, I suggest you destroy the pool and create a new one consisting of 2 x raidz2 vdevs of 6 disks each.
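
Something like this is the layout I have in mind (the da0-da11 device names are just placeholders; on FreeNAS you would normally build the pool through the GUI Volume Manager rather than running zpool by hand):

# zpool create noah \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11

Both raidz2 vdevs live in the same pool, so the space is presented as one volume while each vdev keeps its own double-parity redundancy.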
 

michael230
Cadet
Joined: Sep 3, 2012
Messages: 8
Hi Noobsauce, I read your documentation and that is why I made the previous comment. BTW, thanks again for writing it.

Below is my config:
Motherboard -> 4 Drives x 2TB
Controller 1 -> 4 Drives x 2TB
Controller 2 -> 4 Drives x 2TB


I wanted to have at least 16TB of usable space and was wondering if I should create 2 vdevs of 6 drives each in 1 zpool, but as per your document, if any vdev fails you lose your data. Does this mean I would need 24 drives to be completely safe, or is there some way to repair a failed vdev?


This is how I was thinking of configuring the system:

6 drives ---> VDEV 1 ---> ZPOOL1
6 drives ---> VDEV 2 ---> ZPOOL1

Questions:
1. Should I take the controllers into consideration, e.g. VDEV 1 = Controller 1 (4 HDs) + 2 HDs from the motherboard and VDEV 2 = Controller 2 (4 HDs) + 2 HDs from the motherboard, or does it not really matter?

2. Will ZPOOL1 be transparent, meaning it shows up as 1 big volume of usable space, or will it be split into 2?

3. Also, what are the chances of a vdev failing? Or, a better question: how can I prevent a vdev from failing?

4. Will this new setup be able to repair each other's vdevs, or am I out of luck on recovering the data from a failed vdev?


Another question... should I just upgrade my FreeNAS to 8.3.0 and move to the newer ZFS version?

Sorry that I'm asking so many questions; I just want to make sure I have the appropriate setup before rebuilding my zpool, and since it's such a large amount of data I want a permanent solution for my data storage.

Thanks again guys and I'm really sorry about all these questions.
 