A few questions from a newb


draggy88

Dabbler
Joined
Jun 19, 2015
Messages
19
Hi. I've been tinkering around with my new FreeNAS 9.3 box, and I have a few areas I'm a little unsure about.
HW:
X10SLM-F in a
SC836E16-R500B chassis
AOC-S2308L controller (LSI)
E3-1220 v3 CPU
32GB ECC RAM
16GB SATA DOM for the OS
16x ST6000NM0004 Enterprise Nearline SATA drives


Network interfaces:
1. The motherboard has a dual NIC; why does FreeNAS choose two different drivers?
igb0 1xx.yy.3.137/24
em0 1xx.yy.3.127/24
2. Under Interfaces, I have 0 entries, but both have gotten a DHCP address.
Where can I change my two default NICs, set static addresses, netmasks, etc.?
If I add a NIC, I just get a new NIC/IP with the same MAC address.

Storage:
1. I've got 16 disks divided into 2 vdevs -> a volume at around 86TB.
It automatically made a dataset of the whole space and a new subdirectory called jails. Where did the jail come from? I also see it as a /v1/v1/jails directory, which is empty.
2. Under Jails I have no entries... but I still have the jails directory??


Sharing:
1. I've got some Samba experience from RHEL over the years, so I assumed CIFS sharing would be easy, and the first share came up straight away, no problem.
2. Made a user STORAGE, group, and password. Used the default dataset and it made a storage directory; added it to Windows, no problem!!
3. Doing this again with a new user, group, and password, there is no way in hell it will work. Everything looks the same, but there is no way I can add it to Windows; wrong password/user, etc. I even went into the console and chowned the directory to the new user test... to no avail.

All in all, FreeNAS works great and the performance is awesome on this hardware!!

br Thor



 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
Networking:
Read the documentation, and do not use both NICs on the same subnet.
 

dlavigne

Guest
1. The motherboard has a dual NIC; why does FreeNAS choose two different drivers?
igb0 1xx.yy.3.137/24
em0 1xx.yy.3.127/24

FreeBSD networking is different from Linux networking. You can get more information about each driver and its capabilities by reading its man page:

https://www.freebsd.org/cgi/man.cgi?query=igb
https://www.freebsd.org/cgi/man.cgi?query=em
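If you want to confirm which driver is attached to each port, here is a quick sketch from the FreeNAS shell (the interface names igb0/em0 are taken from your output above):

pciconf -lv | grep -A 3 -E '^(igb|em)[0-9]'
ifconfig igb0
ifconfig em0

The first command shows the PCI device sitting behind each driver instance; ifconfig shows the MAC, media, and addresses for each interface.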

1. I've got 16 disks divided into 2 vdevs -> a volume at around 86TB.
It automatically made a dataset of the whole space and a new subdirectory called jails. Where did the jail come from? I also see it as a /v1/v1/jails directory, which is empty.
2. Under Jails I have no entries... but I still have the jails directory??

This dataset is automagically created for you and is used by Plugins.

3. Doing this again with a new user, group, and password, there is no way in hell it will work. Everything looks the same, but there is no way I can add it to Windows; wrong password/user, etc. I even went into the console and chowned the directory to the new user test... to no avail.

NEVER chown/chmod perms in a CIFS share as this clobbers the ACLs. All CIFS perms need to be fine-tuned from the Windows client. In other words, use the Wizard to create the share using the instructions in http://doc.freenas.org/9.3/freenas_sharing.html#windows-cifs-shares and make any perm modifications from the client. Note that Windows clients cache credentials so you may have to use the net delete command as described in http://doc.freenas.org/9.3/freenas_services.html#troubleshooting-cifs.
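For reference, the credential cleanup is done from a Windows command prompt; the server and share names below are placeholders for yours:

net use
net use \\freenas\storage /delete
net use * /delete

The first line lists the current connections, the second drops the cached connection to a single share, and the last drops them all.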
 

draggy88

Dabbler
Joined
Jun 19, 2015
Messages
19
Thanks for the response ;)

Since I've got 16 disks (4K sector), would a RAIDZ3 of 11 disks + a RAIDZ1 of 5 disks be the two best vdevs to put in a volume from a maximum-storage-efficiency point of view? I seem to get around 62.8TiB out of the 86.5TiB volume... I guess it's due to the non-alignment issue with the 4K sector size? Or am I mistaken?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Combining RAIDZ3 and RAIDZ1 in a pool isn't a good idea, as it leaves the entire pool vulnerable to the RAIDZ1 vdev dying. Why not do two 8-disk RAIDZ2 vdevs?
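Just to illustrate the layout (FreeNAS builds the pool for you from the GUI Volume Manager and references disks by gptid labels, so the pool and disk names here are made up; don't create it by hand like this):

zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
  raidz2 da8 da9 da10 da11 da12 da13 da14 da15

Each 8-disk RAIDZ2 vdev survives the loss of any two of its disks, and writes are striped across the two vdevs.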
 

draggy88

Dabbler
Joined
Jun 19, 2015
Messages
19
Well, I see your point, but I guess there's a larger chance that an 11-drive set would fail than a 5-drive set, and it was the suggestion from the UI.
In all the help guides, everyone was recommending a 5, 6, 11, 19, etc. setup for my hardware config.
With a traditional RAID config, a RAID6 would give (16-2)x6TB = 84TB, minus overhead.
In FreeNAS that works out to (16-3-1)x6TB = 72TB, though the volume strangely enough says 81TB, and I get 64TB net... so call me confused ;)
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
16 and 19 disks are too wide for a vdev. Should a disk fail, the resilvering process would probably take days.

As danb said, 2 x 8-disk RAIDZ2 vdevs is a good choice. We don't recommend RAIDZ1 for disks larger than 1-2TB.

Do you have any plans to use iSCSI? If so, striped mirrors would be a better option for maximum performance.
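Using the same made-up device names as the RAIDZ2 sketch above, striped mirrors would look like:

zpool create tank \
  mirror da0 da1  mirror da2 da3  mirror da4 da5  mirror da6 da7 \
  mirror da8 da9  mirror da10 da11  mirror da12 da13  mirror da14 da15

That is eight 2-way mirrors striped together: far better IOPS for block storage like iSCSI, at the cost of half the raw capacity.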
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
RAIDZ3 + RAIDZ1 will yield the same usable space as 2x RAIDZ2, but with a greater probability of failure and less performance (due to the smaller drive array).

The reason you are seeing such a discrepancy is likely that drives are marketed in base-10 TB while ZFS reports in base-2 TiB. See Tebibyte: https://en.m.wikipedia.org/wiki/Tebibyte

6TB is really about 5.46TiB.
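Worked out roughly for these 16 drives (before ZFS metadata, the swap partitions FreeNAS puts on each data disk, and any reservations, which shave off a bit more):

6 TB = 6 x 10^12 bytes, and 6 x 10^12 / 2^40 ≈ 5.46 TiB per drive
16 drives raw: 16 x 5.46 ≈ 87.3 TiB (roughly what the Volume line reports)
2 x 8-disk RAIDZ2 = 12 data drives: 12 x 5.46 ≈ 65.5 TiB usable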

 

draggy88

Dabbler
Joined
Jun 19, 2015
Messages
19
Hi. I remade the volume as 2x vdevs at 8x1x6.0TB = 65.4TiB total, as presented in the UI.
But under Storage it afterwards says:
Volume: FOO: Available 87.0TiB
Dataset: FOO2: Available 59.9TiB

[root@storage ~]# zpool status -v BIGMAMA
pool: BIGMAMA
state: ONLINE
scan: none requested
config:

NAME                                            STATE     READ WRITE CKSUM
BIGMAMA                                         ONLINE       0     0     0
  raidz2-0                                      ONLINE       0     0     0
    gptid/56a27492-20f4-11e5-8ee5-0cc47a4cd25d  ONLINE       0     0     0
    gptid/56f50a75-20f4-11e5-8ee5-0cc47a4cd25d  ONLINE       0     0     0
    gptid/57430e86-20f4-11e5-8ee5-0cc47a4cd25d  ONLINE       0     0     0
    gptid/5793b476-20f4-11e5-8ee5-0cc47a4cd25d  ONLINE       0     0     0
    gptid/57e12fc9-20f4-11e5-8ee5-0cc47a4cd25d  ONLINE       0     0     0
    gptid/582fc08a-20f4-11e5-8ee5-0cc47a4cd25d  ONLINE       0     0     0
    gptid/5882fec1-20f4-11e5-8ee5-0cc47a4cd25d  ONLINE       0     0     0
    gptid/58d4a807-20f4-11e5-8ee5-0cc47a4cd25d  ONLINE       0     0     0
  raidz2-1                                      ONLINE       0     0     0
    gptid/5929556a-20f4-11e5-8ee5-0cc47a4cd25d  ONLINE       0     0     0
    gptid/5978e33b-20f4-11e5-8ee5-0cc47a4cd25d  ONLINE       0     0     0
    gptid/59c73bcb-20f4-11e5-8ee5-0cc47a4cd25d  ONLINE       0     0     0
    gptid/5a142c7d-20f4-11e5-8ee5-0cc47a4cd25d  ONLINE       0     0     0
    gptid/5a68196c-20f4-11e5-8ee5-0cc47a4cd25d  ONLINE       0     0     0
    gptid/5abe0805-20f4-11e5-8ee5-0cc47a4cd25d  ONLINE       0     0     0
    gptid/5b139fb2-20f4-11e5-8ee5-0cc47a4cd25d  ONLINE       0     0     0
    gptid/5b6a33b3-20f4-11e5-8ee5-0cc47a4cd25d  ONLINE       0     0     0

Any idea why I'm not getting 65.4TiB?
 