First configuration - Everything works and now?

Status
Not open for further replies.

abe_one

Explorer
Joined
Nov 11, 2015
Messages
70
Hello everyone, I am a first-time FreeNAS user. I just finished assembling a NAS with an ASRock board, a Xeon CPU, and 16GB of RAM. For storage I used 6x 4TB WD Red drives and put everything in RAIDZ.
This level of protection suits me well because I use a good UPS that guards the hardware, and I keep a manual backup of the sensitive data on an external USB drive.

I have a lot of confusion about what I do now:
I created the users and shares (two CIFS shares, data and media) and an AFP share for Time Machine. I chose CIFS because the software on the Apple TV no longer supports AFP; I ran some tests from my Mac clients too, had no problems with CIFS, and left everything that way.

I set up the UPS service and e-mail notifications.
I created the virtual interface for link aggregation and verified it.
I saved the configuration of the system to an external drive.

In principle that should be all; I do not need anything else.
Are there other procedures I should carry out?
I have read so much about snapshots that by now I do not understand whether they are useful or not. I do not care about restoring the system to a previous version; I only need everything to be safe and reliable.

The other question concerns what the Mac clients display.
At startup the NAS appears among the network resources. In the Finder (Cmd+K to connect to a server) I use "smb://home.local" plus my credentials, and afterwards I see my two shared datasets and can access and work with them. The problem is that in the sidebar of the Finder, besides the connected server, I see a second icon that is not called "Home" but "home.local"; I can open both and work in both, and both have the same network path ("smb://home.local"). Do you know why?

Thanks for your time, and sorry for my Google-translated English!
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
in the sidebar of the Finder, besides the connected server, I see a second icon that is not called "Home" but "home.local"
Seems to me that the "Domain" entry (Under [Network] - [Global Configuration]) is still set to "local". "home" is the Hostname and ".local" is the Domain, they are the same; one is just the fully qualified name.

BTW, have you considered doing a "burn in" of your system before putting it into use? See the links in my sig...
 

Glorious1

Guru
Joined
Nov 23, 2014
Messages
1,211
Sounds like you have a nice system there, although I am not familiar with the motherboard/processor stuff. It seems you have done your homework. You raise a lot of issues.

As I'm sure you know, RAIDZ1 means you can only lose one drive; if a second one fails, you lose all your data. If you have a good backup system, maybe that's OK, but a lot of people here would recommend RAIDZ2 instead.

Yes, snapshots are hard to fully understand. I've spent hours reading about them and am still fuzzy. But I know they can be useful if you accidentally delete or mess up a file. They don't take up much space, and I suggest setting up a snapshot schedule.

I don't quite understand what you're saying about what you see on your mac. Perhaps a screenshot would help. I use AFP and it generally works fine.

Edit - oh, two other things you should do are set up SMART tests and a scrub schedule in the GUI.
 
Last edited:

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
As I'm sure you know, RAIDZ1 means you can only lose one drive; if a second one fails, you lose all your data. If you have a good backup system, maybe that's OK, but a lot of people here would recommend RAIDZ2 instead.

For storage I used 6x 4TB WD Red drives and put everything in RAIDZ.

I'm guessing that he actually is running RaidZ2, especially if he just ran through the Wizard that pops up at first logon. Of course, I may be wrong too...
 

abe_one

Explorer
Joined
Nov 11, 2015
Messages
70
Hello everyone and thank you for the answers!

So: the 6 disks are in RAIDZ1, though I do not understand the available space. In theory, 6x 4TB disks in RAIDZ1 (RAID 5) should give 20TB of space; yet although the created volume states the correct size, its single dataset reports less space.
Schermata 2016-01-29 alle 22.37.17.png Schermata 2016-01-29 alle 22.37.31.png
Attached is a screenshot of what happens in the Finder.
When I connect to the server with Cmd+K and type smb://home.local, I enter the credentials and it asks me which volume (dataset, in my case) I want to access.
Whichever one I choose, I end up in this situation: in both folders I can navigate, read, and write, and both have the same smb:// network path.
Schermata 2016-01-29 alle 22.35.58.png

How do I fix this?


As for the rest, a scrub is already active by default; do I have to set up another one?
For the SMART tests I will now research how to set them up.

Thanks for the help!
 

abe_one

Explorer
Joined
Nov 11, 2015
Messages
70
I chose RAIDZ1 because the data has two external backups, while for the films (currently about 6TB) I think it is pointless to spend money on a backup, or at least I cannot afford it right now. Until last week they were on 2x 3TB disks in RAID0... so I am already one step ahead!
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Yes, snapshots are hard to fully understand ... They don't take up much space
A snapshot takes up essentially no space when it's taken. It only begins to take up space when subsequent changes are made to the dataset, since it retains a 'snapshot' of how the dataset used to look. The easiest way to think of it is that usually, when you make changes to a dataset, the storage that was previously used becomes available for reuse after the changes are written. When a snapshot exists and subsequently changes are made to the dataset, none of the storage that was in use at the time the snapshot was created becomes available for reuse until the snapshot is deleted. So the storage utilization becomes "total in use at time of snapshot" plus "total of all subsequent changes".
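Robert's accounting rule can be sketched as a toy calculation. The following Python function is my own illustration of the rule, not ZFS internals; the function name is hypothetical:

```python
# Toy model of ZFS snapshot space accounting (illustration only,
# not the real ZFS implementation).

def pool_usage(in_use_at_snapshot, subsequent_changes, snapshot_exists):
    """Return total storage held, in the same units as the inputs.

    Without a snapshot, overwritten blocks are freed, so usage is just
    the live data. With a snapshot, every block that was live when the
    snapshot was taken stays pinned until the snapshot is deleted.
    """
    if snapshot_exists:
        # "total in use at time of snapshot" + "total of all subsequent changes"
        return in_use_at_snapshot + subsequent_changes
    return in_use_at_snapshot  # freed space is reused for the changes

# 100 GB of live data, then 30 GB of the dataset is rewritten:
print(pool_usage(100, 30, snapshot_exists=False))  # 100 (freed space reused)
print(pool_usage(100, 30, snapshot_exists=True))   # 130 (old blocks pinned)
```

This is why a fresh snapshot is essentially free, and why old snapshots of a frequently rewritten dataset grow over time.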
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
Unfortunately, it's not well understood by many users.

For background information, you might want to read some of the linked files in: https://bugs.freenas.org/issues/12968

If you use @Bidule0hm's fine calculator, https://jsfiddle.net/Biduleohm/paq5u7z5/1/embedded/result/ you'll see that the numbers you are seeing are correct. For more information about his calculator, see: https://forums.freenas.org/index.php?threads/zfs-raid-size-and-reliability-calculator.28191/

... the 6 disks are in RAIDZ1, though I do not understand the available space. In theory, 6x 4TB disks in RAIDZ1 (RAID 5) should give 20TB of space; yet although the created volume states the correct size, its single dataset reports less space.
 

abe_one

Explorer
Joined
Nov 11, 2015
Messages
70
Problem solved. It was an option of the CIFS service that is enabled by default but that, I think, was not right for me.
I'm talking about the "Zeroconf" option, which lets Mac OS X Finder clients see SMB network resources in the same way Windows computers see them. After disabling it I got what I wanted: in the Finder there is now only one entry, the NAS, and when I connect to the server with Cmd+K I always see only one network resource!
It is still not clear why this resource appears with the name "home.local" and not simply "Home", given that in the settings "Home" is the hostname and "local" is the domain, which should not appear.
As a further test I tried using Cmd+K to connect to "smb://" followed by the IP of my NAS, and among the network resources the server icon appeared with the IP address as its name.
Do you have any idea why it behaves this way?
In any case I am happy to have solved the duplicate entries, which bothered me a lot.
I attach screenshots of the hostname configuration.
Thank you

Schermata 2016-01-30 alle 11.22.28.png Schermata 2016-01-30 alle 11.22.38.png
 

abe_one

Explorer
Joined
Nov 11, 2015
Messages
70
@gpsguy
I read about the issue, but that does not fully explain it, because the created volume is about 21TB as it should be, while the single dataset in which I created my datasets reports different values.
I read something about the RAID usage percentage. There was talk of using only 80% of the array to avoid complications, and of adding more space once that point is reached; I do not understand whether I have to do something about it or not. Could that space be hidden by default to prevent the datasets from overflowing? The volume size is correct, but the dataset size is less than it could be.
Or do you think it is just a FreeNAS bug?

This morning I checked the NAS for updates and installed them to see whether the problem would be solved, but nothing changed (hope dies last). The problem is that several TB I was counting on for the future are missing.
I attach a screenshot.


Thank you

Schermata 2016-01-30 alle 11.29.05.png
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
The "bug" that was filed was an attempt to get the situation documented better. If that had been done, we'd be able to point users like yourself to the documentation, so you'd understand that the numbers are what they are.

Earlier in the thread, you said "in theory 6 discs 4 tb in raidz1 (raid 5) should give 20TB of space".

The first big problem you run into is that the drive makers quote TB, whereas the OS (FreeNAS and Windows) uses TiB. Your 4TB drive is about 3.6TiB in size. FreeNAS also allocates 2GB of space per drive for swap, so you lose a little there. And then there is other overhead for FreeNAS. With 10.7 free and 6.1 used, you had about 16.8TiB to start with.
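That arithmetic can be checked with a short script. This is a rough sketch only: the remaining gap down to the ~16.8TiB figure is ZFS metadata and reservation overhead, which this toy calculation does not model.

```python
# Rough usable-capacity estimate for 6x 4TB drives in RAIDZ1.
# Drive makers use decimal TB (10**12 bytes); the OS reports binary TiB (2**40).

TB = 10**12
TiB = 2**40
GiB = 2**30

drives = 6
drive_tb = 4

per_drive_tib = drive_tb * TB / TiB   # ~3.64 TiB per "4TB" drive
per_drive_tib -= 2 * GiB / TiB        # FreeNAS reserves 2GB swap per drive

data_drives = drives - 1              # RAIDZ1: one drive's worth of parity
raw_data_tib = data_drives * per_drive_tib

print(round(per_drive_tib, 2))  # ~3.64
print(round(raw_data_tib, 1))   # ~18.2, before ZFS's own overhead
```

So the "missing" space relative to the naive 20TB figure is mostly unit conversion and parity, with ZFS overhead accounting for the rest.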

Once your pool gets to 80% full, you will receive a warning. This gives you time to decide what to do next: clean up (delete) existing data or add additional storage. Whatever you do, don't let it go past 90% or reach 100%. Recovering from a full pool is not intuitive and is a slow process.

Near the end of the ZFS Primer (http://doc.freenas.org/9.3/freenas_intro.html), it says -

"At 90% capacity, ZFS switches from performance- to space-based optimization, which has massive performance implications. For maximum write performance and to prevent problems with drive replacement, add more capacity before a pool reaches 80%."
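The thresholds quoted above can be summarized in a trivial helper (a hypothetical function, just to make the guidance concrete):

```python
def pool_advice(used_tib, total_tib):
    """Map pool fill level to the FreeNAS/ZFS guidance quoted above."""
    pct = 100 * used_tib / total_tib
    if pct >= 90:
        return "critical: ZFS switches to space-based optimization"
    if pct >= 80:
        return "warning: add capacity or delete data"
    return "ok"

# The poster's pool: 6.1 TiB used of ~16.8 TiB total (~36%):
print(pool_advice(6.1, 16.8))  # ok
```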

This morning I checked the NAS for updates and installed them to see whether the problem would be solved, but nothing changed (hope dies last). The problem is that several TB I was counting on for the future are missing.
 