For the first time, I'm actually using FreeNAS


jpaetzel

Guest
I've been working on FreeNAS for years, but my home fileserver has always been a straight FreeBSD box. Thanks to iXsystems providing me with a dedicated FreeNAS box, I've cut everything over to FreeNAS 9.1.1-RELEASE.

I'd like to document some of my experiences.

1) It's vitally important that you get working email from your FreeNAS box. The alert system is great if you happen to be looking at the GUI, but you'll want the box to be able to send you email alerts too. Be sure to give SMART an email address to send to as well.
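Under the hood this comes down to a smartd.conf directive roughly like the following (the device name and address are just placeholders; the GUI fills in the real values for you):
/dev/ada0 -a -m you@example.com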

2) Set up SMART tests for your disks. I recommend a weekly short test and a monthly long test. These have no performance impact, as the disks prioritize normal IO over the tests. If a disk fails a test, even if the overall SMART status is "Passed", strongly consider replacing it.
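If you ever want to kick a test off by hand from the shell, something like this works (ada0 is just an example device):
smartctl -t short /dev/ada0   # quick self-test, takes a couple of minutes
smartctl -t long /dev/ada0    # full surface scan, can take hours
smartctl -a /dev/ada0         # review the results once the test finishes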

3) If you are using ZFS, create a dataset called syslog. This is a magical name, and the system will move syslog there, giving you persistent logs. (Requires a reboot)
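From the shell that's just the following (tank is a placeholder for your pool name); after the reboot the system should start keeping its logs on that dataset:
zfs create tank/syslog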

4) The plugins are easy, and Plex rocks. One caveat: if you are on DHCP, the jails will often grab a range of IPs that falls inside the DHCP range, which can lead to IP conflicts on your network later. To address this, change the jail IP range to something outside the DHCP range.

5) Use datasets for your Samba shares, and set up snapshot tasks for them. FreeNAS will automagically configure Samba to pass the ZFS snapshots through. Windows will then activate the Previous Versions feature, available when you right-click Properties on any file or directory.
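If you want to poke at those snapshots from the shell side, they live under each dataset's hidden .zfs directory (the path here is just an example):
ls /mnt/tank/myshare/.zfs/snapshot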

6) Give ZFS enough redundancy to not only find, but fix filesystem errors. Use RAIDZ or mirroring. For really important data, drop to the command line and run zfs set copies=2 <pool>/<dataset>. Give ZFS as many chances as possible to not only detect errors but fix them. It's good at keeping your data safe if you let it. Be religious about running scrubs. Scrubs will help you detect hardware going south, and if run early enough can fix problems.
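For example (the pool and dataset names here are placeholders):
zfs set copies=2 tank/important    # keep two copies of every block in this dataset
zpool scrub tank                   # verify, and repair where possible, every block in the pool
zpool status -v tank               # watch scrub progress and see any errors it found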

7) If you really, really don't care about your data, for instance scratch space for builds or rendering, run zfs set checksum=off <pool>/<dataset>. If you are using NFS, you can also run zfs set sync=disabled <pool>/<dataset>. These sacrifice data integrity for speed, so please be mindful of where you use them. (In the first case zpool scrub is helpless to even detect errors, let alone fix them; in the second case an NFS client or server crash can cause corrupted data.)
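Spelled out with a placeholder scratch dataset:
zfs set checksum=off tank/scratch     # no checksums: scrubs can no longer detect corruption here
zfs set sync=disabled tank/scratch    # ignore sync requests: a crash can lose or corrupt recent writes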
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
Thanks for the tip about syslog. I was unaware of that feature.


Sent from my phone
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Any other secrets like "syslog" you care to share? That should have been documented. It could have been very handy for many users.
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
Does the "syslog" dataset need to be mounted at a specific location? Or as long as it is named "syslog" it'll be found? I.e. "zfs create pool/some/path/syslog" will work just as well as "zfs create pool/syslog"?

Based on the commit it looks like any path will do, but I'd like to be sure before I start wasting time rebooting.
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
Yup, any path ending in syslog seems to work. Based on the commit, any path containing syslog would probably work.
 

MaxManus

Cadet
Joined
Oct 17, 2012
Messages
9
How do I delete the syslog dataset? :confused:

When I try to delete the dataset, this message pops up:

Cannot unmount 'tank/syslog': Device busy

syslog now uses 19.2 TB, and the file nginx-error.log.0 is at 19.152 TB.
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
How do I delete the syslog dataset? :confused:
This is a bit tricky as both syslogd and nginx (the web interface) are keeping files open in the directory. You need to do these steps via SSH (or via physical console) and not via the web console as we will be restarting nginx:
  1. unlink /var/log
  2. mv /var/.log.old /var/log
  3. service syslogd restart
  4. service nginx restart
  5. you should be able to delete the syslog dataset in the GUI now
syslog now uses 19.2 TB, and the file nginx-error.log.0 is at 19.152 TB.
Does your syslog really consume 19.2 terabytes of disk space? My nginx-error.log is exactly zero bytes. Check the content of the file to see what's going on.
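For example, something like this (adjust the pool name to match yours):
ls -lh /mnt/tank/syslog/log
tail -n 50 /mnt/tank/syslog/log/nginx-error.log.0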
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I could be mistaken, but the logs are cyclic on FreeNAS. That is, they have a finite length and auto-overwrite the oldest entry when they hit that limit. So unless you've changed something with the logging parameters I'm fairly certain you don't have more than about 1MB worth of logs.

I'm also a little more than skeptical about how you came up with that number, because even if the logs were growing by 100GB a day, it would take 192 days to reach 19.2TB. And at 100GB a day, I'm fairly certain you'd start questioning why the zpool's disk usage statistic is growing at a rate far higher than expected. And if it created 19.2TB in a single day, well, you'd be really upset at the poor pool performance as the pool scrambles to keep up with that kind of log writing.

So can I ask where you are getting this 19.2TB number?

Just for the record I have several systems that I admin, and they all consume less than 1MB for the syslog dataset.
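If you want to double-check what the system itself reports, something along these lines would tell us (substitute your pool name):
zfs list -o name,used,avail tank/syslog
du -sh /mnt/tank/syslog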
 

MaxManus

Cadet
Joined
Oct 17, 2012
Messages
9
I ran the procedure from the console on my Supermicro box, but it did not help. Possibly because syslog now uses all the remaining disk space?

I have no idea why syslog now uses 19.2 TB, or whatever the reason is. This is probably uncommon, so I attach an image of my Active Volumes:
(screenshot: https://dl.dropboxusercontent.com/u/85010433/2013-10-13%2013.07.39.jpg)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I will totally agree with you, it sure does look like it is using 19.2TB.

Can you go to the syslog directory and post the output of "ls -l"? Then go to syslog/log and post the output of "ls -l"? I'm wondering if someone moved files into the syslog dataset by accident or something. I'm kind of puzzled at the moment.

On my server it shows syslog with 968.5KB used, 4.8TB available, and 4.8TB size.

Can you verify you are using build FreeNAS-9.1.1-RELEASE-x64 (a752d35) as shown in the FreeNAS webGUI?

I'm convinced that either it isn't really using 19.2TB and it's bugged, or there is 19.2TB there because of something not related to the logs. But I'll wait for the outputs I asked for and see where to go from there.
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
I ran the procedure from the console on my Supermicro box, but it did not help. Possibly because syslog now uses all the remaining disk space?
Any error messages? I just realized that if you use a UPS, the upslog daemon will also keep a file open in the syslog directory.
To restart it run: service nut_upslog restart
If that doesn't help please post output of
fstat /mnt/tank/syslog/log/*
so that we can see what process is still accessing the files.
 

TheSmoker

Patron
Joined
Sep 19, 2012
Messages
225
To free up all that space, do
true > name_of_the_logfile
That will free up the space needed for maneuvering.
It also won't interfere with the process that has the file open, nor with the logging itself.

Sent from my iPad using Tapatalk HD
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Nice tip about syslog. Think I'll give that a try on a VM first.
 

MaxManus

Cadet
Joined
Oct 17, 2012
Messages
9
The server is running FreeNAS-9.1.1-RELEASE-x64 (a752d35) without a UPS (that's first on my shopping list).

I first observed yesterday that the disk was almost full. A few days ago I moved all the servers to a separate VLAN. Could that be a contributing factor?

No error messages; at the end of Dusan's procedure I even get "nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful. Starting nginx." But the message still pops up: Cannot unmount 'tank/syslog': Device busy.

From fstat /mnt/tank/syslog/log/* I get:

USER CMD PID FD MOUNT INUM MODE SZ|DV R/W NAME

Was that what you were looking for? If something is supposed to be generating a log file under tank/syslog/log, I found no file there that was modified or created recently. My knowledge of FreeNAS is self-taught and thus has holes ;).
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
The empty list indicates that no process is keeping a file open anymore. However, the directories may still be open. Make sure that your current directory in the shell (any shell -- console, ssh, ...) is not inside the syslog volume -- that would also keep it busy.
Let's also include the directories in the fstat:
fstat /mnt/tank/syslog/log/* /mnt/tank/syslog/log /mnt/tank/syslog
 

MaxManus

Cadet
Joined
Oct 17, 2012
Messages
9
From Putty:
[stein@Ginnungagap] ~% fstat /mnt/tank/syslog/log/* /mnt/tank/syslog/log /mnt/tank/syslog

USER CMD PID FD MOUNT INUM MODE SZ|DV R/W NAME

After I rebooted the server 3 times, I got access to the syslog share. Could my permissions have affected the result? The attached archive contains a copy of /mnt/tank/syslog/log, including nginx-error.log.0, if it is of interest.
 

Attachments

  • Ginnungagap.log.rar
    109.5 KB · Views: 363

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
I could always destroy the syslog dataset when fstat reported that there are no open files/directories. The problem in your case may be the huge size of the dataset. The files in the archive you attached are smaller than 1MB, yet you say your dataset consumes 19TB. Do you have an aggressive snapshotting schedule? Maybe with the logs constantly changing and a very frequent snapshot schedule, it's the snapshots that consume the capacity.
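You can check whether that's what is eating the space with something like this (adjust the dataset name):
zfs list -t snapshot -r tank/syslog
zfs list -o space tank/syslog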
 

Cosmo_Kramer

Contributor
Joined
Jan 9, 2013
Messages
103
The empty list indicates that no process is keeping a file open anymore. However, the directories may still be open. Make sure that your current directory in the shell (any shell -- console, ssh, ...) is not inside the syslog volume -- that would also keep it busy.
Let's also include the directories in the fstat:
fstat /mnt/tank/syslog/log/* /mnt/tank/syslog/log /mnt/tank/syslog

I followed your directions and ran the above command and nothing was open. Do you have any other ideas?
Could jails be causing the dataset to be busy?
My particular syslog dataset is only 1.5 MB right now so I doubt size is an issue.
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
You can try deleting the dataset via "zfs destroy -f <dataset_name>". Be careful and check the dataset name twice. For example, if your syslog dataset is mounted as /mnt/tank/syslog you would run zfs destroy -f tank/syslog.
 

Cosmo_Kramer

Contributor
Joined
Jan 9, 2013
Messages
103
You can try deleting the dataset via "zfs destroy -f <dataset_name>". Be careful and check the dataset name twice. For example, if your syslog dataset is mounted as /mnt/tank/syslog you would run zfs destroy -f tank/syslog.

I tried that and still no go.
Any other ideas?

Thanks for your time.
 