What is swap and why am I out?

Poppa

Explorer
Joined
Jun 3, 2017
Messages
87
I suddenly started getting these "swap_pager_getswapspace(2): failed" errors and then started getting more of them but with different numbers. Then I started getting "swap_pager: out of swap space".

[screenshot: console showing the swap_pager errors]


Then I couldn't log back into my server via the webUI anymore. I tried shutting it down from the console, but it wouldn't respond to anything I typed. So I tried a shutdown by pressing the power button, but it just said "Stopping cron." (whatever that means) and "Waiting for PIDS: 4303". I got tired of waiting and did a hard shutdown.

[screenshot: console output during the failed shutdown]


This isn't my first FreeNAS server, but I haven't seen this error before on my other server. Can anyone shed some light on this issue?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
You have a hard drive failure.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The reason you are getting those errors is this one:
[screenshot: console output showing the disk error]


We need to know what version of FreeNAS you are running, and when you boot it back up we can do some further troubleshooting, but you will probably need to be prepared to replace one of your data drives.

Here is a guide that lists the information we will need about your system to help with troubleshooting:

Forum Guidelines
https://www.ixsystems.com/community/threads/forum-guidelines.45124/
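
If you can get to a shell when it comes back up, the output of something like the following would also help (these are standard tools on FreeNAS; ada0 is just a placeholder, substitute the real device name of the suspect drive):

  cat /etc/version           # exact FreeNAS version
  smartctl -a /dev/ada0      # SMART health report for the suspect drive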
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The failed drive is likely the cause of the swap problem. We need to know more about your pool layout so we can figure out exactly what happened.
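
Once the system is booted, something like this will show the layout (standard ZFS commands, nothing FreeNAS-specific):

  zpool status      # vdev membership and the state of each disk
  zpool list -v     # pools, vdevs, and capacity per vdev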
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I have already replaced the failed drive.
Since you have replaced the defective disk, are you still experiencing trouble? Is the system working properly again?

Usually, with the modern versions of FreeNAS, you don't get the loss of swap problem unless you have two disks that have been changed since the system was restarted. Swap partitions exist on all storage drives, unless you changed that default configuration, but the OS will only use ten of those partitions, which it chooses at boot. The swap partitions are used in mirrored pairs, so two of them would need to be affected before a mirror would fail, and if something happens to a swap mirror that the system is using, there is no graceful way to recover from that. If you were replacing disks through hot-swap, you may have replaced disks that were being used for swap without realizing it. The FreeNAS rebuild process recreates the swap partition, but it does not put that partition back into use.
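
If you want to see how swap is set up once the system is back up, standard FreeBSD tools will show it; on recent FreeNAS versions the mirrored swap devices usually show up as mirror/swapN (this is just a starting point for looking, not a FreeNAS-specific diagnostic):

  swapinfo          # which swap devices are active and how much is in use
  gmirror status    # health of the mirrored swap providers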

I am only able to guess where your problem is because you are not sharing very much information with us.
Raid Z3, 3vdevs, 16x6TB, 16x6TB, 16x10TB
This pool layout is not very good for performance. I would have suggested keeping each vdev to RAIDz2 and making them 8 disks wide instead of 16. I had a system at work where I was able to do some testing (same hardware, just different pool configurations), and going from four vdevs of 16 disks to eight vdevs of 8 disks, the performance difference was significant: more than double with the narrower vdevs.
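
For reference, the layout I am suggesting would look roughly like this if you built it from the command line (you would normally do this through the FreeNAS GUI; the pool name "tank" and the daX device names here are only placeholders):

  # two 8-disk RAIDz2 vdevs in one pool, instead of one 16-disk RAIDz3 vdev
  zpool create tank \
      raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
      raidz2 da8 da9 da10 da11 da12 da13 da14 da15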
 

Poppa

Explorer
Joined
Jun 3, 2017
Messages
87
Since you have replaced the defective disk, are you still experiencing trouble? Is the system working properly again?

I haven't turned the system back on since the hard shutdown. I will let you know what happens when I do.

you don't get the loss of swap problem unless you have two disks that have been changed since the system was restarted

This was a brand new FreeNAS install. I installed FreeNAS on the system with just the two vdevs of 16x6TB, and then it sat unused for months. I installed the remaining 16 10TB drives, powered on the system, upgraded to the latest version, then expanded the pool with the new vdev and started using it as normal.

I only had warnings about the one drive, so if another drive was failing or had failed, FreeNAS hasn't given me an alert about it. I had not hot-swapped any drives.

I am only able to guess where your problem is because you are not sharing very much information with us.

Sorry. The system was off at the time and that was all the info I had. What info was I missing?

This pool layout is not very good for performance.

So when you say that it isn't good for performance, is that just system performance, or does it affect transfer speeds? I ask because I was only getting about 50MB/sec over a 10Gbit direct connection.

I don't know how to reconfigure the pool without destroying it and starting over, which means I would have to find a place for the data I have already moved over.

Also, wouldn't smaller vdevs mean a loss of even more usable space?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
So when you say that it isn't good for performance, is that just system performance
IOPS. The more vdevs you have, the more potential for IOPS, so it depends a bit on how you will be using the system. If you are just doing cold storage of movies for Plex, you are probably fine, but if you want to be able to do fast random IO, it will be very slow.
does that affect transfer speeds because I was only getting about 50MB/sec on a 10Gbit direct connection?
What kind of sharing? In the testing I did at work, I used SMB sharing for one test and iSCSI for another. What is the performance goal?
Also, wouldn't smaller vdevs mean a loss of even more usable space?
The capacity difference between one vdev of 16 drives in RAIDz3 and two vdevs of 8 drives in RAIDz2 is one drive, but the performance difference is roughly the difference between a single drive and a pair of drives in a stripe set. More vdevs generally equals more IO performance.
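
Using your 6TB drives as a rough example (before ZFS overhead):

  1 vdev of 16 in RAIDz3:   (16 - 3) x 6TB = 78TB of data space
  2 vdevs of 8 in RAIDz2:   2 x (8 - 2) x 6TB = 72TB of data space

So you give up about one drive worth of capacity, but you get twice as many vdevs, and random IO scales with the number of vdevs.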
 