Snapshots not removing

Status
Not open for further replies.

kjstech

Dabbler
Joined
Feb 27, 2014
Messages
15
We re-purposed two older Dell R200 1U servers for FreeNAS storage intended to hold Veeam backups. The server can only hold 2 drives, but I do have a 32GB SSD squeezed in there from which it boots FreeNAS 9.2.0 x64. The server has 4GB RAM and contains two 4TB WD Red SATA drives in RAID-0. We are using RAID-0 so we can get a volume size large enough to hold our backups.

The second NIC is connected directly to the second NIC of the other server, and they run ZFS replication over that interface to keep that traffic separate. Since the one chassis can't hold enough drives for RAID-5 or RAID-10, this is sufficient for now, and it is achieving its goal: if one system goes down, I have the other that I can still use while I repair the original.

Now this used to show around 7.1 TB maximum space, but I noticed today that I'm only showing 4.10 TB maximum space. I went in and noticed that there are 9 snapshots, and I'm thinking maybe that is eating up the space. The thing is that under Periodic Snapshot Tasks I have it create 1 snapshot a day and keep it for 1 day. It's enough to snapshot after all the nightly backups have completed, and then it replicates to the other appliance. Data is only written at night, so we might as well snapshot it the next morning after the backups have been stored.

This was working great for a few weeks, so why are the snapshots no longer deleting? Is that what reduced my maximum capacity on the shared volume from 7 TB to 4.10 TB?
 

Attachments

  • snapshots.PNG (44.5 KB)
  • periodic snapshots.PNG (15.5 KB)
  • diskspace.PNG (15.1 KB)

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You need at a bare minimum 8GB of RAM for FreeNAS. It is possible something is running out of space, but no one here has much good experience with platforms that do not meet the hardware requirements.

There is a guide to selecting appropriate hardware in the hardware forum stickies.
 

kjstech

Dabbler
Joined
Feb 27, 2014
Messages
15
OK, I thought 4GB was supported. It has been working for two months now with no reboots, and I can tell you it's at least 4 times faster than a similar setup running Openfiler.

I just checked the destination unit which receives the ZFS replication. It had snapshots dating back to early February. I deleted them all, and the maximum capacity is growing again, 6TB and counting. So maybe the remote side, which only had 38GB of free space, hung up the sending side.

Is there a script or setting I can put on the ZFS replication destination unit to keep the snapshots cleared out so there's never more than one there?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It isn't clear where you got the idea that 4GB was supported. If you can tell me where, I'll be happy to see if I can update that.

The minimum for FreeNAS 8 had been listed as 6GB up until about a year ago, at which point I and several others noted that people with 6GB configs were sometimes reporting panics and sometimes pool loss. We had also noted that the sorts of problems being reported never seemed to come from people with 8GB or more. The 8GB minimum is a pragmatic selection based on that observation. It has never been 4GB that I can recall.

The snapshot replication system is pretty good at maintaining sync but if it loses sync due to a problem (destination disk full?) then you probably need to reset it. There's a section in the manual about how to reset replication. You should never let a ZFS pool fill more than 80% so "shame on you" ;-)
 

kjstech

Dabbler
Joined
Feb 27, 2014
Messages
15
OK, well, the destination disk is no longer full, as I deleted the snapshots on the destination. I guess I have to look at creating some kind of maintenance script and putting it in cron on the destination to keep old snapshots cleaned up. On the source side, the periodic snapshot task is set to keep each snapshot for 1 day. But there is no periodic snapshot task on the destination, since that is just a receiver of data.
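Something like the following is what I'm imagining for that cron job; just a rough sketch on my part, and "tank/backups" is a placeholder for whatever the real dataset name turns out to be:

```shell
#!/bin/sh
# Rough sketch only -- "tank/backups" is a placeholder dataset name.
# Goal: on the replication destination, keep only the newest snapshot
# of the dataset and destroy the rest.
DATASET="tank/backups"

# List snapshots of the dataset (and children) sorted oldest-first by
# creation time, drop the last line (the newest snapshot), and destroy
# everything remaining.
zfs list -H -o name -t snapshot -s creation -r "$DATASET" \
  | sed '$d' \
  | xargs -n 1 zfs destroy
```

The `sed '$d'` deletes the final line of the sorted list, which is what spares the most recent snapshot.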

So I think what happened is the snapshots were never cleaned up. After about 26 days of successful operation, the destination filled up, and the source side is stuck on 2/18/2014's snapshot (we do 1 a day) since it was not successfully transmitted to the destination. Now that I've cleaned up space on the destination, how can I restart this? Note I cannot delete these snapshots (2/18 through 2/26) because it says "Held by replication system". I know there is a command line option like zfs send... but I cannot track down what the proper syntax would be for my environment.
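From what I've been able to piece together so far, a manual full send to re-seed the destination would look something like this; the IP address and the dataset/snapshot names below are placeholders from my guessing, not verified against my actual setup:

```shell
# Rough idea of a manual full send to re-seed the destination box.
# 10.0.0.2 and the dataset name are placeholders for my environment.

# Build the snapshot name in the auto-YYYYMMDD.0800-1d style FreeNAS uses:
SNAP="tank/backups@auto-$(date +%Y%m%d).0800-1d"

# On the source (push) unit, send that snapshot in full, and have the
# destination roll back to match it (-F):
zfs send "$SNAP" | ssh 10.0.0.2 zfs receive -F tank/backups
```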

As far as the RAM requirement, I may have overlooked it or confused it with another product. We were an Openfiler shop for quite some time. After getting tired of their stagnant lack of development, I wanted to try FreeNAS. I used it years ago, before the web user interface looked like it does now. I'm not sure what version that would have been, but think back to the 2008 time frame or so.

After trying FreeNAS, I have to say I am more than impressed. First off, it's WAY faster than Openfiler, even on similar hardware, and even SATA servers are faster with FreeNAS than my one Openfiler box that has 300GB 10k RPM SCSI drives. So kudos to the FreeNAS team.

I will try to locate some memory. These Dell R200s max out at 8GB DDR2, so that's 2GB sticks in 4 slots. I found a ton of extra memory in our computer room, but the DDR2 is not ECC and the rest is DDR3, so it looks like I may have to buy some. Sorry for the confusion on the RAM requirements; we can button that down. I guess after a month of successful operation (and better-than-Openfiler performance) we assumed that we were good.

Thanks for your help.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I already tried to point you at the manual. http://doc.freenas.org/index.php/Replication_Tasks

If that fails you, then by all means provide details and we'll get you sorted out. But those steps usually work.

The big problem with 4GB and 6GB is that there are a lot of stories out there about how well it works, right up to the point where it doesn't, at which point there is sometimes sadness, headaches, and/or pool loss. The smart bet is to give the system what it really needs, and then you won't be living in the worry zone.
 

kjstech

Dabbler
Joined
Feb 27, 2014
Messages
15
Yes and none of those commands work. So I just want to start over.

On the source side, even if I set Periodic Snapshot Tasks to false and ZFS Replication to false, I cannot delete the outdated snapshots under ZFS Snapshots. It says "Held by replication system".

I just want to get to the most recent one and then try a full push to the destination. They are racked right next to each other and connected via a 1Gbps NIC on their own IP addressing.

I'm going to try to reboot the push unit itself (We call this fnbackup1 by the way, so I will refer to it as that from here on out).
 

kjstech

Dabbler
Joined
Feb 27, 2014
Messages
15
I got all the snapshots deleted from 2/26 back through 2/19, but the one from 2/18 will not delete. auto-20140218.0800-1d shows Used 1.70T and Refer 3.58T, and when I click delete it says "Held by replication system".

I don't know how that's possible, because in the snapshot tasks and ZFS replication I turned those jobs to false and even rebooted the system.
 

kjstech

Dabbler
Joined
Feb 27, 2014
Messages
15
Oh, I had to do it via the command line:

zfs list -H -o name -t snapshot | xargs -n1 zfs destroy


Now I can create a new snapshot and hopefully start the whole thing over.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Right. Once everything is cleaned out you should be able to start fresh and it should be fine.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
I ran into the same error. I just ran zfs destroy -r tank2/datasetname (the -r was needed to remove the "children"/snapshots).
 