[How-To] Using Non-Amazon S3 Cloud Storage (Wasabi)

icsy7867

Contributor
Joined
Dec 31, 2015
Messages
167
Since CrashPlan has ended their home plans, I have been looking for the most cost-effective way to back up my data. In my searches I stumbled upon Wasabi (https://wasabi.com/). They charge slightly less than Amazon Glacier, at $0.0039/GB, which works out to about $3.99 per TB per month.

While I do have TBs of data, only about 600GB of it is what I would call "critical". I don't need the fastest or most feature-rich storage; I just need something reliable (Wasabi is pretty new, so I really can't speak to their reliability). Backing up that 600GB will cost me about $28.00/year. They do charge $0.04/GB for downloading the data back later, but I don't plan on needing that.
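For anyone who wants to plug in their own numbers, here is the rough math behind that estimate (assuming 600GB at $0.0039/GB/month):
Code:
echo "600 * 0.0039" | bc        # ~2.34 per month
echo "600 * 0.0039 * 12" | bc   # ~28.08 per year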

Wasabi offers 1TB free for 30 days, so I thought I would give it a try! Originally I wanted to use s3cmd from the existing plugin, but I quickly realized it was designed for AWS only. I even tried to get into the plugin jail and modify it, but that version appears to be quite old. I also had trouble getting a newer version of s3cmd to work, though that might have been an isolated incident (I found similar reports on GitHub for the issue). So instead I used the AWS CLI.

(Will update/cleanup as time permits)
Step 1.)
To begin, you will need to sign up for the free trial. After logging in, go to Profile -> API Access and jot down your Access Key and Secret Key. I believe secret keys can only be seen once, so make sure to keep them somewhere safe. I actually created a separate user, made that user an admin, and used that account's Access Key and Secret Key.

After this, you will also need to create a bucket. You can call it whatever you want, such as "Backup".

Step 2.)
We will need a jail for this, so go ahead and create one and add the storage you would like backed up. I added /mnt/Backup.
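If you prefer the shell over the GUI and your FreeNAS version uses iocage, the jail and mount can be created with something roughly like this sketch (jail name, release, interface/IP, and dataset paths are all placeholders, so adjust them to your system):
Code:
# create a jail named "wasabi" (networking values are examples)
iocage create -n wasabi -r 11.1-RELEASE ip4_addr="igb0|192.168.1.50/24" defaultrouter="192.168.1.1"

# mount the dataset you want backed up into the jail, read-only
iocage fstab -a wasabi /mnt/tank/Backup /mnt/Backup nullfs ro 0 0

iocage start wasabi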

In addition to this, we will need to install the AWS CLI application.
Code:
jls
# Grab the right jail ID (JID) from the jls output
jexec <JID> tcsh   # run as root (or prefix with sudo)

pkg update
pkg upgrade
pkg install aws-cli   # note: on newer FreeBSD releases the package is named "awscli"
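
A quick sanity check that the CLI is installed and on the PATH (exact version output will vary):
Code:
aws --version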


Step 3.)
Now we need to configure the AWS CLI. This is quite easy.
Code:
aws configure

Input your access key and your secret key, then leave the next two prompts (default region and output format) blank.
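The prompts look roughly like this (the keys shown are placeholders):
Code:
AWS Access Key ID [None]: AKIAEXAMPLEKEY
AWS Secret Access Key [None]: exampleSecretKey123
Default region name [None]:
Default output format [None]: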


Step 4.)
Test your application!
Code:
aws s3 ls --endpoint-url=https://s3.wasabisys.com


This should list your only bucket, Backup.

If it does, everything is working! If not, double-check your access keys and the endpoint URL.
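One optional convenience, since every command needs the endpoint flag: wrap it in a small sh function (untested sketch; tcsh users would need an alias instead):
Code:
wasabi() {
    aws s3 "$@" --endpoint-url=https://s3.wasabisys.com
}

# usage:
wasabi ls
wasabi sync TestDirectory/ s3://Backup/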

Step 5.)
Let's push some files out to Wasabi from your jail. Find a folder that isn't too big for a test, something with just a few files. I used a directory called TestDirectory
Code:
aws s3 sync TestDirectory/ s3://Backup/ --endpoint-url=https://s3.wasabisys.com


Once this is done, I can log into Wasabi and see the files!
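Tip: if you want to preview a bigger sync first, the command also accepts a --dryrun flag, which lists what would be transferred without uploading anything:
Code:
aws s3 sync TestDirectory/ s3://Backup/ --dryrun --endpoint-url=https://s3.wasabisys.com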


Future Additions
If I have time, I will add a section on creating email alerts and scripting this to happen nightly or weekly.
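In the meantime, a rough sketch of the nightly version (jail name and script path are placeholders, not a tested setup): save the sync command from Step 5 in a script inside the jail, then have the host's root crontab call it.
Code:
# hypothetical crontab entry on the FreeNAS host: run the sync inside the jail at 02:00 every night
0 2 * * * /usr/sbin/jexec wasabi /bin/sh /usr/local/bin/wasabi_sync.sh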
 

icsy7867

Contributor
Joined
Dec 31, 2015
Messages
167
(Grabbing Space)
I am talking with Wasabi about the s3cmd issue on FreeBSD. s3cmd works without issue on Ubuntu. I will include a guide on s3cmd, GPG encryption, and getting that set up as well if anyone is interested.
 

elcid

Cadet
Joined
Feb 16, 2018
Messages
5
icsy,

Just created a forum account to log in and say thanks for the tutorial. I followed it and everything is running perfectly. I was able to create my own cron job loosely based on another tutorial provided by the folks over at Backblaze.

I faced the same issue with CrashPlan dumping their peer-to-peer backup solution. Now I use encrypted Resilio folders, which is not ideal but works in a similar fashion, minus the versioning.

My setup is a little different from yours, using wasabi only as an online backup. The mounted storage in the wasabi jail is actually shared with a Resilio Sync jail; however, I have it mounted read-only in case something goes wrong. The wasabi sync.sh script I'm using backs up the top-level mount point under /media/store/, and I simply add additional storage mounts under it, e.g. /media/store/folder1/, /media/store/folder2/, etc. When my cron job runs, it automatically picks up any new mount folders and syncs them into my wasabi bucket.

I wanted to mention that I added the --delete switch (per https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html) so as to keep only the latest copy of my Resilio syncs on wasabi. Since Resilio keeps a recycle bin under the hidden ".sync" folder at each top-level directory, I didn't feel the need to worry about files I delete and may need to recover later (although Resilio doesn't keep them for long). And I don't care about versioning on the wasabi side either, although that is a nice feature. So my sync command looks like this:
Code:
aws s3 sync /media/store/ s3://backup/ --delete --endpoint-url=https://s3.wasabisys.com


For my cron job, I created a simple sh script and stored it within the wasabi jail under /usr/bin/sync.sh
Code:
#!/bin/sh
# /usr/bin/sync.sh inside the wasabi jail (plain sh, since bash isn't installed in a stock jail)
aws s3 sync /media/store/ s3://backup --delete --endpoint-url=https://s3.wasabisys.com


Next I created another simple script and stored it on the main filesystem under /conf/base/etc/wasabi_sync.sh
Code:
#!/bin/sh
# /conf/base/etc/wasabi_sync.sh on the FreeNAS host: runs the sync script inside the jail
jexec wasabi sh /usr/bin/sync.sh


This calls into the jail and runs the sync.sh script. Then create a cron job that runs /conf/base/etc/wasabi_sync.sh however often you'd like.

However, one thing to consider about the --delete switch is that your Timed Deleted Storage may increase. I've also discovered that syncing Resilio's encrypted folders doesn't appear to be a great idea. Although my /media/store is only 613GB (336GB of encrypted folders; the remainder is my own files, which do not change often), wasabi shows I've used 1.37TB of Timed Active Storage, and the cron job has only run once per day for the last three days. So I'm going to need to play around with this some more to keep my storage usage low. It might come down to a combination of how often the cron job runs and perhaps just not bothering to sync the encrypted folders.

So far I am enjoying wasabi's free trial period, and having an offsite store is easily worth the $4/month. I am fairly new to FreeNAS and unfamiliar with s3cmd and GPG encryption, but I'd be interested to hear your thoughts on them.
 

icsy7867

Contributor
Joined
Dec 31, 2015
Messages
167
elcid,

I am glad you got Wasabi working! Since Wasabi has a 1TB minimum, I decided to switch to Backblaze for a while (which I have working in a jail using rclone). I currently only have ~400-500GB that I actually want backed up, so Backblaze is cheaper for me. I forget exactly where the break-even point is, but I think Wasabi starts becoming cheaper per gigabyte somewhere around 700-800GB. Currently I pay about $2.50/month, which is pretty good.
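For anyone curious about the Backblaze side, the rclone setup is roughly the following (remote and bucket names are placeholders, not my exact config; the config file location varies by rclone version):
Code:
# rclone.conf
[b2]
type = b2
account = YOUR_KEY_ID
key = YOUR_APPLICATION_KEY

# one-way push of the jail's mount to the bucket
rclone sync /mnt/Backup b2:my-backup-bucket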

Once my backup amount increases, I will probably switch back to Wasabi. Feel free to post your findings. I will probably come back to this thread once my backups increase again.
 

Dadealus

Cadet
Joined
Mar 8, 2017
Messages
2
Running through these instructions today, I found they have changed the following:

pkg install aws-cli

is now

pkg install awscli
 

tazinblack

Explorer
Joined
Apr 12, 2013
Messages
77
icsy7867 said:
I am talking with Wasabi about the s3cmd issue on FreeBSD. s3cmd works without issue on Ubuntu. I will include a guide on s3cmd, GPG encryption, and getting that set up as well if anyone is interested.
I would be very interested.
Thank you in advance!
 

jafisher2000

Cadet
Joined
Nov 7, 2016
Messages
5
> I'm getting "aws: command not found". pkg update, pkg upgrade, and pkg install awscli all completed with no errors.

Got it working.
 
Last edited:

icsy7867

Contributor
Joined
Dec 31, 2015
Messages
167
Did you try using awscli instead of aws-cli, as mentioned in the post above?

Running through these instructions today and I found they have changed the following

pkg install aws-cli

is now

pkg install awscli

I went with BackBlaze since I didn't have a TB of data. I will have to give this a try and see if something changed.
 

jafisher2000

Cadet
Joined
Nov 7, 2016
Messages
5
Please bear with me while I learn

I'm getting "The user-provided path /FRS-Data-1/backups/ does not exist." when I run:

aws s3 sync /FRS-Data-1/backups/ s3://backup/ --endpoint-url=https://s3.wasabisys.com

My setup is:
/mnt
|_ FRS-Data-1
   |_ backups
   |_ Plex
 

ect

Cadet
Joined
Dec 11, 2015
Messages
4
Hi, I have updated to the new 11.2 jails and now the cron job doesn't run. It seems jexec now needs the jail's numeric ID instead of the hostname. Does anyone know how to set up the cron job in the system so it runs in the jail now?


thank you
 

Nvious1

Explorer
Joined
Jul 12, 2018
Messages
67
If you are using 11.2, just create a cloud sync task.

1) Set up a Cloud Credential using your Access Key ID and Secret Key, and select Amazon S3 as the provider. This is where you set the endpoint to https://s3.wasabisys.com.
2) Set up the Cloud Sync task: name it, select PUSH or PULL, select the bucket, set the destination folder, select the source, set encryption if you want, and set a schedule.

You should hopefully be able to match an existing sync structure, but do some testing first.
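For reference, the cloud sync tasks drive rclone under the hood, so the equivalent manual remote looks something like this sketch (remote name, keys, bucket, and paths are placeholders; check the rclone s3 docs for your version):
Code:
# rclone.conf
[wasabi]
type = s3
provider = Wasabi
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = s3.wasabisys.com

# roughly what a PUSH task does
rclone sync /mnt/Backup wasabi:Backup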
 

RP@45

Cadet
Joined
Mar 1, 2018
Messages
6
When setting up the S3 sync through a jail, I'm running into an issue that is kind of a head-scratcher for me.

When I run:
aws s3 ls --endpoint-url=https://s3.wasabisys.com

my bucket appears fine with the credentials I entered, but when I run the sync command:
aws s3 sync /mnt/wasabi s3://BUCKET/ --endpoint-url=https://s3.wasabisys.com

I get the following error:
fatal error: An error occurred (InvalidAccessKeyId) when calling the ListObjectsV2 operation: The AWS Access Key Id you provided does not exist in our records

I'm confused as to why the credentials would work for listing the bucket in the first command but then be reported as nonexistent when I run the actual sync command.

Any insight into where I can begin troubleshooting would be greatly appreciated.
 

RP@45

Cadet
Joined
Mar 1, 2018
Messages
6
Disregard my last comment; the issue was that my endpoint URL did not specify the region my bucket is in.

Thank you all for these helpful steps on configuring S3 backup with Wasabi.
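For anyone else who hits this: Wasabi's endpoints are per-region, so a bucket outside the default region needs the matching URL. The region names below are examples; check Wasabi's documentation for the current list:
Code:
# default region
aws s3 ls --endpoint-url=https://s3.wasabisys.com
# example for a bucket created in us-west-1
aws s3 ls --endpoint-url=https://s3.us-west-1.wasabisys.com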
 