Recommendations Before Going Live

Status
Not open for further replies.

apolonio

Dabbler
Joined
Apr 17, 2018
Messages
14
I have been running through my build, giving a few things a shakedown, and wondering whether I should do anything else before going live.

So far I have
  • Tested hot add of drives
  • Did a bit of burn-in with each of the drives in my RAID
  • Tested the UPS (I have it shutting down in 2 minutes after power loss)
  • Tested backing up and restoring on my backup NAS
  • Documented drive locations and serial numbers
  • Email notifications work
  • Nagios (another system) does basic monitoring
  • Permissions work
  • Got cold spares of drives (boot and my one RAIDZ2 pool)
  • S.M.A.R.T. works (or at least reports what I expect)
  • NFS works
  • Samba works
  • Able to restore snapshots
Couple things I did not do yet
  • Test booting up off the mirrored boot drive
  • Set up rsync to another box (CentOS Linux)
  • I would like to get iSCSI working, but that is low priority for me
What really helped was defining the scope and lifespan of this server. What is important to me is data integrity, then security, and finally performance. I originally wanted it to also do Plex, DHCP, DNS, and mail.

Any other recommendations before I blow my datasets away and repopulate and go live?

Thanks
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
The forum rules request a hardware layout, mostly so we can try to catch miscellaneous things (and even the occasional whopper!). Some people put such hardware (and software) information in their forum signatures.

You might mention whether you have pool scrubs and SMART tests scheduled. We tend to have various schedules for these. For example, for enterprise disks, people say monthly pool scrubs are fine; with cheaper disks, every 2 weeks. For larger pools, it's understandable if that is extended due to the duration of the scrubs, meaning if a scrub takes 4 or 5 days, you may want to limit it to every 3 weeks.
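The FreeNAS GUI is the normal place to schedule both (Tasks -> Scrub Tasks and S.M.A.R.T. Tests), but the cadence above maps to something like this crontab sketch, where the pool name and device are only examples:

```shell
# Illustrative crontab only -- FreeNAS manages these via its GUI, and
# the pool/device names here are examples.
# Pool scrub every 2 weeks (1st and 15th at 03:00):
0 3 1,15 * *  root  zpool scrub tank
# Short SMART self-test weekly, long test monthly (repeat per disk):
0 4 * * 0     root  smartctl -t short /dev/ada0
0 2 1 * *     root  smartctl -t long  /dev/ada0
```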

Next, what do you have for boot device(s)?
Small SSDs (even used ones) tend to work better than USB flash, even mirrored USB flash.

Another note: if you have experience with Linux (like the CentOS reference), then you may want your pool to stay feature-compatible with both OSes. Simply boot Linux off live or install media (the Antergos distro, for example) and check whether your pool is importable. Today the feature flags between ZFS on Linux and FreeBSD/FreeNAS are not too different, but at one point I was unable to use a pool on Linux that was made with FreeNAS (the multi_vdev_crash_dump feature, if I remember correctly). This gives you options in the future.
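A rough sketch of that check, assuming a pool named tank, run from a Linux live environment with ZFS on Linux installed. It defaults to only printing the commands so nothing is touched by accident:

```shell
#!/bin/sh
# Sketch: verify a FreeNAS-created pool imports under ZFS on Linux.
# "tank" is an example pool name. DRYRUN=1 (the default) only prints
# the commands; set DRYRUN=0 to actually run them.
POOL=${POOL:-tank}
DRYRUN=${DRYRUN:-1}
run() { [ "$DRYRUN" = 1 ] && echo "+ $*" || "$@"; }

run zpool import -o readonly=on -N "$POOL"  # import read-only, no mounts
run zpool status "$POOL"                    # confirm layout and health
run zpool export "$POOL"                    # release before rebooting
```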

It also helps if you use the ZFS terminology. You mention "drives in my RAID"; hopefully you mean ZFS-managed RAID, like RAID-Zx or mirroring. In this forum we tend to be nit-picky about that issue, because some people actually use hardware RAID LUNs with ZFS but don't have proper monitoring of those LUNs, and then come here when they lose data. Thus, if you specify your pool layout in your hardware/software list, we may be able to help further.
 

apolonio

Dabbler
Joined
Apr 17, 2018
Messages
14
Here is a post on my setup from a few months ago.

https://forums.freenas.org/index.php?threads/supermicro-freenas-server.68999/

Since then
  • I have gone to mirrored 60GB SSDs for the boot drive.
  • I got a 1300VA UPS in
  • I have some consumer-grade hardware lying around that I would like to build into another FreeNAS box to do occasional test restores.
In Linux I used to scrub with
Code:
#!/bin/bash
# Kick off a "check" (i.e. scrub) on every Linux md RAID device
for f in /sys/block/md? ; do
	echo check > "$f/md/sync_action"
done


Looks like there are scrub options in ZFS as well. I shall set that up.
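The ZFS analogue of that md loop would be roughly this sketch; it defaults to printing the commands instead of running them:

```shell
#!/bin/sh
# ZFS analogue of the md "check" loop: scrub every imported pool.
# DRYRUN=1 (the default) just prints the commands; set DRYRUN=0 on the
# NAS to execute them. POOLS can override the auto-detected list.
DRYRUN=${DRYRUN:-1}
run() { [ "$DRYRUN" = 1 ] && echo "+ $*" || "$@"; }

POOLS=${POOLS:-$(zpool list -H -o name 2>/dev/null)}
for pool in $POOLS; do
    run zpool scrub "$pool"   # scrubs run in the background
done
run zpool status              # progress appears under the "scan:" line
```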

I called it RAID, but it is a zpool. The disks do NOT go through a RAID controller.

I do recall setting up a RAIDZ2; how do I confirm that it is indeed so?

BTW, that burn-in I mentioned came from somewhere in the FreeNAS forums
Code:
# SMART self-tests, then a destructive write-mode badblocks pass
# (badblocks -w erases the drive; repeat for each disk)
smartctl -t short /dev/sda
smartctl -t conveyance /dev/sda
smartctl -t long /dev/sda
badblocks -b 4096 -ws /dev/sda



Thanks for the advice.
 

CraigD

Patron
Joined
Mar 8, 2016
Messages
343
Badblocks is great; use the tmux command before running badblocks so you can test all your drives at the same time

Badblocks takes about a day per TB, so a 3TB drive will take about 3 days to test; then do another long SMART test
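A sketch of that tmux approach, one badblocks per drive so they all burn in at once. The device names are examples, and badblocks -w is destructive, so the script only prints the commands by default:

```shell
#!/bin/sh
# Sketch: run one badblocks per drive, each in its own tmux window.
# Device names are EXAMPLES; badblocks -w wipes the drive, so
# triple-check them. DRYRUN=1 (the default) only prints the commands.
DRYRUN=${DRYRUN:-1}
run() { [ "$DRYRUN" = 1 ] && echo "+ $*" || "$@"; }

run tmux new-session -d -s burnin
for dev in ada0 ada1 ada2; do
    run tmux new-window -t burnin -n "$dev" \
        "badblocks -b 4096 -ws -o /root/bb-$dev.log /dev/$dev"
done
# run tmux attach -t burnin    # watch progress later
```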

When booting, I think Ctrl-C gets you into the controller firmware, and you will see IR or IT

I would also set up periodic scrubs, short and long SMART tests, and email notifications

Have Fun
 

apolonio

Dabbler
Joined
Apr 17, 2018
Messages
14
I am more of a Linux guy, so prior to FreeNAS I installed CentOS 7 on one of the spare SSDs I had. So a lot of my documentation, prepping, and testing was done in Linux.

I used screen instead of tmux, hdparm to check drive info, dmidecode for other information, and dmesg and lsusb to get info on the UPS. I redirected the output to a file in case I need that info later.

After I installed FreeNAS, I was able to translate some of that info over. Some concepts are different, like LVM vs. ZFS, but I am pretty comfortable on the command line.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
@apolonio, you can use zpool status from the command line to verify the pool layout (like whether it's RAID-Z2). Note that ZFS allows more than one RAID-Zx construct per pool; we call them vDevs, for virtual devices, so don't be confused if your "raidz2-0" has that trailing number. Many lower-end hardware RAID cards do not allow this, and force all disks into a single RAID-5/6 set.
That command also reports scrub info (under "scan:" in the output):
  • In progress or complete
  • If complete, when it completed
  • Various error counters
As for starting scrubs, manually it's with zpool scrub POOL_NAME. The GUI has places for you to automate both Pool scrubs and SMART tests.

The mirrored SSDs sound fine for boot drives. (Perhaps overkill in mirroring and size, but it adds to the reliability.) One thing you can do for SSDs is reduce their offered size to, say, 40GB; in Linux, it's the hdparm -N command and option. This lets the drive KNOW it has more spare sectors and wear-leveling area, which may make a difference in its life expectancy if those SSDs are used.
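A sketch of that resize, assuming 512-byte sectors and an example device name. It only prints the hdparm commands so nothing is changed by accident:

```shell
#!/bin/sh
# Sketch: shrink an SSD's reported capacity so the controller keeps
# more spare area. /dev/sdX and the 40 GB target are examples; check
# the drive with `hdparm -N` first. This only PRINTS the commands.
GB=40
SECTORS=$((GB * 1000 * 1000 * 1000 / 512))   # 78125000 sectors for 40 GB
echo "hdparm -N /dev/sdX    # show current and native max sectors first"
echo "hdparm --yes-i-know-what-i-am-doing -N p$SECTORS /dev/sdX"
```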
 

apolonio

Dabbler
Joined
Apr 17, 2018
Messages
14
Thanks for all your help; I've got the scrubbing set up. Tomorrow I do the initial sync, and Monday I go live and put the original data in read-only for a couple of weeks before decommissioning.

I will post another question regarding scripting a backup to USB drives.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
...
I will post another question regarding scripting a backup to USB drives.
In the resource section of this forum, I wrote about how I do backups of my FreeNAS to locally attached disks.
It's not meant as a formula for people to follow, but as an example and starting point. The backup script is also included.
Here is a direct link;

How to: Backup to local disks


But please, feel free to start another post on your backups.
 

apolonio

Dabbler
Joined
Apr 17, 2018
Messages
14
I recall seeing that post, and also saw a new one regarding hot-swap bays

https://forums.freenas.org/index.php?threads/using-hotswap-disks-and-on-demand-storage.70863/

Not sure if this is possible, but I would like to do two backups: one to an unencrypted USB drive, then cp that file to an encrypted USB drive.

I will take a look at your script, but another option, since I am more familiar with Linux, is to start the job from Linux. Here is the pseudo code

Code:
  1. On the local Linux box, mount the unencrypted and encrypted drives to /mnt/usb and /mnt/e-usb
  2. export via NFS the unencrypted mount point
  3. ssh to freenas, execute the NFS mount
  4. ssh to freenas and create a snapshot
  5. ssh to freenas and zfs send the snapshot to the mount point
  6. ssh to freenas and umount the NFS mount point
  7. ssh to freenas and destroy the snapshot
  8. create an md5sum of the dataset snap and copy the md5sum and snap to the encrypted drive
  9. umount both
  10. detach the USB drives and store them safely, the encrypted one going offsite


Of course, steps 3 to 7 can be a single custom script that resides on the FreeNAS server.
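Steps 3 to 7 could be sketched as one script driven from the Linux side. The host, dataset, and mount point names are all assumptions, and it defaults to printing the remote commands instead of running them:

```shell
#!/bin/sh
# Sketch of pseudo-code steps 3-7, run from the Linux box. Host, pool,
# dataset, and mount names are ASSUMPTIONS; adjust to your setup.
# DRYRUN=1 (the default) prints each remote command instead of running it.
FREENAS=${FREENAS:-root@freenas}
DATASET=${DATASET:-tank/data}
LINUXBOX=${LINUXBOX:-192.168.1.10}
SNAP="$DATASET@backup-$(date +%Y%m%d)"
DRYRUN=${DRYRUN:-1}

remote() { [ "$DRYRUN" = 1 ] && echo "+ ssh $FREENAS $*" || ssh "$FREENAS" "$@"; }

remote mount -t nfs "$LINUXBOX:/mnt/usb" /mnt/nfs-usb     # step 3
remote zfs snapshot "$SNAP"                               # step 4
remote "zfs send $SNAP > /mnt/nfs-usb/backup.zfs"         # step 5
remote umount /mnt/nfs-usb                                # step 6
remote zfs destroy "$SNAP"                                # step 7
```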

I would also have the luxury of testing a restore onto the consumer-grade FreeNAS server.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
First, FreeNAS is generally the NFS server, not the client. Client-side NFS may work on FreeNAS; it's just not common.

You don't mention what type of file system the 2 backups will be on. It can be raw, but it's likely you intend to use a file system. Linux with ZFS works quite well; I use it all the time on my 3 home computers (desktop, laptop, media server). Plus, I transitioned my backup disks to ZFS since I like the fact that the data is checksummed and verifiable (with a scrub, or just by reading the data), even for backups that reside on single disks. With multiple backups, if one disk loses data during a restore, as long as another backup has it, I am good. However, if you are not familiar with OpenZFS or ZFS on Linux, it may be beyond the time you want to spend learning. (I learned about ZFS back in 2006 on Solaris 10...)

Next, what type of encryption?
Linux LUKS?
GNU GPG? (I use this one)
The type affects how you store the data.

Now to the type of backup. There are LOTS of choices, as it's Unix. I personally use Rsync, both for my client computers' backups to my FreeNAS and from my FreeNAS to its backup disks. (I don't need the out-of-date snapshots, nor un-mounted datasets, in my backup images.)

ZFS Snapshots are good for data that is changing. You can take a snapshot, and get stable copies of the data. ZFS Send does require a snapshot as a starting point. But, if there is nothing writing on the source, and you don't intend to use ZFS Send / Receive, then you may not want a snapshot.

If you use ZFS Send to a non-ZFS filesystem, it simply ends up as a giant, all-in-one file (TAR-file concept). Fully restorable, just not as common a backup scheme as using ZFS on the target to re-create the source's datasets & Zvols. You can even transmit a ZFS Send stream through SSH back to your local Linux computer onto an EXT4 partition using:
Code:
ssh root@MY_FREENAS zfs send -Rpv MY_POOL/DATASET@SNAP | \
dd of=/mnt/usb/MY_FREENAS.backup.zfs


Anyway, there are options.

And yes, you can use hot-swap disk slots. Or USB disks. We discourage people from using USB disks as a permanent part of a ZFS pool, as the USB interface is less reliable than SATA or SAS. But for backups, I have found USB disks to be quite suitable, even when I used ZFS on Linux.
 

apolonio

Dabbler
Joined
Apr 17, 2018
Messages
14
For the USB disks, they would be ext4 and use LUKS or truecrypt/veracrypt. I am not opposed to running ZFS on Linux either; I gave it a try with CentOS 7, but I am doing something wrong and the zfs kernel module is not loading.
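Some common checks for that, assuming the DKMS packages from the zfsonlinux repo are what's installed. The sketch defaults to printing the commands only:

```shell
#!/bin/sh
# Common checks when the zfs module will not load on CentOS 7, assuming
# the DKMS packages from the zfsonlinux repo. DRYRUN=1 (the default)
# just prints the commands; run them by hand as root.
DRYRUN=${DRYRUN:-1}
run() { [ "$DRYRUN" = 1 ] && echo "+ $*" || "$@"; }

run modprobe zfs                              # try loading it directly
run dmesg                                     # check the tail for the real error
# DKMS modules must be rebuilt for the running kernel after updates:
run dkms status
run yum install -y "kernel-devel-$(uname -r)"
run dkms autoinstall
```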

I still would like to script mounting the two USB drives and writing a backup of the ZFS data to them, with one copy encrypted.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Assuming a few things;
  • FreeNAS does not have snapshots you need or want to backup
  • Your backup disks are large enough, without worrying about ZFS compression
  • Your ZFS dataset properties are not complex
you can use Rsync.

Simply set up your FreeNAS with an Rsync share. Set up your backup disks as appropriate (1 EXT4 and 1 LUKS with EXT4), then back up your FreeNAS with Rsync. You can even perform incrementals with Rsync as normal.

Much more straightforward for a Linux person.
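A sketch of that flow, driven from the Linux box. The host, paths, and the use of rsync-over-SSH (rather than a FreeNAS Rsync module) are all assumptions; it defaults to printing the commands:

```shell
#!/bin/sh
# Sketch: rsync a FreeNAS share to one plain and one LUKS-encrypted USB
# disk, run from the Linux box. Host, device, and paths are ASSUMPTIONS;
# this uses rsync over SSH for simplicity. DRYRUN=1 (default) prints only.
SRC=${SRC:-root@freenas:/mnt/tank/data/}
DRYRUN=${DRYRUN:-1}
run() { [ "$DRYRUN" = 1 ] && echo "+ $*" || "$@"; }

# Plain EXT4 disk:
run rsync -aHv --delete "$SRC" /mnt/usb/data/
# LUKS disk, opened and mounted first:
run cryptsetup open /dev/sdX1 e-usb
run mount /dev/mapper/e-usb /mnt/e-usb
run rsync -aHv --delete "$SRC" /mnt/e-usb/data/
run umount /mnt/e-usb
run cryptsetup close e-usb
```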


That said, ZFS native at-rest encryption is coming to Linux soon; it's in the current Git master. There are differences, including only 1 slot for a password or key. But some of the issues with ZFS on top of LUKS (or FreeBSD's GELI) can be problematic, like out-of-order writes and a write cache that ZFS doesn't manage. None of those are a problem if there is no crash. (ZFS has GREAT crash handling.)

Last summer I played with ZFS native at-rest encryption. You can have the entire pool encrypted, or just a dataset or Zvol. There is even a feature for using ZFS Send with encryption so that the data stays secure in transit.

But, it's not ready for production. Just letting you know it's coming.
 