
Relocate Jails to SSD helping HDD sleep?

Status
Not open for further replies.

mka

Member
Joined
Sep 26, 2013
Messages
107
Hi,

ATM my FreeNAS server is running 24/7 although I don't need it to be; I let it run continuously just for the better ARC performance. At about 3am the last PC has finished its backup, and the server won't be accessed until about 7pm because nobody is home. During that time I want to save energy and expect the drives in my pool to power down.

I set the drives to spin down after 30 minutes via the FreeNAS WebGUI, and this worked initially. But now it doesn't, and I'm fairly certain it's because I recently created the jail location on my main pool.

The only thing currently running in that jail is the Plex server as a PBI/plugin, but it is probably writing logs and such to its system directories, keeping the whole pool from spinning down. I have also disabled the PMS's timed checks for changed media content.
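One way to confirm that theory (a rough sketch; the pool name and jail path below are placeholders for a typical setup, not taken from this thread) is to watch pool I/O while the box should be idle and look for recently modified files under the jail tree:

```shell
# Print pool I/O statistics every 30 seconds; non-zero write columns while
# the system should be idle mean something is still touching the pool.
# "main_pool" is a placeholder; substitute your own pool name.
zpool iostat main_pool 30

# List files under the jail root modified in the last hour to find the
# culprit (adjust the path to your actual jail location).
find /mnt/main_pool/jails -type f -mmin -60 | head -n 20
```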

I have 6 Western Digital Red drives and one spare 64GB SSD, and I'm considering relocating my jail/plugin path to that drive. Can this be done in a non-destructive way, so that the PMS keeps running and the location stays compatible with what the WebGUI expects as a jail root? Or could I just stop the server, move everything to the SSD location, and edit the paths from the WebGUI?

I don't want to break compatibility with upcoming FreeNAS versions, or with PBI updates for the Plex server in particular.

Thank you!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You are correct that the jail is probably keeping your pool awake at night (pun intended), but there is no easy way to move your jail to a different pool. :(

There is someone who probably has the answers as to how you could do this from a VM. But I'll let him speak for himself if he wants to take on this adventure. I know you aren't the first person to have this issue, and if he wrote a How-To guide there would probably be others who'd be interested. :)
 

Yatti420

Neophyte Sage
Joined
Aug 12, 2012
Messages
1,437
If you are willing to lose redundancy and use only the single SSD for your jails, it should work..

Could you not just move the jail root to the newly created single-disk SSD pool? If the SSD were to fail you would lose all jails and all of the plugins installed, wouldn't you?.. Would this brick the startup? Are jails 100% required to be "present" on startup?

Am I missing what happens with striping? Data isn't magically striped across all pools, is it? Will you lose redundancy on the RAIDZ1/Z2 pool (not sure of the OP's setup) if you simply add a single disk as a separate pool?

If I understand the drive options correctly, you can't add the SSD into the existing Western Digital pool.. You could of course create a new single-disk (jail) pool, ill-advised as that is.. What if you mirrored the SSD for some extra safety?..

Am I completely off base here? I was thinking a few big USB sticks could be used..
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yes, you can create the second pool. But you can't just cp the files over. FreeNAS' config has all sorts of ties to the location where the jails are and you are about to move it. FreeNAS then takes a dump all over itself.

If you know the ins and outs of the FreeNAS config file, and possibly the other places where the location is stored, you have a chance of moving your data to the new pool, updating the locations, and rebooting. But without that intricate knowledge, there is no moving of jails; only destroying the pool and recreating it.

In all seriousness, letting hard drives sleep wears them out faster. But it's totally a personal choice an admin gets to make for his server. Even when I leave for the weekend I leave the server up.
 

Yatti420

Neophyte Sage
Joined
Aug 12, 2012
Messages
1,437
So moving a jail is out of the question, but creating a new setup on a new device should be easily doable..

Agreed regarding drives sleeping.. I'm in power-saving mode at the moment, so nothing is running during peak times.. I'd optimally like to store my jails elsewhere so I don't have to keep a drive up or have it constantly spinning up and down.. Just haven't gotten around to redoing the jails yet..
 

fracai

Neophyte Sage
Joined
Aug 22, 2012
Messages
1,212
This is actually what prompted me to request tmpfs mounts in jails. You could redirect logging to the RAM disk and avoid spinning up the drives.

The same could be done by mounting a path from the SSD in the jail.

This leaves the jail in the pool, but requires tracking down the activity that is leading to the insomniac access. This wouldn't work if the access is from reads, like refreshing a media library.
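As a sketch of that second suggestion (the dataset, jail, and path names here are illustrative assumptions, not from this thread), a nullfs mount can point the jail's log directory at the SSD:

```shell
# Create a dataset on the SSD pool to hold the jail's logs.
zfs create ssd_pool/jail-logs

# Null-mount it over the jail's /var/log so log writes land on the SSD
# instead of waking the spinning pool. The jail path is a placeholder.
mount_nullfs /mnt/ssd_pool/jail-logs /mnt/main_pool/jails/plexmediaserver_1/var/log

# The tmpfs variant (note: logs are lost on reboot) would instead be:
# mount -t tmpfs tmpfs /mnt/main_pool/jails/plexmediaserver_1/var/log
```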
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
This leaves the jail in the pool, but requires tracking down the activity that is leading to the insomniac access. This wouldn't work if the access is from reads, like refreshing a media library.

Which I believe CouchPotato, Plex, SickBeard, and one other (I forget) do constantly.

I think I wrote a comment on that ticket. I'm not sure it's a good idea, because it'll mean more RAM must be allocated to temp, and we're already at 8GB as the established minimum. How much stuff are we going to put in RAM before we need a 16GB minimum? LOL
 

Yatti420

Neophyte Sage
Joined
Aug 12, 2012
Messages
1,437
So creating a new zpool on an SSD is indeed the safest way to proceed? Or, for optimal performance, format the SSD as UFS? Then recreate the pool and install/set up on the SSD? I believe a few people store their jails this way.. And what if I were to disconnect the SSD (or USB, etc.) after the fact: would FreeNAS fail to boot because no jails can be loaded?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You can't put jails on UFS. ZFS is your only option.

I'm not sure what would happen to FreeNAS if the jail disk were to be removed/fail. Probably nothing good though!
 

Dusan

Neophyte Sage
Joined
Jan 29, 2013
Messages
1,165
I'm not sure what would happen to FreeNAS if the jail disk were to be removed/fail. Probably nothing good though!
Actually, nothing nasty happens. The jails won't start, of course, but FreeNAS will continue running without problems.
Yes, you can create the second pool. But you can't just cp the files over. FreeNAS' config has all sorts of ties to the location where the jails are and you are about to move it. FreeNAS then takes a dump all over itself.
In reality, the config DB contains only a single reference to the jails location, and you can change it via the GUI. The reason you can't just cp the files over is that warden (the jail manager) uses ZFS features to use disk space efficiently, which is also why you can't use UFS for FreeNAS jails. One plugin jail consumes about 700MB of disk space, so with 5 plugins/jails you would waste 4 × 700MB = 2.8GB on basically identical files. Instead of creating copies, warden takes a snapshot of the plugin template dataset and then creates each plugin jail as a ZFS clone of that template snapshot.

If you cp this to a new location you lose the snapshot/clone relationships; therefore you need to use ZFS replication to relocate the jails. Another small complication is that warden explicitly sets the template dataset's mountpoint.
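In other words, the template/clone relationship looks roughly like this (the dataset names below are illustrative, not necessarily the exact ones warden uses):

```shell
# Warden snapshots the plugin template dataset once...
zfs snapshot main_pool/jails/.warden-template-pluginjail@clean

# ...and creates each plugin jail as a clone of that snapshot.
zfs clone main_pool/jails/.warden-template-pluginjail@clean \
    main_pool/jails/plexmediaserver_1

# A fresh clone consumes almost no extra space; only blocks that later
# diverge from the template are charged to the clone.
zfs list -o name,used,origin -r main_pool/jails
```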

Enough theory, this is the "Relocate jails how-to" :) :
Assumptions:
  • The pool you are transferring the jails from is main_pool
  • The destination pool is ssd_pool
  • The jail root (Jails->Configuration) is /mnt/main_pool/jails
  • The new jail root will be /mnt/ssd_pool/jails
Steps:
  1. Turn off all plugins (Plugins->Installed)
  2. Stop all jails (Jails->View Jails)
  3. Run these commands via CLI:
[PANEL]
zfs snapshot -r main_pool/jails@relocate
zfs send -R main_pool/jails@relocate | zfs receive -v ssd_pool/jails
zfs get -rH -o name -s received mountpoint ssd_pool/jails | xargs -I {} sh -c "zfs set mountpoint=/{} {}; zfs mount {};"
[/PANEL]
  4. Change the Jail Root to /mnt/ssd_pool/jails (Jails->Configuration)
  5. Start jails/plugins
  6. Check that everything works and destroy the original jails dataset (main_pool/jails)
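Before destroying the original dataset, it's worth sanity-checking that the clone relationships and mountpoints survived the transfer; a quick check (dataset names follow the assumptions above) might look like:

```shell
# Each plugin jail should list the template snapshot as its origin, and
# every dataset should be mounted under /mnt/ssd_pool/jails.
zfs list -r -o name,origin,mountpoint ssd_pool/jails

# The snapshots used for the transfer can be cleaned up afterwards:
zfs destroy -r ssd_pool/jails@relocate
```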
 
J

jkh

Guest
That's a pretty good template for moving jails. Of course, if this were my system, I'd just create a new pool on the SSD(s), set the jail root to be a dataset on that new pool, and then recreate all of my jails (or reinstall all the affected plugins) from scratch. I would do that because I strongly suspect that all of the work to recreate things from scratch would still occupy less wall time than trying to cleverly migrate / move things across. :)
 

Dusan

Neophyte Sage
Joined
Jan 29, 2013
Messages
1,165
Yeah, it depends on the situation. However, with more complex plugins (ownCloud, ...) this will be much faster than configuring everything from scratch (if you can even remember all the config changes you made), especially now that somebody has already prepared the steps ;). It even benefits the simpler plugins: for example, migrating Transmission this way preserves all incomplete transfers, while with a new install you may have to start everything anew.
I actually just added a new item to my list of dev things: add a "Relocate Jails" button to FreeNAS :).
 
J

jkh

Guest
A relocate jails option would be awesome! Ideally, any individual jail should also be relocatable. Presumably, we could see situations arising in the next couple of years where people have used jails to do virtual hosting for customers, or have otherwise stuffed quite a bit of... stuff... into a jail, and now need to move just that one jail to another pool. If you think of jails as poor man's VMs, then we basically want the equivalent of vMotion for jails!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'd even be okay with something that shuts down the jails, moves the jails, then starts them instead of them having to be online and operating during the transfer.
 

Dusan

Neophyte Sage
Joined
Jan 29, 2013
Messages
1,165
I'd even be okay with something that shuts down the jails, moves the jails, then starts them instead of them having to be online and operating during the transfer.
I just need to wrap the steps above in python and hook it into the GUI.
If you think of jails as poor man's VMs, then we basically want the equivalent of vmotion for jails!
Hmm, Python won't be enough here :D. Migration with full virtualization is "easy", but I'm not sure how this would work with jails: the jails and the host share the same kernel, and no resources are virtualized (well, maybe except VIMAGE). You would need to take all the relevant kernel bits and pieces and "implant" them into another kernel. I'm not sure it's even feasible; for example, how would it handle the case where the PID of a transferred process is already in use on the new system?
 

Bmck26

Member
Joined
Dec 9, 2013
Messages
44
Is it possible to run jails from USB drives? I have the same problem as mka, except I don't have any spare SATA ports left to use for an SSD jail disk.

My server never powers down, which I assume is because I have the Plex and Transmission plugins running and Plex is constantly refreshing the library. I'm the only person who uses the server, so there's no point in it using more power than necessary from the time I go to bed to when I get home from work in the evenings around 6pm. It's cobbled together from leftover parts, so it's not the most energy-efficient build to begin with, but I don't care how much it uses when it's being utilized. I get good transfer rates on the LAN (110 MB/s peak on a CIFS share), so I'm pleased with the performance. I just want it to go to sleep when I'm not at home.
 

Bmck26

Member
Joined
Dec 9, 2013
Messages
44
Yes, but you need to create a ZFS volume on the USB drive.

Thanks, I'm still pretty new to FreeNAS and still learning about its functions and capabilities. I'll try the guide you posted earlier for the jail relocation. I have some spare 32GB USB 3.0 drives that I can use for this.
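For reference, a rough sketch of what creating a pool on a USB stick involves from the CLI (the device name is a placeholder; on FreeNAS it's generally better to do this through the GUI volume manager so the pool is registered in the config, and note that this wipes the stick):

```shell
# Find the USB stick's device name first (e.g. da1) via dmesg or:
camcontrol devlist

# Wipe and partition the stick with 4K alignment, then build the pool.
# WARNING: this destroys everything on da1.
gpart destroy -F da1
gpart create -s gpt da1
gpart add -t freebsd-zfs -a 4k da1
zpool create usb_jails /dev/da1p1
```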
 

MaIakai

Member
Joined
Jan 24, 2013
Messages
25
God, I'd love a "Relocate Jail" option.

I'm prepping a complete destruction of my ZFS pool. I started off with 3x 320GB drives, then replaced them with 3x 500GB, then 2TB drives.
So now I'm not 4K-aligned and performance sucks. I'm about to add more drives to the mix and move to RAIDZ2; currently I'm rsyncing my 1.8TB of data to a USB drive, which will take a while.
 

rm-r

Member
Joined
Jan 7, 2013
Messages
166
I have some spare 32Gb USB 3.0 drives that I can use for this.
USB 3 is currently disabled by default.
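(On stock FreeBSD, USB 3.0 support comes from the xhci driver, which can be loaded via a loader tunable; whether this is safe or supported on a given FreeNAS release is another question. A sketch of the config fragment:)

```shell
# /boot/loader.conf -- load the USB 3.0 (xHCI) driver at boot.
xhci_load="YES"
```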
 