Hot-pluggable SAS Expander?

Status
Not open for further replies.

Phantum

Cadet
Joined
Jan 6, 2015
Messages
5
Long time lurker, first time poster. I've been Googling all day and have yet to come up with anything; maybe you folks can help me out.

So I've got an LSI 9211-8i (flashed to IT) in my FreeNAS box, connected to a Rackable SE3016. This Rackable unit houses 8x 3TB WD Reds and acts as my local 'archive' pool. All data here is then pushed to Crashplan using the plugin. What I'd like to do is bring this pool up once a week, scrub it, rsync over the data from my other pools (same box), push it to Crashplan and then turn the Rackable unit back off until next backup... all without bringing the whole NAS down.

How should I go about this? Do I have to export the pool and turn off the plugin every time I want to power off the unit, then import the pool, turn the plugin back on, do the backups, and repeat? Or do I really have to bring the whole box down just to turn off the unit, and then bring the box back up with the unit left off? This 'archive' pool doesn't do anything that would require constant uptime and isn't accessible to any user other than root, who runs the rsync operation. I realize that by not exporting the pool and booting with the unit turned off, FreeNAS will complain about the "missing" pool, but that's not really a big deal because this pool isn't required for day-to-day operations. I just don't want to have to keep bringing FreeNAS down and up.
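
For reference, the manual cycle I have in mind for the power-down side looks roughly like this. The pool name and the CrashPlan jail name are just placeholders for whatever I actually end up with, and the jail could just as easily be stopped from the GUI instead of warden:

Code:
# 1. stop the CrashPlan plugin jail so nothing holds the 'archive' pool open
warden stop crashplan_1

# 2. cleanly export the pool
zpool export archive

# 3. confirm the pool is no longer listed, then power the SE3016 off
zpool list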

I've also tried connecting this Rackable unit to a FreeNAS guest on my ESXi box so that I could separate the 'archive' pool/unit from my bare metal box and power the unit up/down along with the VM. I have everything set up, but even with passthrough, I quickly realized (via an rsync operation from the pools on my bare metal box to the guest) that the performance of this setup was abysmal, so that's out... unless anyone has any ideas on how to clear that up? If not, no worries, because I'd much rather keep the whole operation on bare metal.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
You'd have to export it and import every time. No need to power off the server, in principle.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
You'd have to export it and import every time. No need to power off the server, in principle.

In principle. As a matter of practicality, it's sucky because of the export/import. But SAS is absolutely hot-pluggable.
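
Rough sketch of the hot-plug side, assuming a pool called 'archive' (yours will obviously be named whatever it's named):

Code:
# power the SE3016 back on, then tell CAM to rescan and check the disks showed up
camcontrol rescan all
camcontrol devlist

# bring the pool back online
zpool import archive
zpool status archive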

I've also tried connecting this Rackable unit to a FreeNAS guest on my ESXi box so that I could separate the 'archive' pool/unit from my bare metal box and power the unit up/down along with the VM. I have everything set up, but even with passthrough, I quickly realized (via an rsync operation from the pools on my bare metal box to the guest) that the performance of this setup was abysmal, so that's out... unless anyone has any ideas on how to clear that up? If not, no worries, because I'd much rather keep the whole operation on bare metal.

What's the issue? In my experience, virtualizing a FreeNAS doesn't generally compromise performance all that much. Might even be faster in some specific cases.

Is it that it's slow having to do the rsync over the network? Or is the Rackable unit itself performing poorly?
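
A quick way to separate the two, assuming iperf is available on both ends (the hostname here is made up):

Code:
# on the ESXi guest
iperf -s

# on the bare metal FreeNAS box, run a 30 second test against it
iperf -c freenas-vm -t 30

If raw TCP throughput looks fine, the network isn't the problem and you can start looking at the disks or at rsync itself.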
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I had performance issues with virtualized FreeNAS. My problem seemed to be that the CPU freq was going down to around 400MHz when the server was idle, but when the server should have woken up under load, sometimes it wouldn't clock back up, or would take 10-15 seconds to do so. Not sure what the problem was, but I don't virtualize anymore so I don't care. ;)
 

Phantum

Cadet
Joined
Jan 6, 2015
Messages
5
In principle. As a matter of practicality, it's sucky because of the export/import. But SAS is absolutely hot-pluggable.



What's the issue? In my experience, virtualizing a FreeNAS doesn't generally compromise performance all that much. Might even be faster in some specific cases.

Is it that it's slow having to do the rsync over the network? Or is the Rackable unit itself performing poorly?

The former, network performance. My early testing is showing that this poor network performance to/from guest VMs on ESXi isn't limited to the FreeNAS guest. The Rackable unit itself performs great in either setup (for rsync within the setup itself, e.g. a local rsync from one pool to another). Rsync from the bare metal FreeNAS box to the ESXi guest was 20MB/sec.
 

Phantum

Cadet
Joined
Jan 6, 2015
Messages
5
I had performance issues with virtualized FreeNAS. My problem seemed to be that the CPU freq was going down to around 400MHz when the server was idle, but when the server should have woken up under load, sometimes it wouldn't clock back up, or would take 10-15 seconds to do so. Not sure what the problem was, but I don't virtualize anymore so I don't care. ;)

Haha, yeah. It's this, amongst various other reasons, that makes me really rather not want to be virtualizing my precious data. I'm certainly an advocate for bare metal.
 

Phantum

Cadet
Joined
Jan 6, 2015
Messages
5
You'd have to export it and import every time. No need to power off the server, in principle.

Have you personally tried this? I mean, I've got the 'archive' pool backed up to Crashplan and, to the best of my knowledge, the integrity of the pool is intact, so I could give it a shot: turn off the plugin, export the pool, and hit the power on the SAS unit (with fingers crossed). Then, when the next backup comes around, I'd turn the unit on, import the pool, run a scrub, rsync the new/changed data from the other pools, then start the plugin and hope it connects to the client and backs up. It'd just be nice to know whether this is acceptable or standard procedure for a setup like this before experimenting with my precious data.
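
In other words, something like this on backup day, where the pool, jail, and dataset names are just placeholders for mine (and the jail could also be started from the GUI):

Code:
# power the SE3016 on, then:
zpool import archive
zpool scrub archive
# wait for the scrub to finish before touching the data
zpool status archive

# pull over the new/changed data from the other pools
rsync -a /mnt/tank/data/ /mnt/archive/data/

# start the CrashPlan jail again and let it push to the cloud
warden start crashplan_1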
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
Have you personally tried this? I mean, I've got the 'archive' pool backed up to Crashplan and, to the best of my knowledge, the integrity of the pool is intact, so I could give it a shot: turn off the plugin, export the pool, and hit the power on the SAS unit (with fingers crossed). Then, when the next backup comes around, I'd turn the unit on, import the pool, run a scrub, rsync the new/changed data from the other pools, then start the plugin and hope it connects to the client and backs up. It'd just be nice to know whether this is acceptable or standard procedure for a setup like this before experimenting with my precious data.
No, I haven't.

If the pool is properly exported and nothing tries to access it, this process can work. Whether it's a reasonable solution long-term (a lot of manual tinkering) is another question, and I share jgreco's pessimism on this point.
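
One sanity check before you export: make sure nothing still has files open on the pool. Something like this, assuming the pool mounts at /mnt/archive:

Code:
# anything still holding files open on the archive pool?
fstat -f /mnt/archive

# if nothing shows up besides the header, the export should go through cleanly
zpool export archive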
 