ZFS advice please

Status: Not open for further replies.

HAL9000 · Dabbler · Joined Jan 17, 2013 · Messages: 19
Hi,
I am new to the forums, but have been building and using a FreeNAS server for almost a year.
System
16 GB memory
7x 2TB WD Green on LSI SAS 9207-8i (ZFS RAIDZ2) Pool#1
3x 3TB WD Red on P8P67-M onboard SATA (UFS RAID0) Pool#2 - temporary

Pool#1 is at 7.6 TB capacity with 1.2 TB remaining.

I want to create Pool#2 and transfer all my data from Pool#1 temporarily. Then rebuild Pool#1 with 8x 2TB WD Green and transfer the data back from Pool#2.
(Long story short: I had 8 originally; 1 died and was replaced, but not before I built Pool#1 with the remaining 7 drives.)

Eventually, Pool#2 will be taken down, reformatted as ZFS RAIDZ2 and added to the total capacity of the FreeNAS server.

My question: is there a simple way to transfer the data from Pool#1 to Pool#2? Rsync? Replication? What is best for this job?

Thanks in advance for your advice.
 

jgreco · Resident Grinch · Joined May 29, 2011 · Messages: 18,680
If pool 2 is UFS as indicated, you cannot replicate the contents onto it.

However, the fastest thing to do might indeed be to use replication - from the command line. This is a little scary and you will not want to do it unless you really understand it.

ZFS replication works through snapshots, and then "zfs send" piped to something like "ssh remote zfs receive".

But you COULD instead do:

"zfs snapshot pool1@copy"
"zfs send pool1@copy > /mnt/pool2/pool1.zfssend"

at which point pool2 has a file containing a ZFS replication view of pool1. Then you wipe and rebuild pool1 the way you want, then do

"zfs receive pool1 < /mnt/pool1/pool1.zfssend"

and in theory you're all set to go, and since you haven't had to do many thousands of seeks on pool2 to create individual files, it could be a lot faster. But also a lot scarier. ;-)

The less scary way is to use rsync, or even just cp with the right arguments.
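Put together, the whole round trip might look like the sketch below, using the pool names from this thread. It is shown with a preview wrapper so nothing runs by accident; remove the `run` prefix to actually execute each step. The `-F` on receive is a likely requirement (it lets the stream overwrite the freshly created, empty filesystem), not something stated above.

```shell
# Preview wrapper: prints each command instead of executing it,
# so the sequence can be reviewed first.
run() { echo "+ $*"; }

# 1. Freeze pool1's current state in a snapshot (instant, no data copied yet)
run zfs snapshot pool1@copy

# 2. Serialize that snapshot into a single file on pool2
run sh -c 'zfs send pool1@copy > /mnt/pool2/pool1.zfssend'

# 3. Destroy and rebuild pool1 the way you want (GUI or zpool create)

# 4. Stream the saved file back onto the rebuilt pool
#    (-F forces the receive to overwrite the new empty filesystem)
run sh -c 'zfs receive -F pool1 < /mnt/pool2/pool1.zfssend'
```

Note that a plain `zfs send` covers only the named dataset; child datasets would need `-R` on the send, which is a detail to check against your layout before relying on this.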
 

HAL9000
I can do scary :eek:, and will give the zfs send/receive option a try after I burn the most important data to DVD. The majority of the data is backups of my work and personal computers' OS and data, so no real loss if the copy fails, unless those computers decide to crash before being backed up.

I thought about using zfs for pool#2 at the beginning, but then I would need more than 3 drives to get the capacity to hold pool#1 data.

Thanks again for the advice. I will post again with the results as soon as I get the new 3TB drives installed in the server.
 

paleoN · Wizard · Joined Apr 22, 2012 · Messages: 1,402
I thought about using zfs for pool#2 at the beginning, but then I would need more than 3 drives to get the capacity to hold pool#1 data.
Since pool#2 is temporary, just do a 3-disk ZFS stripe.
 

HAL9000
Thought I borked the server last night after replacing the HighPoint controller with the LSI card; did not think of backing up the data first (doh!). Ended up with many messages about mismatched geometry and something about corrupted files. Figured my only option was to start fresh with a new install of the latest release and rebuild the pools. Luckily, I tried importing the pool, which got my system up and running after loading the saved configuration file. The ZFS pool upgrade went off without a hitch. The only problem was that FreeNAS hung after a shutdown, with something about a missing file and dropping to the CLI. I am beginning to wonder if the USB stick has issues of its own and should be replaced with a better brand.

Can I make a 4-disk ZFS stripe if three drives are 3TB and one is 1TB? Figured an extra TB might be useful for the temporary pool. I promise to read the guides, but there are many times when the instructions for using the CLI or creating scripts are really confusing. The old MS-DOS was a walk in the park compared to trying to learn Unix, and this old dog is really trying to pick up some new tricks. :rolleyes:
 

jgreco
You can make a 4-disk ZFS stripe, but if all four disks go into a single vdev, that vdev is limited by the size of the smallest component device: effectively 4 x 1TB, or 4TB. Probably not what you want.

You can make a 3-disk ZFS stripe of the 3TB drives, a 9TB vdev. You can then just add the 1TB to the pool as well for 10TB total. Pretty sure that's what you're really looking for. Again, beware that there's no redundancy while you do this.
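From the CLI, the two-step layout above might look like this sketch. The device names `ada0`-`ada3` are placeholders (check yours with `camcontrol devlist` on FreeBSD), and the commands are shown behind a preview wrapper so the sequence can be reviewed safely first:

```shell
# Preview wrapper: echoes commands instead of running them.
run() { echo "+ $*"; }

# Stripe the three 3TB Reds into a ~9TB pool (no redundancy!)
run zpool create pool2 ada0 ada1 ada2

# Add the 1TB disk as a fourth top-level vdev: ~10TB total
run zpool add pool2 ada3
```

In the FreeNAS GUI the Volume Manager does the same thing when you create a stripe and then extend the volume with another disk.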
 

HAL9000
Understood, jgreco. Capacity for the temporary pool is my main concern. With luck I will not need redundancy, as it is only temporary. Will set it up for 10TB as you have described. Thx!
 

HAL9000
Well, it took almost 3 weeks to rebuild the NAS in its new server case, add in the extra HDDs, install a better power supply and copy the data over. I took it slow and tried not to rush anything.

System setup
MB: P8P67-M
RAM: 4x 4GB DDR3 G.Skill Ripjaws (wish my local computer store had the 8GB sticks back when I built the first NAS with spare parts)
CPU: G530 Intel Celeron (should have gone with a CPU with greater processing bandwidth?)
HDD: 8x 2TB WD Green, 3x 3TB WD Red, 1x 4TB WD Green, 1x 1TB WD Black
HDD Controllers: LSI SAS 9207-8i (replaced the HighPoint 2720 because I got tired of hacking in the drivers with each update; figures that 8.3.0 has the drivers now) and onboard SATA
USB: NZXT IU01 Internal USB Expansion Module
USB memory: Kingston DataTraveler Micro 8GB Black USB Flash Drive (installed on NZXT IU01)
Case: iStarUSA D-410-B10SA-BLUE Blue Zinc-Coated Steel 4U Rackmount Server Case (replaced the side fans with Corsair Air Series AF120 Quiet Edition (21 dBA))
Power supply: Topower TOP-800WS Nano Series 800W 80PLUS Silver
UPS: CyberPower CP1500PFCLCD (works with CP1000PFCLCD NUT driver)

So what have I learned?

1) My original system had reached 75% capacity, so I figured it was time to rebuild and put in more drives. I used rsync to copy a few important folders over to pool2 just in case the snapshot borked. I noticed the folders were recreated quickly, but not the larger backup files in the folders. There appeared to be no activity... what gives? Well, rsync was busy copying the files in the .recycle folder! After locating every .recycle folder and deleting the contents, the used capacity dropped to 28%. I recall a posting for an older version of FreeNAS on how to create an auto-delete script for the .recycle content. Will give this a go later.

rsync took at least 12 hours to complete as some of the files were almost 300GB; I left it to run overnight.
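Excluding the recycle directories up front would have avoided copying all that deleted data. A sketch of what that invocation might look like (the destination path is an assumption, not from the thread):

```shell
# --exclude keeps Samba's per-share .recycle directories (and all the
# deleted files they hold) out of the copy; -a preserves permissions,
# times and symlinks. Paths here are hypothetical examples.
RSYNC_CMD="rsync -a --exclude=.recycle/ /mnt/pool1/ /mnt/pool2/pool1-copy/"
echo "$RSYNC_CMD"   # preview; run it once the paths are confirmed
```

Adding rsync's own `-n` (dry-run) flag on a first pass is a cheap way to confirm the exclude pattern matches before committing to a 12-hour copy.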

2) Then I followed jgreco's "scary" advice.

This command created a 3TB ZFS snapshot of pool1:

zfs snapshot pool1@copy

This command copied the snapshot to pool2, saving the stream as the file "pool1.zfssend":

zfs send pool1@copy > /mnt/pool2/pool1.zfssend

Next, pool1 was detached; I made sure to uncheck the box about deleting shares (no point making more work since I wanted to use the original shares) and only checked the box to delete data. Then I used the Volume Manager to rebuild pool1 with 8x 2TB drives.

Now the scary part: using zfs to put the file system back onto pool1.

Through trial and error I found the proper command:

zfs receive -F pool1 < /mnt/pool2/pool1.zfssend

The -F is required to overwrite the existing file system on the rebuilt pool1 with the contents of pool1.zfssend. It took several hours to complete. Pool1 is now operational with added capacity and has its original file system restored. I tested several files and everything appears to be in working order.
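Before trusting the restore completely, a scrub is a stronger check than spot-testing files, since it reads and verifies the checksums of all allocated data. A sketch, again behind a preview wrapper:

```shell
# Preview wrapper: echoes commands rather than running them.
run() { echo "+ $*"; }

# Verify checksums across the whole rebuilt pool
run zpool scrub pool1

# Check scrub progress/results and confirm datasets and space look right
run zpool status pool1
run zfs list -r pool1
```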

Next steps: delete the pool2 data once I am absolutely certain pool1 is A-OK, figure out the auto-delete script for .recycle, and get help with why my HDD temperature report script will run from the CLI but not when automated as a cron job (something about not having permission). But those are for other threads elsewhere in this forum. - Done. Amazing what a little reading and noticing the details does for success.
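For the .recycle auto-delete, one common approach is a find one-liner run daily from cron. The 14-day retention window and the default pool path below are assumptions, not details from this thread:

```shell
#!/bin/sh
# Purge deleted-file contents older than 14 days from every Samba
# .recycle directory under ROOT. Window and default path are examples;
# adjust to taste. Intended to be run daily from cron.
ROOT="${1:-/mnt/pool1}"
find "$ROOT" -type d -name .recycle \
    -exec find {} -type f -mtime +14 -delete \; 2>/dev/null || true
```

Running it as root from cron also sidesteps the kind of permission error described above, since the recycle bins are owned by the share users.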

Hope this is helpful to someone else down the road.

Thanks again to everyone for the helpful comments and suggestions.
 