Adding storage to existing hardware

vafk

Contributor
Joined
Jun 22, 2017
Messages
132
Hello everybody, and happy Mother's Day,

I run an HP Microserver (main) with 4x 3.64 TiB. My replication HP Microserver (backup) has 4x 1.82 TiB, and I have extended it with another 4x 1.82 TiB, so it now has a total of 8 disks.

What would be the best layout for adding the new disks?

I have plenty of free space on main, so backup does not need 100% of main's capacity. Should I therefore choose extra redundancy to be on the safe side? These 8 disks are old, so they may fail sooner than new ones.

My question about what to choose (Mirror / Stripe / Log (ZIL) / Cache / Spare) may sound naive, but because the existing 4 disks contain real data, I do not want to experiment and risk a mistake. Many thanks!
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hey vafk,

There are many details missing from your post, so we cannot give you a highly customized answer...

One way to increase storage space is auto-expand: by replacing every drive in the pool with a bigger one, ZFS will auto-expand the pool once the last drive is in place. Depending on your setup, you may or may not be able to do this without temporarily degrading your pool.
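Should you ever go that route, the sequence is roughly the following (a sketch only; "tank" and the da device names are placeholders for your own pool and disks, and each replacement must finish resilvering before you start the next one):

zpool set autoexpand=on tank    # let the pool grow once every drive has been replaced
zpool replace tank da0 da4      # swap one old drive for a bigger one, then wait for the resilver
zpool status tank               # check resilver progress; repeat for each remaining drive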

Adding storage to an existing pool by adding drives is also possible: you create a new vDev and add that vDev to the pool. It is possible to mix and match vDev types in a single pool, but this is not recommended. For example, if your actual pool is RaidZ2, you can add a mirror vDev to it. Usually you keep all vDevs in a pool of the same type.
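For reference, adding a matching vDev is a single (and irreversible) command, along the lines of the following sketch, where "tank" and da4-da7 stand in for your pool and the four new disks:

zpool add tank raidz2 da4 da5 da6 da7   # second RaidZ2 vDev; the pool then stripes data across both vDevs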

As for which kind of vDev would suit you, we need to know how you will use it. RaidZ2 is not good at IOPS, so it is not a strong performer for something like iSCSI; striped mirrors (Raid10) do better there. Conversely, for long sequential reads, RaidZ2 will outperform Raid10...

Usually, adding RAM speeds up the server more than caching drives do, for both read and write requests. If you need more performance, add RAM before anything else. Be careful not to shoot yourself in the foot by turning on deduplication, and avoid running too many plugins / jails on the server. Also be aware of how resource-intensive yours are, because some plugins require more resources than others.

But until we have many more details about your setup and needs, we cannot give you more precise recommendations.
 

vafk

Contributor
Joined
Jun 22, 2017
Messages
132
Heracles,

sorry for having sent incomplete information.

zpool status reports raidz1-0 (with four 2 TB disks)

The backup server has 16 GB RAM and is only intended to back up data from the main server, so performance is not necessary at all.

For me the ideal would be to extend the current volume with the additional 4 disks, gaining both space and redundancy, so that if one drive fails, rebuilding the volume will not take as long as it would without a spare drive.

Thank you
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hi again,

The backup server has 16 GB RAM and is only intended to back up data from the main server, so performance is not necessary at all.

So there is no point looking at any kind of caching, either read or write...

zpool status reports raidz1-0 (with four 2 TB disks)

That's a no-go... RaidZ1 does not provide appropriate protection and will betray you down the road. Unfortunately, now that this vDev is built, you will need to empty it and destroy it before you can re-design it.

For me the ideal would be to extend the current volume with the additional 4 disks, gaining both space and redundancy

Unfortunately, that is not possible. A pool fails the moment any of its vDevs fails. No matter what kind of vDev you add to that pool, the pool will never be any stronger than this existing RaidZ1 vDev.

An option could be to add 2 big enough drives as a mirror in a new pool. You migrate your data to that new mirror and empty your shaky RaidZ1. Once empty, you destroy the RaidZ1 and re-design your pool with the new drives and a new redundancy strategy. Because it is a backup, you may instead rely on the original data on the main server for a while: you destroy your backup server, rebuild it, and once done, re-sync the original data to your new backup server.
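A rough sketch of that first path, with placeholder names only ("temp" for the interim mirror, "nas2" for the existing RaidZ1, da4/da5 for two spare drives):

zpool create temp mirror da4 da5                  # interim pool on two spare drives
zfs snapshot -r nas2@migrate                      # freeze the data on the old pool
zfs send -R nas2@migrate | zfs recv -F temp/old   # copy everything over
zpool destroy nas2                                # once verified, drop the weak RaidZ1 and rebuild it

The second path (treating the main server as the safety copy) skips the send/receive step entirely: you destroy the backup pool, build the new one, and let the replication fill it again.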

A chain is only as strong as its weakest link, and that RaidZ1 is the weak link in your pool. You may add the strongest link you wish; as long as this weak link stays, your pool is no stronger than that one vDev. Technically, I think it is possible to completely mirror that vDev, which would add some strength, but at a maximum cost in drives: you would end up with the usable space of only 3 drives out of 8...

Good luck hardening that pool,
 

vafk

Contributor
Joined
Jun 22, 2017
Messages
132
Many thanks. I am slowly getting there... Because this is a backup server, I could take the risk of redesigning the ZFS pool without saving the data (because that is on the main server, and as soon as the new pool is created the two servers will sync - hopefully). How do you suggest I configure the 8x 2 TB drives and 16 GB RAM?
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hi,

because that is on the main server, and as soon as the new pool is created the two servers will sync - hopefully

It will not happen auto-magically... You will need to re-configure your replication / backup for the main server to point to the proper place in the new pool.
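If you were replicating by hand rather than through a GUI replication task, the re-pointing would just be a new target in the send/receive pair, something along these lines (hostname, pool, dataset and snapshot names are all assumptions):

zfs send -R tank/data@latest | ssh backup-host zfs recv -F nas2/data   # push the snapshot to the new location

With a GUI replication task it is the same idea: the task's target dataset has to be updated to point at the dataset on the new pool.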

Because you are doing cold storage only, I would go RaidZ2. That will give you the usable space of 6 drives, so 12 TB.

I guess your main server is also RaidZ1? If so, you have 3x 3.64 TiB of storage on it, and the 12 TB will be able to host everything from the main server.

Should you wish to increase your redundancy, you could go RaidZ3. Personally, I would rather go RaidZ2 with 100% of the main server's capacity, but RaidZ3 would further increase the redundancy you mentioned as being so important, while still offering almost as much space as the main server.
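For comparison, the two layouts on your 8 drives look like this from the command line (the pool name and da0-da7 device names are assumptions; in FreeNAS you would normally build the pool from the GUI so it handles partitioning and swap for you):

zpool create nas2 raidz2 da0 da1 da2 da3 da4 da5 da6 da7   # usable space of ~6 drives, survives 2 drive failures
zpool create nas2 raidz3 da0 da1 da2 da3 da4 da5 da6 da7   # usable space of ~5 drives, survives 3 drive failures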

Backing up your data to a second server is surely a very important thing, but remember that if both servers are in the same place, a single physical incident like a fire will destroy both of them at once.

Good luck designing your new backup server,
 

vafk

Contributor
Joined
Jun 22, 2017
Messages
132
It is me again. I tried to detach the volume nas2. First I received an error (not specified), and after rebooting, detach reported OK. But after creating a new volume I receive

[MiddlewareError: Unable to GPT format the disk "da0": gpart: geom 'da0': File exists]

If I repeat creating the new volume, I receive

[MiddlewareError: Failed to detach nas2 with "zpool export nas2" (exited with 1): cannot export 'nas2': pool is busy]

After rebooting, my old volume is back. Any ideas?
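If I read the error correctly, da0 still carries its old GPT partition table. The commands I would expect to clear it look roughly like this (da0 as named in the error; I have not run them yet, so treat this as a sketch and double-check the device first):

gpart destroy -F da0                        # wipe the leftover partition table
zpool labelclear -f /dev/da0                # clear any remaining ZFS label
dd if=/dev/zero of=/dev/da0 bs=1m count=1   # last resort: zero the start of the disk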
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
Before you continue, I'd like to ask how you connected 8 drives to your N54L.
 

vafk

Contributor
Joined
Jun 22, 2017
Messages
132
Simple solution: a Dell PERC card in the first N54L connects to the cage of four additional drives housed in the second N54L.
 

vafk

Contributor
Joined
Jun 22, 2017
Messages
132
I have narrowed down the problem. After deleting all but one dataset within my "nas2" pool, the dataset "Backups" cannot be deleted.

Commands like "zpool export -f nas2" and "zpool destroy -f nas2" report "busy".

After that I tried (don't ask me why I got the idea) to create a dataset "Test" inside the dataset "Backups", and voila, both "nas2" and "Backups" disappeared from the GUI view. I rebooted FreeNAS and nothing was there. All 8 drives available for new creation!!!

I created "nas2" again and - damn - the "Bacckups" dataset is back again and cannot be deleted. Am I getting crazy or is FreeNAS crazy???
 
Last edited:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
My replication HP Microserver (backup) has 4x 1.82 TiB, and I have extended it with another 4x 1.82 TiB, so it now has a total of 8 disks.
By what means did you connect additional disks to a micro server? There are some technologies that are not very compatible with FreeNAS.
 

vafk

Contributor
Joined
Jun 22, 2017
Messages
132
Do you think the Dell PERC card to which the 8 disks are connected could be the cause of the existing volume "nas2" being "busy" and the dataset "Backups" being undeletable on drives 0-3, while the new and empty disks 4-7 have only just been attached and have not even been added to the existing, undeletable pool?

BTW, what I have discovered so far: for some reason I can delete the volume "nas2", while at the same time it is impossible to delete the dataset "Backups" because of an "exists" error.

If (and I am not joking) I name the new volume anything besides "nas2" (its previous name), I get an empty volume and can create datasets of any type, including "Backups", and delete them.

If I name the volume "nas2" (because I would like to keep my current sync jobs from server to backup working), the dataset "Backups" is back no matter what disk configuration I use (RaidZ2 or RaidZ3), and deleting it is impossible. This happens even after I created and deleted five different types of volumes. To get rid of the problem I had to give the volume a different name (currently "backup"). Then, and only then, the dataset "Backups" is not there. Now I am syncing my server "nas1" with the new "backup". I will see what it does and how it behaves once it has some data on it.
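If anyone wants to dig into this, the commands that should show whether the old "Backups" data actually survives on the disks, or only its entry in the GUI, would be roughly (with the pool re-created under its old name):

zpool import       # lists any exported or leftover pools still detectable on the disks
zfs list -r nas2   # what ZFS itself has in the pool, as opposed to what the GUI shows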
 
Last edited: