Which RAID with 8-8-4-4 TB Drives


Jrod696

Explorer
I am still waiting on my mobo to arrive in the mail, but I have been trying to do as much research as possible before I begin my FreeNAS journey.
Where I am most confused is the storage setup: I have 2x8TB WD Reds and 2x4TB WD Reds.

My primary concern is data redundancy in case of a drive failure, but I would like to get as much storage out of the drives as possible. Which RAID setup do I use to achieve this? Is there a tutorial I can read up on, so that when I go to set things up I don't rip out the four hairs I have left on my head?
 

gpsguy

Active Member
Two sets of mirrors striped together is probably your best option. You could lose one drive in either set (vdev) and not lose any data.

If you put all your disks in RAIDZ2, you could withstand the loss of any two disks. But your 8TB drives would effectively be reduced to 4 TB. If you were to replace the 4TB drives with 8TB ones in the future, you would reclaim the lost space.
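Rough usable-capacity math for the two layouts, ignoring ZFS overhead and TB-vs-TiB rounding:

striped mirrors: min(8,8) + min(4,4) = 8 + 4 = 12 TB usable
RAIDZ2 (4 disks): (4 - 2) x smallest disk = 2 x 4 = 8 TB usable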

Search the forum for "cyberjock's guide". Or read the fine manual. :smile:
 

Jrod696

Explorer
If I understand correctly, stripe an 8 and a 4 together to make 2x12TB, and then mirror the 2x12TB into 1x12TB?
 

wblock

Documentation Engineer
The other way around: one 8TB mirror vdev and one 4TB mirror vdev, striped together into a 12TB volume.
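For reference, the raw ZFS command for that layout looks roughly like this; "tank" and the adaX device names are just placeholders, and in FreeNAS you would normally build the pool through the Volume Manager rather than the shell:

# one 8TB mirror vdev plus one 4TB mirror vdev, striped into a single pool
zpool create tank mirror ada0 ada1 mirror ada2 ada3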
 

Arwen

MVP
The other way around: one 8TB mirror vdev and one 4TB mirror vdev, striped together into a 12TB volume.
Yep.

That bit me once. I had been using Solaris a lot, but with DiskSuite software mirroring (Solaris 8 & 9). Then I got handed a Solaris 10 server build with 8 disks. The root pool took 2 disks, so the application pool got 6 disks. The DiskSuite display would show 3 disks striped, then mirrored to 3 more disks in a stripe, so I was not thinking much and ended up with 2 vdevs, each a 3-way mirror. Oops. But it was an easy fix, since I had not put any data on it yet.

So, ZFS = make vdevs first, then it stripes across the vdevs.
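Roughly, the difference in ZFS terms (device names made up):

# what I ended up with: two vdevs, each a 3-way mirror - usable space of about 2 disks
zpool create appspool mirror disk1 disk2 disk3 mirror disk4 disk5 disk6
# what I meant to build, expressed the ZFS way: three 2-way mirrors striped together - usable space of about 3 disks
zpool create appspool mirror disk1 disk4 mirror disk2 disk5 mirror disk3 disk6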
 

Jrod696

Explorer
I got my hands on the manual and it's not too clear on setting up vdevs and then striping them, but I think I understand the logic at least; the next step is to find and read cyberjock's guide. I am hoping that once the mobo gets here and I get v11 installed and start using it, things will be a little clearer.
 

gpsguy

Active Member
Volume Manager is your friend. Read up on that section. The wheel icon can be moved horizontally, vertically, or both.

Since you have different-sized disks, I'd start by creating your volume (pool) by mirroring just the 8TB drives, and then extend the volume by adding a second mirror made up of the 4TB drives.

Once you're done, go to the shell and run zpool status. You should see your pool (volume) name, followed by mirror-0 with 2 drives listed under it, and mirror-1 with another 2 drives. While you can see the layout in the web GUI, zpool status is a helpful command to know.
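The output should look something like this (pool and disk names will differ; FreeNAS actually lists the members by gptid labels rather than adaX names):

  pool: tank
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0    ONLINE       0     0     0
            ada1    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            ada2    ONLINE       0     0     0
            ada3    ONLINE       0     0     0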
 

Arwen

MVP
I got my hands on the manual and it's not too clear on setting up vdevs and then striping them
...
The striping is automatic with ZFS. If you have more than one vdev, it stripes writes across them. Reads, of course, come from whichever vdev holds the data.

That said, there is a slight exception, which sort of applies in your case. ZFS will favor writing to the 8TB mirror vdev as long as it has more free space. Hence the recommendation to have equal-sized vdevs (but that's more for heavy-duty servers).

You can use zpool list -v to see each vdev's free and used space.
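Something like this, with sizes made up and the extra columns trimmed (the exact set of columns varies between ZFS versions):

NAME         SIZE  ALLOC   FREE
tank        10.9T  5.21T  5.66T
  mirror-0  7.25T  4.80T  2.45T
  mirror-1  3.62T   410G  3.21T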
 

Jrod696

Explorer
So if I put the 2x8s in a vdev first, transfer my data to them, and then add the 2x4 vdev after I get the data off those drives, will it erase the 8s when it stripes? Also, if it does not format the 8s, will it move data across, or, by the logic I get from Arwen, will it just leave it where it is? Since I am not worried about performance as much as dependability, I'm not too worried about where it puts the data; in fact, if it cascaded the data so it used all of the 8TB first and then moved on to the 4TB, I would be fine with that too.

Another question I can't seem to figure out, and the manual only touches on it, is storage for plugins, jails, and VMs. I do know I want the drives to spin down when not in use (not sure yet if that's the default or something I have to set up). I know for sure I will be running a MySQL server and probably a Pi-hole server as well. The MySQL can be done in a FreeBSD jail, but I haven't quite figured out the Pi-hole yet; as far as I can tell it will take a VM running Debian. To get the drives to spin down, could I add a 32GB USB 3.0 flash drive and use that for storage, or what about the OS flash drive? I also have a new 250GB WD Blue I could add.

I guess what I am asking is: where do the jails and VMs get stored, and what is best practice?
 

xCatalystx

Contributor
I guess what I am asking is: where do the jails and VMs get stored, and what is best practice?
In 11.0 using the old UI, when you first go to set up jails it asks which datastore you want to set as the root, so in this case I would pick the pool that you want up 24/7.
TBH I tried doing this, but depending on the jails you will be running, I would argue it's not worth it and it's easier just to build a more energy-efficient system.

Things like Sonarr, Plex, etc. also tend to keep drives awake unless configured very specifically.

I would also consider holding out for 11.1 GA before setting up plugins/jails. It will save you some time migrating from the old system; you can just set up on the new iocage system. Someone else can confirm this.

I haven't quite figured out the Pi-hole yet
I did have it running in a jail at one stage, but have since moved it to a dedicated lightweight VM built on Alpine Linux. A VM running Docker might also be a good alternative.
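If you go the Docker route, the official Pi-hole image is roughly a one-liner to try out (the flags and variables below are just the common ones; check the image's documentation for the current list):

# hypothetical example - DNS on 53/tcp+udp, admin web UI on 80
docker run -d --name pihole \
  -p 53:53/tcp -p 53:53/udp -p 80:80/tcp \
  -e TZ=America/New_York \
  pihole/pihole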
 

Arwen

MVP
So if I put the 2x8s in a vdev first, transfer my data to them, and then add the 2x4 vdev after I get the data off those drives, will it erase the 8s when it stripes? Also, if it does not format the 8s, will it move data across, or, by the logic I get from Arwen, will it just leave it where it is? Since I am not worried about performance as much as dependability, I'm not too worried about where it puts the data; in fact, if it cascaded the data so it used all of the 8TB first and then moved on to the 4TB, I would be fine with that too.
...
Once data is written by ZFS, it does not normally move. Thus, if you have a single pair of disks in a mirror and later add another pair, the original data stays where it is.

And no, adding additional disks does not erase the original disks.

At present, ZFS will not re-stripe existing data when you add more vdevs / mirror pairs. Nor can you remove disks or vdevs.

ZFS is different, and it does have both quirks and limitations (disk / vdev removal is a highly desired feature that ZFS does not have).
 

danb35

Hall of Famer
(disk / vdev removal is a highly desired feature that ZFS does not have)
...though single-disk vdev removal is supposed to be coming.
 

Jrod696

Explorer
So I got all of the parts in and FreeNAS installed, and even managed to figure out how to get it to do everything I wanted, but I have another question about this vdev. I only got one of the 8TB drives in the mail and am still waiting on the other. I installed the one I have and it works, but I noticed that when I mounted it, at the bottom it clearly said in big red writing that all data will be lost. Does that mean that if I put data on it now and then get the second drive to mirror it, it will wipe all the data and I will have to start over? Also, if that is the case, and one of the mirrored drives goes bad and you put in a replacement, does that mean you still have to start over?
 

danb35

Hall of Famer
You have a lot of reading to do. There's still no way in the GUI to add a disk as a mirror of another (though there should be; it's a trivially easy operation in ZFS), so if you want to make a single-disk pool and then add another disk to mirror the first, adding the disk would need to be done at the CLI. Much better to create the pool mirrored to begin with.
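For reference, the underlying ZFS operation is a single zpool attach, roughly like this with placeholder names (on FreeNAS the pool members are normally gptid labels, so check zpool status first):

# attach a second disk to the existing single-disk vdev, turning it into a two-way mirror
zpool attach tank ada0 ada1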

As to the warning in the Volume Manager, it's a little misleading. All data on the disks being added to the pool will be lost. If you're extending an existing pool, the data already there won't be harmed.

And you never, never, NEVER use the Volume Manager to replace a failed/failing disk.
 

Jrod696

Explorer
Got it, so just be patient and transfer all my data over once both drives are installed, and should the time come that I have to replace a drive in the vdev, use the console and do my homework first...

I also agree I have a ton more reading to do; the problem I am running into is that, honestly, the manual is kind of vague on some details, so I have to go hunting down alternative sources.

Thank you, everyone, for helping me this far and answering what I am sure are dumb questions.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
should the time come that I have to replace a drive in the vdev, use the console and do my homework first...
No, you can replace disks through the GUI. Somebody should write a resource about that. What you can't do through the GUI is turn a one-disk vdev into a two-disk mirrored vdev.
 

Stux

MVP

Jrod696

Explorer
Thanks for the follow-up link; it looks doable if you are in a situation where the data is already there. Since in the grand scheme of things I am going to be moving a little over 9TB, I can wait the extra two days to set it up right on the first go-around, just in case. I'm still a newb, and I can see myself being pretty irritated if I have to do another 7-hour data transfer to the new NAS because I missed something or fat-fingered a drive ID.
 

Jrod696

Explorer
Joined
Nov 20, 2017
Messages
52
OK, so I am still waiting on my last 8TB drive, but I have been reading through forums and manuals and found another option: could I not leave the two 8TB drives the way they are, vdev the 4TB drives, and then set up a RAIDZ1? Wouldn't that give me 16TB instead of 12TB, and still have redundancy on my data and a small performance boost, or is this a horrible idea to get an extra 4TB out of my setup?
 