Advice on first time storage setup/design


Eds89
Contributor · Joined: Sep 16, 2017 · Messages: 122
Hi,

I'm building my first FreeNAS system and am hoping for a bit of advice on how to architect the storage for my scenario.

I currently have an LSI hardware RAID controller running several volumes spread over the following disks:
4x 2TB WD
4x 3TB WD
4x 4TB WD
6x 2TB Hitachi

Some of these disks are available to connect straight into FreeNAS, with the others becoming available as I move data from one setup to the other.
I spread my data out onto these different arrays based on what they hold: media, applications, VMs, etc.

My understanding of ZFS and FreeNAS is that I can only have 1 volume (pool), which will contain multiple vdevs. If a vdev fails, the entire volume fails.
My questions come down to:
  • If I wanted to use RAID 10 equivalent layouts for each of the sets of disks (e.g. 4 disks in RAID 10), could I have several RAID 10 vdevs in the volume?
  • How would my data be spread out amongst these vdevs? Would I be able to create shares within a vdev specifically, or does the data just get assigned to somewhere on the volume?
  • Is it correct that once I create a 4-disk RAID 10 vdev, I cannot add another mirror to the vdev, so the only way to increase the storage is to replace all the disks in the vdev?
  • Once a vdev is added to a pool, it can never be removed?
  • Can you have multiple pools instead, each with its own vdev?
  • Is the SLOG assigned to the volume or to the vdev? If to the vdev, can a SLOG drive be assigned to multiple vdevs, or does it need to be partitioned, with each partition assigned to a vdev?
I basically want to try to architect my storage so that each set of disks I am using has a clearly defined purpose, while staying as flexible as possible.

Thanks in advance for your advice.
Eds
 

danb35
Hall of Famer · Joined: Aug 16, 2011 · Messages: 15,504
I can only have 1 volume (pool)
No, you can have as many pools as you like. Normally it makes the most sense to have just one pool though.
once I create a 4-disk RAID 10 vdev
You wouldn't create a four-disk RAID10 vdev; you'd create two vdevs, each consisting of two mirrored disks. Those two vdevs would be striped together (as are all vdevs in a pool).
Would I be able to create shares within a vdev specifically, or does the data just get assigned to somewhere on the volume?
Shares/datasets belong to a pool, not to a vdev.
Once a vdev is added to a pool, it can never be removed
Correct.
Is the SLOG assigned to the volume or to the vdev?
To the pool.
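
To make that concrete, here is a rough sketch of the layout described above in raw ZFS commands. On FreeNAS you would normally build this through the web UI's Volume Manager rather than at the shell, and the pool name and device names below (tank, ada0, and so on) are just placeholders:

    # Create a pool from two mirrored pairs (the "RAID10" equivalent);
    # ZFS stripes all vdevs in a pool together automatically.
    zpool create tank mirror ada0 ada1 mirror ada2 ada3

    # Datasets (and the shares on top of them) belong to the pool, not to a vdev.
    zfs create tank/media
    zfs create tank/vms

    # If a SLOG is ever added, it likewise attaches to the pool as a whole.
    zpool add tank log ada4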
 

Eds89
Contributor · Joined: Sep 16, 2017 · Messages: 122
Ok great, that all makes sense.

Can you clarify why the recommendation would be to have one pool containing all vdevs, when it seems to introduce the concern that one failed vdev knocks out the entire pool?
Are there any reasons to avoid having multiple volumes? That is what I am leaning towards, so the setup behaves in a similar fashion to my current hardware RAID arrangement.

If I wanted to use one drive as a SLOG for each of my volumes, would I simply need to partition the disk first and assign one partition to each volume?

Thanks for your help.
 

danb35
Hall of Famer · Joined: Aug 16, 2011 · Messages: 15,504
A large part of the purpose of ZFS is to provide pooled storage, which can be very convenient for many reasons.
If I wanted to use one drive as a SLOG for each of my volumes
Don't.
 

pschatz100
Guru · Joined: Mar 30, 2014 · Messages: 1,184
A four-disk RAID10 is equivalent to two mirrored pairs, correct? So you could create a pool with two mirrored vdevs that would be functionally equivalent to RAID10. Of course, you can also create a pool with four disks in RAIDZ2. If the disks are all the same size, you will end up with the same capacity either way.

So what is the difference? With mirrored vdevs, you can survive the loss of two disks as long as it's one disk in each vdev; if you lose two disks in the same vdev, your data is gone. With RAIDZ2, you can survive the loss of any two disks.

Which is better? Some people think mirrored vdevs are better: they give slightly better performance, and resilvering runs more quickly when replacing a failed disk. Others prefer the four-disk RAIDZ2 because it is "safer" in the sense that any two disks can fail without losing data.
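
As a rough illustration of the two layouts being compared (these would normally be created through the FreeNAS GUI; the pool and device names are hypothetical):

    # Option 1: two mirrored pairs striped together (the RAID10 equivalent)
    zpool create fastpool mirror ada0 ada1 mirror ada2 ada3

    # Option 2: the same four disks as a single RAIDZ2 vdev
    zpool create safepool raidz2 ada0 ada1 ada2 ada3

With four identical disks, both pools end up with roughly two disks' worth of usable capacity; the difference is in which combinations of failures they can survive.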
 

Eds89
Contributor · Joined: Sep 16, 2017 · Messages: 122
A large part of the purpose of ZFS is to provide pooled storage, which can be very convenient for many reasons.

Don't.

Thanks Dan.

Can you give an example of when this would be more convenient than separate pools? Again, my concern is the ability to rework the storage setup down the line if required. If I cannot remove a vdev from a pool, or decide I want a different RAID type on a vdev, then I would have to destroy the entire pool and start over?

Also, what is your reasoning for not using one drive as a SLOG for multiple volumes? My IO usage is going to be really quite low on this box, so the SSD I am using has plenty of performance to act as a SLOG for multiple volumes. Is there a reason other than performance?

Thanks pschatz.

Perhaps I will use a mix of the two, depending on what data will be stored on each. I like the higher performance for VMs, and like the extra safety for media.
 

danb35
Hall of Famer · Joined: Aug 16, 2011 · Messages: 15,504
Can you give an example of when this would be more convenient than separate pools?
The thing that I've had issues with in the past has been free space fragmentation--I'll have some space free on one drive, some on another, but no one drive has enough space free for what I need to do at the moment. I then need to either shuffle stuff around, or break up what I'm trying to do. Not a problem with ZFS.

My general recommendation is a single pool unless you have some significantly different requirements for your storage, which may well come into play here.
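
As an example of the convenience, datasets carved out of a single pool all draw on the same free space, so you still get a clear segregation by data type without having to guess capacities up front. A minimal sketch, with hypothetical pool and dataset names:

    # All of these share the one pool's free space
    zfs create tank/vms
    zfs create tank/media
    zfs create tank/apps

    # Per-dataset quotas can impose hard limits if you want them
    zfs set quota=2T tank/vms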
If I cannot remove a vdev from a pool, or decide I want a different RAID type on a vdev, then I would have to destroy the entire pool and start over?
Yes, that's true. ZFS requires a bit more planning up front than some other systems.
Also, what is your reasoning for not using one drive as a SLOG for multiple volumes? My IO usage is going to be really quite low on this box
If your IO is going to be quite low, you probably don't need a SLOG in any event. What makes you think one will be called for?
 

Eds89
Contributor · Joined: Sep 16, 2017 · Messages: 122
The thing that I've had issues with in the past has been free space fragmentation--I'll have some space free on one drive, some on another, but no one drive has enough space free for what I need to do at the moment. I then need to either shuffle stuff around, or break up what I'm trying to do. Not a problem with ZFS.

Noted. This is kind of why I like the idea of separate pools, so I have a more distinct segregation of data, with each pool specifically dedicated to a data type (one for VMs, one for media, etc.). I know then that if I am running out of space on a volume, I only have to deal with that volume and don't have to think about anything else.

Yes, that's true. ZFS requires a bit more planning up front than some other systems.

Had I been deploying this in a work environment, where I'd have a bit more capital behind me, I would have no issue with doing it that way. As it is for home use and I want to keep costs down (I don't really want to do an entire rebuild again, or fork out for a whole set of new disks to move everything onto, temporarily for the rebuild or otherwise), my preference is to keep the deployment as flexible as possible, so I can remove or redo parts of it at a time.

If your IO is going to be quite low, you probably don't need a SLOG in any event. What makes you think one will be called for?

My understanding was that a SLOG is recommended for VM storage volumes served over iSCSI or NFS. That way I can keep sync writes on, and as the SSD has PLP, it gives the best performance while helping to ensure data consistency for the VMs?
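
For reference, if a SLOG does turn out to be worthwhile later, the pieces involved would look something like the following; the pool, dataset, and device names are only illustrative, and on FreeNAS this would normally be done through the GUI:

    # Force synchronous writes for the VM dataset served over NFS/iSCSI
    zfs set sync=always tank/vms

    # Attach the power-loss-protected SSD to the pool as a log device
    zpool add tank log ada6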
 

pschatz100
Guru · Joined: Mar 30, 2014 · Messages: 1,184
I think using mirrors for VMs and RAIDZ2 for storage makes good sense. Don't forget that while you cannot change the configuration of a vdev, you can increase its capacity by changing out the drives for larger ones.
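
That in-place upgrade goes roughly like this: swap the disks one at a time, letting each resilver finish, and the vdev grows once the last disk has been replaced. The pool and device names below are only placeholders:

    # Let the pool grow automatically once every member disk is larger
    zpool set autoexpand=on tank

    # Replace each old disk with a bigger one, waiting for the resilver
    # to complete before moving on to the next disk
    zpool replace tank ada0 ada8
    zpool status tank   # watch resilver progress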

I would not be concerned about a SLOG right now. Get your system up and running, and stable. You can always add a SLOG later (if you need it).
 