Expansion Question

Status
Not open for further replies.

we7313

Dabbler
Joined
Nov 17, 2015
Messages
40
I'm in need of expanding my existing FreeNAS machine, so I bought a Promise Vtrak 24-drive-bay enclosure on eBay.
Point of reference on the Vtrak:
https://www.ebay.com/itm/Promise-VT...e=STRK:MEBIDX:IT&_trksid=p2060353.m2749.l2649

In my current FreeNAS server I have a zpool of six 8 TB disks that holds my movie collection.
I'm looking to expand with the Vtrak.

1st Question:
Should I add a new vdev to the existing zpool (the six 8 TB disks internal to the server) using the Vtrak?
Or is it a bad idea to mix external drives with internal drives - will this slow down the entire volume?

2nd Question:
Vdev masters - I have 24 bays to fill.
What is the wisest way to start to populate this monster?
It would be nice if I could buy 4 new drives whenever a sale hits and just add a new vdev(raidz1) extending the existing volume each time.
This would give me a life of 6 upgrades (4 drives x 6 purchase points)

Primary use storing media/movies (large contiguous files).
This is a home use server with primary use being plex serving family movies to multiple clients.

Thanks in advance!
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
Should I add a new vdev to the existing zpool (the six 8 TB disks internal to the server) using the Vtrak?
Yes. No point in having a separate pool unless it is being used for an entirely different purpose like having another iSCSI pool etc.
What is the wisest way to start to populate this monster?
It would be nice if I could buy 4 new drives whenever a sale hits and just add a new vdev(raidz1) extending the existing volume each time.
RAIDZ1 is almost certainly a bad idea once disk sizes reach 4 TB and up, and the best TB/$ these days is on 4 TB or 6 TB drives. You should be using RAIDZ2. With RAIDZ2, however, a 4-drive vdev is not an optimal width because you lose 50% of the space to parity. You should go with 6- or 8-drive vdevs for the best balance; anything wider is not good either. Having more vdevs within a pool also gives you more IOPS = better read speeds -- for your use case it won't matter significantly, but you would still get a slight improvement.
6-drive vdevs will give you 4 upgrades.
8-drive vdevs will give you 3 upgrades, at the cost of buying 2 additional drives for each upgrade.
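For reference, the space math above can be sketched quickly. This is a rough, illustrative calculation assuming 8 TB drives throughout; it ignores ZFS metadata overhead, padding, and the usual keep-some-space-free guideline:

```python
# Rough usable-capacity math for filling a 24-bay shelf with 8 TB drives.
# Illustrative only: ignores ZFS overhead, padding, and free-space headroom.
DRIVE_TB = 8
BAYS = 24

def usable_tb(drives_per_vdev: int, parity: int) -> int:
    """Usable TB per vdev: data drives times drive size."""
    return (drives_per_vdev - parity) * DRIVE_TB

for width, parity, label in [(4, 1, "RAIDZ1"), (4, 2, "RAIDZ2"),
                             (6, 2, "RAIDZ2"), (8, 2, "RAIDZ2")]:
    vdevs = BAYS // width          # how many vdevs (upgrade steps) fit
    per = usable_tb(width, parity)
    print(f"{width}-wide {label}: {vdevs} vdevs of {per} TB usable "
          f"= {vdevs * per} TB total")
```

Note how the 4-wide RAIDZ2 case loses half its raw space to parity, while 6- and 8-wide RAIDZ2 keep the parity overhead at a third or a quarter.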

After all that: it seems you are using RAIDZ1 -- I'd say it's time to destroy the pool, re-create it with RAIDZ2, and restore the data from backup. After that, add 6- or 8-drive vdevs in the Promise chassis and upgrade away!
 

southwow

Contributor
Joined
Jan 18, 2018
Messages
114
@we7313 , I'm in the midst of this right now with a large collection of movies that we ran out of physical shelf space for (literally 30TB).

I started out with a single 3-drive vdev over a decade ago and have constantly added them until I filled a 15-bay case. Then, I started upgrading drives... Then, bought a larger case... then, I was completely stuck as drives began to fail everywhere (sometimes overlapping). You need to have multiple drives of redundancy. It's a massive pain, but RAIDZ2 is the safest solution currently.

I'm currently backing up the entire collection to Backblaze Personal before destroying the pool, and I have also dumped most of the MKV files onto a smaller branded NAS so that I have a backup on hand to transfer back without having to re-download the large files. Just make sure you have a backup, and bite the bullet to buy the appropriate drives... that's my only advice. When you get to this level and have data you don't want to lose, enterprise-class hardware and regular backups are the only solution.

I can't imagine having to rip each disc again at this point as it took 3 years of weekends and holidays to get the first 1000 or so backed up, boxed up, and moved to storage locker. That's a LOT of time I'll never get back!
 

we7313

Dabbler
Joined
Nov 17, 2015
Messages
40
Let me ask this:
Is there a way for me to use each drive as an independent unit, but expose them all as one logical drive to clients?
Rationale:
They are all just movies that I have collected. The movies can be replaced, but I don't want to have to replace them all (the entire zpool).
I do have the tolerance to replace one drive's worth of lost movies.
This would let me use all available disk space and just lose what was on that individual drive.

Let me know your thoughts.
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
You can have single-drive vdevs, but the second you lose one vdev, you lose the entire pool it belongs to, because in this scenario you don't have any redundancy. So it's not going to achieve what you want.
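To put a number on that risk, here is a toy back-of-the-envelope sketch. The per-drive survival probability is an illustrative assumption, not a measured figure, and it treats drive failures as independent:

```python
# Why a pool striped across single-drive vdevs is riskier than one drive:
# the pool survives only if EVERY vdev (here, every drive) survives.
def pool_survival(p_drive_ok: float, n_drives: int) -> float:
    """Probability the striped pool survives, assuming independent drives."""
    return p_drive_ok ** n_drives

p = 0.97  # assumed per-drive annual survival probability (illustrative)
for n in (1, 6, 20):
    print(f"{n:2d} single-drive vdevs: pool survives with p = "
          f"{pool_survival(p, n):.3f}")
```

With 20 single-drive vdevs the whole pool's survival odds drop to roughly a coin flip under this assumption, even though each individual drive is fairly reliable.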
 

we7313

Dabbler
Joined
Nov 17, 2015
Messages
40
Right, that is not what I am looking for.
I'm looking for a way for FreeNAS to expose a bunch of drives as one logical unit.
So basically 20 drives get exposed as one share and all appear to be one drive.
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
Right, that is not what I am looking for.
I'm looking for a way for FreeNAS to expose a bunch of drives as one logical unit.
So basically 20 drives get exposed as one share and all appear to be one drive.
No.
The most you can do is have each drive in a separate pool and mount all the pools under the same parent folder on the client. You will still have 20 separate folders under that parent folder.

I should mention that this is wrong on so many different levels.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Right, that is not what I am looking for.
I'm looking for a way for FreeNAS to expose a bunch of drives as one logical unit.
So basically 20 drives get exposed as one share and all appear to be one drive.
You're talking about file-level disk concatenation. This is slow and clumsy: you're limited to the speed of a single disk, and if you have 10 GB free overall but only 1 GB free on each of 10 disks, you can't save a 10 GB file. I think early versions of Windows offered this, and it still does in a less limited way. ZFS always stripes across vdevs. It's faster, and if you have any redundancy in your vdevs, it's way safer too.
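A toy sketch of that free-space problem, with made-up numbers and a deliberately naive allocator (real concatenation layers differ in detail, but share this limitation):

```python
# Toy model of file-level concatenation (JBOD-style): each file must fit
# entirely on one member disk, so total free space can be unusable.
def place_file(size_gb, free_per_disk):
    """Return the index of a disk that can hold the whole file, or None."""
    for i, free in enumerate(free_per_disk):
        if free >= size_gb:
            return i
    return None

disks = [1] * 10                      # 10 disks, 1 GB free each
print(sum(disks), "GB free overall")  # 10 GB free in total...
print(place_file(10, disks))          # None: no single disk fits the file
```

A striped layout doesn't hit this wall, because blocks of one file are spread across the vdevs rather than confined to one disk.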
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
No.
The most you can do is have each drive in a separate pool and mount all the pools under the same parent folder on the client. You will still have 20 separate folders under that parent folder.

I should mention that this is wrong on so many different levels.
Kinda like using a hammer as a screwdriver.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
@we7313 looks like FreeNAS may not be the best option for your storage goal. ZFS' strength is data integrity and manageability, not extreme flexibility. Linux, on the other hand, does have union file systems, which seem to do what you want.
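For illustration only, here is a toy sketch of the read side of what a union/merge layer does. Real union filesystems on Linux (e.g. mergerfs or overlayfs) implement this in the VFS/FUSE layer, not in application code, and the mountpoint paths below are made up:

```python
# Toy sketch of a union view's read path: present several independent
# drive mountpoints as one merged directory listing. Real union
# filesystems (mergerfs, overlayfs, ...) do this at the filesystem layer.
def merged_listing(branch_listings):
    """Merge per-drive listings; earlier branches win on name collisions."""
    seen = {}
    for branch, names in branch_listings:
        for name in names:
            seen.setdefault(name, branch)  # first branch with a name wins
    return sorted(seen)

drives = [("/mnt/disk1", ["a.mkv", "b.mkv"]),
          ("/mnt/disk2", ["b.mkv", "c.mkv"])]
print(merged_listing(drives))  # ['a.mkv', 'b.mkv', 'c.mkv']
```

The key property for this use case: each file still lives whole on exactly one drive, so losing a drive loses only that drive's files, while clients see a single tree.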
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
@we7313 looks like FreeNAS may not be the best option for your storage goal. ZFS' strength is data integrity and manageability, not extreme flexibility. Linux, on the other hand, does have union file systems, which seem to do what you want.
I just threw up in my mouth. :cool:
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
@we7313 looks like FreeNAS may not be the best option for your storage goal. ZFS' strength is data integrity and manageability, not extreme flexibility. Linux, on the other hand, does have union file systems, which seem to do what you want.
I just threw up in my mouth. :cool:
I would rather steer someone away from FreeNAS than have them unhappy with their NAS.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I would rather steer someone away from FreeNAS, than have them un-happy with their NAS.
I understand that. I just hate it when people make poor decisions with their data and eventually lose it.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I understand that. I just hate it when people make poor decisions with their data and eventually lose it.
Yes, but that person said they could afford to lose a disk and restore it. Not our call.

Plus, if we show that ZFS (and FreeNAS by extension) is not the best choice for this use case, then when they do lose data, that other NAS vendor or support group will have to take any complaints. A bit rude, but basically correct.

ZFS used properly won't lose data easily. But a striped pool IS NOT using ZFS properly in most cases. (That said, I've used ZFS striped pools in major production servers, built from SAN LUNs on EMC arrays. That's a whole different case: daily backups, reliable LUNs managed by someone else, not to mention real hardware monitoring.)
 

jbernie51

Cadet
Joined
Jul 3, 2018
Messages
3
@we7313, not to hijack your thread, but I've got a very similar question. However, I really need to start with the basics, as I'm still trying to figure out how FreeNAS and ZFS work to begin with. I'm also more visual, so please pardon the graphics I'm including, which I created to try to understand things.

From what I understand:
  1. Each physical drive is considered a vdev (let's call it P-vdev).
  2. We then can take a few P-vdevs and place them in a logical vdev (L-vdev) for mirror/stripe/RAID functionality (disk layout).
  3. A Volume (or zpool) can then be made in various configurations of one or more L-vdevs (or even a single P-vdev, skipping the L-vdev level).
So far so good..?

Here is where I start getting fuzzy: On top/bottom of the Volume we create Datasets.
  1. I say top, because that is what gets exposed to external systems. Since there can be multiple Datasets on a Volume, I see them as the upper-part of the hourglass...
  2. It looks bottom simply because the hierarchy tree is nesting below. (I guess if you just flip the hourglass over things would be correct either way).
Someone please tell me which is the correct term so that I don't go bumbling around...

So, I would like to start my build with 4 x 8TB drives in a RAIDZ1, exposing it as a single 24TB 'Drive' for my network (see Initial Creation). I'm not using RAIDZ2, as the truly important files are already on a 2TB RAID0 with two cloud backup services monitoring them. This would be used for a media collection; if the data gets completely fried, oh well. But I'm still using some level of safety because it's just prudent.

One cannot perform a Windows span-like function for one Dataset across multiple Volumes. Not the best practice, even in Windows, IMHO.
However, adding a second L-vdev to the Volume (original question) is possible, and best to have it as similar to the existing L-vdev in drive speed, size and layout.
Questions:
  • Does the existing data in the pool get redistributed to the new L-vdev? If yes, is the Volume in a degraded state until it's done? If not, how is the new data distributed?
  • Based on my image below, the First and Second Expansions are what the OP was wanting to do?
  • How about after doing a First Expansion and then wanting a mirror for speed/redundancy? Would I create a mirror of each L-vdev and have a Volume with two mirrors?

[attached diagrams: Initial Creation; First and Second Expansions]
 

garm

Wizard
Joined
Aug 19, 2017
Messages
1,556
Actually, not much of that is correct; you need to read up on the resources in this forum. @Chris Moore has some awesome links in his signature.

Also, you shouldn’t hijack unrelated threads. This should have been its own.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
  • Each physical drive is considered a vdev (let's call it P-vdev).
  • We then can take a few P-vdevs and place them in a logical vdev (L-vdev) for mirror/stripe/RAID functionality (disk layout).
  • A Volume (or zpool) can then be made in various configurations of one or more L-vdevs (or even a single P-vdev, skipping the L-vdev level).

1. No, a drive is a drive, a vdev is a vdev.

2. No, you take drives and put them in vdevs, and you take vdevs and put them in pools.

3. Yes, but again there's only vdevs (no P or L vdevs) so only one level.

  • Does the existing data in the pool get redistributed to the new L-vdev? If yes, is the Volume in a degraded state until it's done? If not, how is the new data distributed?
  • Based on my image below, the First and Second Expansions are what the OP was wanting to do?
  • How about after doing a First Expansion and then wanting a mirror for speed/redundancy? Would I create a mirror of each L-vdev and have a Volume with two mirrors?

1. No, but newly written data will. The emptier a vdev is, the more data will be written to it. For example, if you have one vdev with 20% free space and another with 90%, then far more data will be written to the second vdev until everything is more or less balanced.

2. Yes.

3. No. You can't change a vdev's layout (not for now, at least). And no, you can't mirror existing vdevs.
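A toy simulation of the balancing tendency described in answer 1. The real ZFS allocator is considerably more sophisticated (metaslabs, allocation throttles, and so on), so this only illustrates the proportional behavior, with made-up free-space figures:

```python
import random

# Toy model of write balancing: new writes land on vdevs roughly in
# proportion to their free space, so an emptier vdev fills faster.
# The real ZFS allocator is more sophisticated; this shows the tendency.
def pick_vdev(free_space, rng):
    """Choose a vdev for the next write, weighted by free space."""
    return rng.choices(range(len(free_space)), weights=free_space)[0]

rng = random.Random(42)
free = [20.0, 90.0]   # percent free: nearly-full old vdev vs. new vdev
counts = [0, 0]
for _ in range(10_000):
    counts[pick_vdev(free, rng)] += 1
print(counts)         # the new, emptier vdev receives most of the writes
```

Over time this skew shrinks the free-space gap between the vdevs, which is the "more or less balanced" end state described above, though old data is never moved.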


As others already posted please read the links from the signatures (especially https://forums.freenas.org/index.php?threads/comprehensive-diagram-of-the-zfs-structure.38865/ and "Terminology and Abbreviations Primer" and "Slideshow explaining VDev, zpool, ZIL and L2ARC"), it'll make things far more clear.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
not to hijack your thread, but I've got very similar question. However, I really need to start with the basics as I'm still trying to figure out how FreeNAS and ZFS work to begin with. I'm also more visual, so please pardon the graphics I'm including which I created to try and understand things.
How about you don't make up your own new terminology? Read the existing documentation and learn first:

Slideshow explaining VDev, zpool, ZIL and L2ARC
https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Terminology and Abbreviations Primer
https://forums.freenas.org/index.php?threads/terminology-and-abbreviations-primer.28174/
 