Running out of space... Suggestions?

AltecBX

Patron
Joined
Nov 3, 2014
Messages
285
Hi guys, it's been a while since I've been active on this board. I built my server over 7 years ago and now I'm finally running out of space. Over the years I've upgraded my hard drives from 6TB to 8TB to now 10TB. I'm at 92% capacity and need to know my best option to grow this server. My hardware and settings are in my signature below.

Thanks
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
The easiest solution is probably to continue replacing the drives with bigger ones, though that might be a bit inefficient.
The better solution would be to add another vdev, but this requires the physical space to do so, as well as enough data (SATA/SAS) ports and an adequate PSU. Changing the pool layout would be quite a feat considering the amount of data you hoard.
Another solution could be to assemble another machine for storing the data you access the least, freeing space on your existing pool.
I honestly don't think there are other ways that don't imply deleting data; maybe check that compression is enabled on your datasets.
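
If you want a quick check, something like this reads the compression property through the "zfs" CLI (a minimal sketch, run on the NAS itself; "tank" is a placeholder for your pool name):

Code:
# Check whether compression is enabled on a pool/dataset.
# "tank" is a placeholder pool name; adjust to yours.
import subprocess

out = subprocess.run(
    ["zfs", "get", "-H", "-o", "name,value", "compression", "tank"],
    capture_output=True, text=True, check=True,
)
name, value = out.stdout.split()
print(f"{name}: compression={value}")  # e.g. "tank: compression=lz4"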

Edit: from what I see, you have space for just another 6 drives, so you could add another vdev: that's probably the easiest and best solution for you right now. Also, with that amount of storage you could consider increasing your RAM.
 
Last edited:

AltecBX

Patron
Joined
Nov 3, 2014
Messages
285
Since its inception, I've built and used this server to host all my media. 90-95% of the time it's used to fling my media through Plex.
The rest of the time (5-10%) is used to archive data and back up to/from my other machines.

My 12U rack right now has 3U available. Upgrading the drives to 12TB or 14TB is an option, but truthfully it's very expensive. The only reason I went that route before (6TB -> 8TB -> now 10TB) is that I never had the time to really focus on the server: I'd just resilver one disk daily (12 disks) over 12 days and have the disk space I needed for the next 2-3 yrs. But now, with mostly 4K content, I'm filling up space at a quicker pace; I've gone from upgrading every 2-3 yrs to about every year. Now I'm taking time next month, over the holidays, to focus on this.

I definitely know I have to upgrade the RAM; I've just been lazy about moving everything out of the rack so I can access the inside. The drives I can just hot-swap from the front. It's a 16-bay chassis (3U Supermicro 16-bay storage server chassis). I only have 2 vdevs of 6 drives each because one of the ports doesn't work, so instead of 8 in each I went with 6 in each.

I don't want to go the new-server route because, like I said, most of the data is for my Plex server. I don't want to run 2 Plex servers as it'll degrade the user experience when watching media.

If I recall from when I built this machine, the reason I went with the on-board LSI 3008 was that I could connect it to a (blaze backplane?), not sure if that's the name. If I understand correctly, that's just a chassis (nothing inside, other than a 2-3U enclosure where I can install hard drives) that acts as an extension of my main server. Is this the best option, since I can keep using my 2 vdevs of 10TB drives and buy an additional 6 12TB drives to make another vdev? Am I understanding this correctly?
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Is this the best option, since I can keep using my 2 vdevs of 10TB drives and buy an additional 6 12TB drives to make another vdev? Am I understanding this correctly?
Yes you are, provided that you can connect those drives to your motherboard properly.
Also, since we are here: do note that booting from USB flash drives, especially cheap ones, is discouraged.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Upgrading the drives to 12TB or 14TB is an option, but truthfully it's very expensive. The only reason I went that route before (6TB -> 8TB -> now 10TB) is that I never had the time to really focus on the server: I'd just resilver one disk daily (12 disks) over 12 days and have the disk space I needed for the next 2-3 yrs.
Note that the sweet spot in $/TB right now is around the 18 TB mark, not 12-14.
You need not replace all 12 drives: upgrading just the 6 in one vdev is enough to benefit from the increased space. Or you can add a new vdev, whose drives need not be the same size as those in the old vdevs.

For the sake of safety, do note that "ZFS is not a backup" so you'd need an external backup of all this—or at least of the most critical archive data.
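
To put rough numbers on the one-vdev upgrade (a back-of-envelope sketch that ignores ZFS overhead and the TB-vs-TiB difference):

Code:
# Usable-space gain from replacing the 6 drives of one RAIDZ2 vdev.
# RAIDZ2 spends 2 drives per vdev on parity; overhead is ignored here.
def raidz2_usable(n_drives, tb_per_drive):
    return (n_drives - 2) * tb_per_drive

before = raidz2_usable(6, 10)   # one 6x10TB RAIDZ2 vdev -> 40 TB
after = raidz2_usable(6, 18)    # same vdev on 18TB drives -> 64 TB
print(f"gain from upgrading one vdev: {after - before} TB")  # 24 TB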
 
Last edited:

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
I would rephrase the problem. In my view the core question is not how to "grow" the server (i.e. the upgrade approach). Instead, the target architecture is the first thing to be clear about. E.g. do you want to stick with 2 RAIDZ2 vdevs, or go for a single RAIDZ3? On a price per net-capacity basis I guess that mirrors are not an option - but who knows.

For me that is the question that needs an answer first. Then we can talk about how to get there ...

Good luck!
 

AltecBX

Patron
Joined
Nov 3, 2014
Messages
285
I will put that on my upgrade list as well: move from a USB drive to a solid-state boot drive. (Per the recommendation.)
Also adding more memory. (Any recommendations on sites or places to buy it at a reasonable price?)

Thanks. Today I'm going to Best Buy to pick up 6 of those 18TB or 20TB drives.

Any real reason I should move my current 2 RAIDZ2 vdevs to 1 RAIDZ3? What's the benefit other than an extra parity drive?


Would either of these work to add a JBOD/backplane to my current server setup? Or is there something else you'd suggest? I'd prefer a 2U as it'll fit well in my 12U rack.
(https://www.ebay.com/itm/374358720478?hash=item572984bfde)
(https://www.ebay.com/itm/373813203819?epid=1529572029)
My goal is to add an additional 24 drives to my current setup. For the moment I want to get a JBOD and add 6 18TB or 20TB drives.
Then in the future add another 6 20TB drives, and so on. I just want to have room to grow.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
A single Z3 vdev will use one less drive for parity than two Z2 vdevs, so it's more efficient from a parity-loss POV, but your IOPS will also be cut in half. It comes down to use case - I opted for a single Z3 because, for my use case, a single data vdev supplemented by a special vdev was fast enough, and it allowed me to use a simple consumer-grade case vs. trying to make an SM836 fit in my home.

I think your current plan to replace the drives in one of your vdevs with 18TB drives one by one is the right way to go. Only consider moving to more drives when the "sweet spots" inevitably move up dramatically (in your case almost 2x). But given your growth trajectory, an SM836 or similar pro case probably makes sense, and used ones are pretty darn inexpensive, even if you have to replace the old PSUs with new Titanium ones.
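
For concreteness, here is the raw comparison for 12 drives (parity count only; actual usable space is lower after ZFS overhead, and the drive size is just an example):

Code:
# Two 6-wide RAIDZ2 vdevs vs one 12-wide RAIDZ3, 18TB drives assumed.
size_tb = 18

two_z2 = 2 * (6 - 2) * size_tb   # 4 parity drives total -> 144 TB data
one_z3 = (12 - 3) * size_tb      # 3 parity drives total -> 162 TB data

print(f"2x RAIDZ2: {two_z2} TB data, 2 vdevs (2x the IOPS)")
print(f"1x RAIDZ3: {one_z3} TB data, 1 vdev (half the IOPS)")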
 

AltecBX

Patron
Joined
Nov 3, 2014
Messages
285
I'm in the process of resilvering vdev 1 (RAIDZ2) with 18TB drives.
I'm noticing they resilver much faster than my 8TB and 10TB drives did.
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
Any real reason I should move my current 2 RAIDZ2 vdevs to 1 RAIDZ3?
I don't see any reason to.

I also wouldn't go to a 12-wide vdev, as you can run into performance issues as the pool fills. Also, with those large drives it would take an eternity to scrub the pool.

I'd stick with your current plan of replacing the drives with larger ones. It's the most cost-effective way to grow your pool until you're ready to reconfigure your server to hold more drives. Just remember that more drives means more cost to maintain and replace them as they age out over time. And once they are added to your pool they can't be removed, so you are stuck with that configuration, and the cost to maintain it, unless you destroy and recreate your pool.

That's just my $.02
 

AltecBX

Patron
Joined
Nov 3, 2014
Messages
285
I don't see any reason to.

I also wouldn't go to a 12-wide vdev, as you can run into performance issues as the pool fills. Also, with those large drives it would take an eternity to scrub the pool.

I'd stick with your current plan of replacing the drives with larger ones. It's the most cost-effective way to grow your pool until you're ready to reconfigure your server to hold more drives. Just remember that more drives means more cost to maintain and replace them as they age out over time. And once they are added to your pool they can't be removed, so you are stuck with that configuration, and the cost to maintain it, unless you destroy and recreate your pool.

That's just my $.02
I didn't think about that. I guess I'll hold off on the new JBOD and upgrade my drives for now. I'm on my 2nd resilver now; 4 more to go to complete the first vdev.

Once I'm done, I'll be upgrading from 64GB to 256GB of ECC memory. I hope this will be enough for when I'm done (2 vdevs of 6 18TB drives each, for a total of 12 drives). At the moment I'm using 67TB with 5TB free, so I have a total of 72TB of usable space, and the server is running fine.

It's crazy to think that I have 12 x 10TB = 120TB and I can only use 72TB.
At the same ratio, once I'm done upgrading I should have 12 x 18TB = 216TB and probably only be able to use 129.6TB.

So an additional gain of 57.6TB is costing me:
18TB = $300 x 12 = $3,600
256GB ECC DDR4 = $650
Total of $4,250
(~$74 per usable TB)
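
Spelling that math out (assuming the same ~60% usable ratio carries over to the new drives):

Code:
# Re-derive the cost per additional usable TB of the upgrade.
usable_ratio = 72 / 120                # today: 72 TB usable of 120 TB raw
new_usable = 12 * 18 * usable_ratio    # 216 TB raw -> 129.6 TB usable
gain_tb = new_usable - 72              # 57.6 TB of new usable space

cost = 12 * 300 + 650                  # drives + RAM = $4,250
print(f"${cost / gain_tb:.0f} per additional usable TB")  # ~$74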

Uhhh..........
 

emk2203

Guru
Joined
Nov 11, 2012
Messages
573
I didn't think about that. I guess I'll hold off on the new JBOD and upgrade my drives for now. I'm on my 2nd resilver now; 4 more to go to complete the first vdev.

Once I'm done, I'll be upgrading from 64GB to 256GB of ECC memory. I hope this will be enough for when I'm done (2 vdevs of 6 18TB drives each, for a total of 12 drives). At the moment I'm using 67TB with 5TB free, so I have a total of 72TB of usable space, and the server is running fine.

It's crazy to think that I have 12 x 10TB = 120TB and I can only use 72TB.
At the same ratio, once I'm done upgrading I should have 12 x 18TB = 216TB and probably only be able to use 129.6TB.

So an additional gain of 57.6TB is costing me:
18TB = $300 x 12 = $3,600
256GB ECC DDR4 = $650
Total of $4,250
(~$74 per usable TB)

Uhhh..........
Went through the same process last week; just waiting for the last two disks to arrive. What was your line of thought regarding the RAM? In a SOHO situation, I've thought until now that even 32GB should be plenty, regardless of disk space.
 

AltecBX

Patron
Joined
Nov 3, 2014
Messages
285
Went through the same process last week; just waiting for the last two disks to arrive. What was your line of thought regarding the RAM? In a SOHO situation, I've thought until now that even 32GB should be plenty, regardless of disk space.
My thinking on RAIDZ (ZFS) was always to use 1GB of ECC RAM for every 1TB of storage. It also depends on your datasets, which in my case are just 2 (data and media). I'm really not accessing different data at the same time, and no more than 2 users access the server at any given time. I could be wrong, but that's what I understood when building my server back in 2014; in all my research, that was the general consensus at the time. Maybe things have changed over time as people with larger pools have proven that theory wrong, or maybe something changed in the code. I hope someone with a lot more knowledge can chime in with their expertise, as I really don't want to shell out more money for additional RAM.
 

emk2203

Guru
Joined
Nov 11, 2012
Messages
573
My experience, starting with a Solaris system and ZFS back around 2009, was that even 2GB (the maximum the puny Atom CPU could manage) was enough in a home environment, with the system freezing only once in two years. That system had 4x 2TB IIRC. A server I built recently has 10x 10TB disks and 32GB RAM. No issues so far.

My HP N40L MicroServer has run for years now with 8GB RAM and 5x 3TB disks. I feel comfortable with a system having half of the recommended RAM if you are in a SOHO environment.

I think the recommendation comes from business environments where hundreds of users have the workload to match and need a fast, responsive machine as well. Maybe others can chime in to confirm or deny this theory. If the latter, I'd be interested to hear about concrete examples.
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
My thinking on RAIDZ (ZFS) was always to use 1GB of ECC RAM for every 1TB of storage
That's always been a guideline, not a rule. Workload and current RAM usage are the deciding factors for total RAM needs.
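
If you want hard numbers rather than the rule of thumb, you can read the current ARC size directly. A minimal sketch, assuming TrueNAS CORE (FreeBSD), where OpenZFS exposes the ARC stats via sysctl:

Code:
# Read the current ZFS ARC size on TrueNAS CORE (FreeBSD).
import subprocess

arc_bytes = int(subprocess.run(
    ["sysctl", "-n", "kstat.zfs.misc.arcstats.size"],
    capture_output=True, text=True, check=True,
).stdout)
print(f"ARC currently holds {arc_bytes / 2**30:.1f} GiB")

If that number sits well below your installed RAM under your real workload, more memory is unlikely to buy you much.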
 

AltecBX

Patron
Joined
Nov 3, 2014
Messages
285
I hope I'm looking in the right place.
My ZFS cache has consistently averaged around 5 GiB and Services around 8.5 GiB. My free memory averages around 50 GiB.
My long-term system load has a max of 0.22, and I notice it goes to 0.58 when streaming a video on Plex.
 
Last edited:

emk2203

Guru
Joined
Nov 11, 2012
Messages
573
That sounds like you will be in the clear with your current memory for a long, long time. Your load won't be influenced by memory changes as long as you have some of it free as a reserve.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Generally, the cache grows with time.
Memory management is different between CORE and SCALE: for starters, in the latter you can't use more than half your total RAM for caching.
If I'm not wrong there are a few tricks (aka tunables) you can use to increase cache utilization, but I can't really guide you through them.
It's probably worth a dedicated thread.
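
For what it's worth, on SCALE the cap is the zfs_arc_max OpenZFS module parameter (0 means the built-in default, which on Linux has been about half of RAM). A quick way to inspect it, assuming the standard Linux sysfs path:

Code:
# Inspect the ARC size cap on TrueNAS SCALE (Linux OpenZFS).
# zfs_arc_max = 0 means "use the built-in default" (~50% of RAM).
with open("/sys/module/zfs/parameters/zfs_arc_max") as f:
    arc_max = int(f.read())

print("zfs_arc_max:", "default (~50% of RAM)" if arc_max == 0
      else f"{arc_max / 2**30:.1f} GiB")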
 

AltecBX

Patron
Joined
Nov 3, 2014
Messages
285
Good, I'll hold off on buying more RAM then. Looks like the 64GB of ECC was a wise decision when building my rig.

While resilvering, can I add more files to the server, or is it best not to add data while resilvering?
Also, I've been replacing one 18TB drive at a time (one per day). Once I'm done with vdev 1, can I resilver 2 disks at a time when it's time to replace vdev 2?
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Good, I'll hold off on buying more RAM then. Looks like the 64GB of ECC was a wise decision when building my rig.

While resilvering, can I add more files to the server, or is it best not to add data while resilvering?
Also, I've been replacing one 18TB drive at a time (one per day). Once I'm done with vdev 1, can I resilver 2 disks at a time when it's time to replace vdev 2?
It's best to let the resilver do its thing, but you can continue to use the server as you normally would (although with performance drops).
You can do 2 at a time with RAIDZ2, but you lose any safety margin: if you lose a disk while resilvering 2 others, you lose your pool.
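
The margin arithmetic is simple enough to spell out:

Code:
# Additional failures a RAIDZ2 vdev can absorb mid-resilver,
# assuming the drives being replaced are already pulled.
PARITY = 2  # RAIDZ2
for replacing in (1, 2):
    margin = PARITY - replacing
    print(f"resilvering {replacing} disk(s): "
          f"{margin} further failure(s) tolerated")
# resilvering 1 disk(s): 1 further failure(s) tolerated
# resilvering 2 disk(s): 0 further failure(s) tolerated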
 