Can my current system handle my new hard drives?

EvanVanVan

Patron
Joined
Feb 1, 2014
Messages
211
A few years ago I replaced my original 8x3TB drives with the current 8x10TB drives (in RAIDZ2)... I wasn't thrilled having spent a shitload of money to end up with a pool that was already 25% full, but I was hopeful it would take me a while to fill the space. I wasn't expecting an alert warning me about "optimal pool performance" once the drives hit 80% usage though (or to start hoarding 4K remuxes lol). I don't want to do that again lol.

I just bought 8x18TB drives to add as a new pool in RAIDZ2, in addition to my existing 8x10TB drives. But now I'm realizing that my existing 32GB of RAM probably isn't sufficient. You can see my current hardware in my signature. Unfortunately, this motherboard also maxes out at 32GB of RAM.

So should I upgrade everything? And if so, how much RAM should I get for approximately 168TB of storage between the two pools?

I already bought another case, a Fractal Define 7 XL, which holds 18 drives. I was planning on moving everything over and using another M1015 card for the 8 new drives. On the plus side, if I get a new motherboard and RAM, I can do full system testing on the new hardware without affecting my current system before moving the drives over.

Thanks
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
short answer: yes.

long answer:
unless you have a ton of users or VMs or dedup or something, this should work fine for a home server.
you will be, of course, limited in your ARC space but if you aren't noticing it now, it shouldn't matter.
you can plan a hardware upgrade later, and use the replaced board for a backup server or something.
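if you want to check whether the arc is actually under pressure, arc_summary (iirc it ships with CORE) will show hit ratios and eviction stats:

Code:
# overall ARC size, target size, and hit/miss ratios; a consistently low
# hit ratio under normal load suggests the ARC is too small
arc_summary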
 

EvanVanVan

Patron
Joined
Feb 1, 2014
Messages
211
Thank you.

I'm still considering upgrading, at least pricing things out lol. I'm looking at a Xeon E5-1650v4 and X10SRi-F.

I'm confused by Supermicro's tested memory list though. For DDR4 1.2V-2400 - 32GB there are only two options: the "low profile" (which seems to be EOL and unavailable) and the "very low profile" (which comes as a single stick instead of a kit). Because it's certified and manufactured by Supermicro, would I just buy 2 or 4 single sticks rather than a kit?

Am I OK looking at other RDIMM DDR4-2400 kits?
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
Am I OK looking at other RDIMM DDR4-2400 kits?
eh, i just get stuff from ebay. kingston/samsung/hynix will usually just work. trying to find things from the "supported" lists is usually just a huge pain on older boards, cuz the parts are no longer made.
Xeon E5-1650v4
not what I would get. they are somewhat niche and not really that useful afaik. can't use lrdimms either iirc
 

EvanVanVan

Patron
Joined
Feb 1, 2014
Messages
211
not what I would get. they are somewhat niche and not really that useful afaik. can't use lrdimms either iirc
Lol ok, something to think about. I had come across that one in the Hardware Recommendation PDF, and it has a decent CPU Mark score:

Xeon E5-1650 v3/v4
The Xeon E5-1650 is a popular six-core model. The v3 and v4 models use the LGA2011-3 socket. Lower core-count versions such as the E5-1620 exist. Xeon E5-16xx CPUs support single-socket systems only and do not support LRDIMMs.

But you know what, I had an LGA 2011 CPU way back when and found the same thing about it being niche lol. TY
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
I was sure to say "not what I would get". if it suits your needs, it could save you some dough.
 

EvanVanVan

Patron
Joined
Feb 1, 2014
Messages
211
Is the Supermicro X12 platform still cutting edge (in a negative way) or is it pretty well supported by this point? I've seen @jgreco advise against the latest and greatest for that reason several times before (even living on the edge with the X12 a year ago), but maybe X13 is more of a concern by now?

I think I'm upgrading to the following:

Motherboard: SUPERMICRO MBD-X12STL-F-O
CPU: Intel Xeon E-2324G
RAM: 128 GB (4x32) - Supermicro (Hynix) 32GB 288-Pin DDR4 3200 (PC4-25600) Server Memory (MEM-DR432MD-EU32)

The only issues I see (that I don't truly understand) are the lack of hyperthreading support with that CPU (though I like the iGPU), the unbuffered RAM, and the X12 platform.

Also, could someone please help me with a CPU fan for that motherboard? I have this page: https://forums.serverbuilds.net/t/official-cpu-heatsink-recommendations/4624 but am still confused about what's compatible... most of the options I checked with a YES for "backplate on 1155" don't appear to actually be compatible (and I think they should be, with an LGA 1200 socket).

Noctua's compatibility list shows none of their coolers work because of the backplate. Supermicro's own heatsink matrix shows a single X12 option (which, price aside, would be fine... I don't mind an AIO cooler if it would be beneficial), but then the Xeon E-2300 series CPUs don't even show up as supported. Any suggestions?

Thanks

 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I've seen @jgreco advise against the latest and greatest for that reason several times before (even living on the edge with the X12 a year ago), but maybe X13 is more of a concern by now?

I think the major X12 issues have been hammered out. Since the E-23xx (low end) CPUs do not have E-cores, I don't think there are any scheduler issues there. I have successfully run ESXi 7.0u3 on the E-2388G system I've been playing with (it's listed as compatible by VMware), and I'm pretty sure I also brought FreeBSD 13 up on the bare metal at one point in response to someone asking on the forum. I vaguely recall that the graphics were problematic.

Recent chat on E-core scheduler stuff here:

 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
But now I'm realizing that my existing 32GB of RAM probably isn't sufficient.
I don't see it that way. If 32GB RAM is working for you now and you are only increasing storage capacity, add the new drives and see how it works.

Here is how you know you are running out of RAM... Look at the Swap Utilization; the USED value should be zero. If you are using swap space often, I would say you do need more RAM.
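If you prefer a shell, swapinfo shows the same thing on CORE:

Code:
# "Used" should be zero (or very nearly zero) on a healthy system
swapinfo -h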
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Here is how you know you are running out of RAM...

That's how you know you are running out of RAM on a UNIX system. This has ZFS. As you add disk space to a ZFS system, it has to keep track of more and more metadata in the form of the space maps; each vdev you have is split up into about 200 regions, and the space maps are effectively a way to track freed/allocated space, using something that is similar to an append log.

The important bit here is that as a region fills, the associated space map grows, taking more memory, and ZFS wants to fill its vdevs approximately equally, so it tries really hard to keep the space map metadata in ARC. The system therefore becomes more stressed as the amount of disk space is increased, especially since ZFS has a tendency to grab entirely empty regions of free space first, because that's fastest. As the disk space fills, it becomes necessary to do more work on more space maps to find reasonable blocks of space to fill an allocation request. This is slow, and, if that space map isn't in ARC, it may also need to be pulled from the pool.
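If you want to see these structures for yourself, zdb will dump them; "tank" below is just a stand-in for your pool name, and zdb is best run on a quiet pool since it can take a while:

Code:
# per-metaslab offset, space map object, free space, and fragmentation
zdb -m tank
# doubling the flag dumps the individual space map entries too (very verbose)
zdb -mm tank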

This is why adding more disk space to a ZFS system should be coupled with more memory, and is part of the backstory behind the conventional wisdom that you should have 1GB per TB of disk space. You may be able to do with something less, but it is a good idea to shoot for that sort of scaling.
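To put numbers on it for this thread: 8x10TB plus 8x18TB in RAIDZ2 is roughly 168TB usable (224TB raw), so the conventional wisdom points at something on the order of 128-256GB of RAM, and 32GB is well short of that.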
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
Thanks for teaching that to me again. I'm just curious whether the OP will actually need additional RAM given the situation. I'd hate to tell him he should upgrade his system before he even knows for certain that the upgrade is needed. For this OP, a new motherboard is required to add more RAM.

EDIT: I still like the UNIX Swap Utilization method, even if I'm incorrect. I still feel that if you do see SWAP space being used, you know you need more RAM. Is that incorrect? And sorry for giving out misinformation. Not my goal.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I'm just curious whether the OP will actually need additional RAM given the situation. I'd hate to tell him he should upgrade his system before he even knows for certain that the upgrade is needed.

Well, if you add a TON of disk space to a system with very little RAM, it may fall over immediately (or almost immediately). You may recall that part of the reason for the 2G swap per disk on FreeNAS was to provide space to allow ZFS import to work even under RAM starvation conditions, which is a similar but not technically related problem that has to do with metadata scaling as disk space increases. Worse, as the pool fills and the space maps grow, you get to this point where Cyberjock noticed that "performance just falls off a cliff" and that is very likely because necessary metadata isn't in ARC. As long as you're fine with write performance falling off a cliff, I suppose you could just disregard it and suffer the slowness. It's annoying that it's hard to nail this down to something more specific. I came to respect the 1GB per TB thing a bit more as I ran into this on production VM FreeNAS systems and could easily experiment with RAM amounts.

still feel that if you do see SWAP space being used, you know you need more RAM. Is that incorrect?

No, that's certainly correct. If you are swapping, then it is likely ZFS has given up as much ARC as it is able to, AND you still have a memory shortfall. You are in a pile of hurt... unless you look and it's just trite stuff like sshd etc. and you only have a little bit swapped.

And sorry for giving out misinformation. Not my goal.

Let's call it incomplete information. It wasn't exactly wrong; while swapping is a sign to ADD RAM NOW!!!!, a more nuanced approach is available if you understand the dynamics of it all in a bit more depth.
 

EvanVanVan

Patron
Joined
Feb 1, 2014
Messages
211
Interesting, thank you both. I am not currently using any swap space. But I have been considering upgrading for a while now anyway lol. My current MB also doesn't have space for a 2nd HBA card in addition to my 10Gb NIC.

I'm also trying to track down WireGuard tunnel/SMB issues that recently popped up, around when I hit 80% usage (now 81%), and I'm wondering if maybe this has something to do with it. I just knocked it back to 78% to see if it helps. It kind of sounds plausible reading all of the above.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Great case.
With HDDs you don't have to worry about saturating your 10Gbps NIC.
My personal limit is 85%, but you really don't want to go near 90%.
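You can keep an eye on it from the shell:

Code:
# CAP is the fill percentage; FRAG creeping up is another early warning sign
zpool list -o name,size,alloc,free,cap,frag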
Having 64 GB of RAM will also allow you to consider using L2ARC: unlocking doors is always a good thing.

As a side note, 8x18 TB drives in RAIDZ2 makes me feel... slightly anxious.
 

EvanVanVan

Patron
Joined
Feb 1, 2014
Messages
211
Why would you need a 2nd HBA card? With two SAS expanders, a single HBA can handle several dozen drives.... just sayin'
Huh, maybe I'm mistaken on the terms or something lol... I bought a 2nd M1015 card and two 4-SATA breakout cables (https://www.amazon.com/Cable-Matters-Internal-SFF-8087-Breakout). That's only 8 drives each, no? (That's what I have working currently, and I just bought the same stuff.)
Great case.
With HDDs you don't have to worry about saturating your 10Gbps NIC.
My personal limit is 85%, but you really don't want to go near 90%.
Having 64 GB of RAM will also allow you to consider using L2ARC: unlocking doors is always a good thing.

As a side note, 8x18 TB drives in RAIDZ2 makes me feel... slightly anxious.
Nice, thank you. I guess I could consider sparing an 18TB drive for RAIDZ3 lol... I figured I would spend several weeks testing everything and burning in the drives before putting them in service though lol.
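For the burn-in I'm planning the usual per-drive routine, something like this on each disk (badblocks is destructive, so strictly before any pool exists; da0 is just an example device):

Code:
# four destructive write/verify passes, then a long SMART self-test
badblocks -b 4096 -ws /dev/da0
smartctl -t long /dev/da0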
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Nice, thank you. I guess I could consider sparing an 18TB drive for RAIDZ3 lol... I figured I would spend several weeks testing everything and burning in the drives before putting them in service though lol.
I did the math on the fly (based on this)... considering a URE rate of 1e-15 and a drive failure rate of 3%, at 80% full (14TB per drive) the pool's probability of data loss should be no lower than 55%.

And WD's RED PRO disks have a URE rate of 1e-14. Source.
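For the URE term alone, the usual back-of-the-envelope is P ≈ 1 − (1 − URE)^(bits read): pulling ~14TB (≈1.1×10^14 bits) off a surviving drive at 1e-15 gives about a 10% chance of at least one URE on that drive, and at 1e-14 it jumps to roughly two thirds.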
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
That's only 8 drives each, no?

No. Each of your SFF-8087 connectors on the HBA can separately attach to a SAS expander, such as the classic Intel RES2SV240 24-port jobber; your uplink to the HBA takes up 4 ports, so each expander can then talk to 20 drives, making your HBA capable of 40 drives total. Don't be fooled by the PCIe connector; it's only used for power, and you can tape these things inside a chassis and power them separately if you don't have available slots. See the SAS Primer and search for "expander" --


Your HBA will have some ultimate limit to the number of SAS devices it can handle in the topology; some older ones are capped at 32 or 64, but newer ones such as this 3008 can handle 240 (255 actually, but you want to leave a few available for enclosure mgmt and stuff):

Code:
# sas3ircu 0 display
Avago Technologies SAS3 IR Configuration Utility.
Version 16.00.00.00 (2017.04.26)
Copyright (c) 2009-2017 Avago Technologies. All rights reserved.

Read configuration has been initiated for controller 0
------------------------------------------------------------------------
Controller information
------------------------------------------------------------------------
  Controller type                         : SAS3008
  BIOS version                            : 8.37.00.00
  Firmware version                        : 16.00.10.00
  Channel description                     : 1 Serial Attached SCSI
  Initiator ID                            : 0
  Maximum physical devices                : 255
  Concurrent commands supported           : 3072
  Slot                                    : Unknown
  Segment                                 : 0
  Bus                                     : 11
  Device                                  : 0
  Function                                : 0
  RAID Support                            : No
------------------------------------------------------------------------


So you can just keep daisy-chaining these suckers. You are limited by the 24Gbps SAS link speed of the SFF-8087 uplink, though. You can find SAS expanders with more than 24 lanes, which is probably the way to go if you want to get crazy like this.
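The arithmetic on that limit: an SFF-8087 uplink is 4 lanes of 6Gbps SAS2, 24Gbps total, or roughly 2.4GB/sec after encoding overhead, so on the order of a dozen HDDs streaming at ~200MB/sec apiece will saturate a single uplink.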
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
Why would you need a 2nd HBA card?
Huh, maybe I'm mistaken on the terms or something lol... I bought a 2nd M1015 card and two 4-SATA breakout cables (https://www.amazon.com/Cable-Matters-Internal-SFF-8087-Breakout). That's only 8 drives each, no? (That's what I have working currently, and I just bought the same stuff.)
there's nothing wrong with this, just that there are other options available in the SAS topology. it ties up a second pcie slot, but if you have lots *shrugs*
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
it ties up a second pcie slot, but if you have lots *shrugs*

Poster already indicated "current MB also doesn't have space for a 2nd HBA card" .... which is why I was explaining you really don't need a 2nd HBA (unless you're up in the hundreds of drives).
 