BUILD New build - power vs. memory capacity

Status: Not open for further replies.

MagicSmoke
Cadet | Joined: May 14, 2016 | Messages: 4
I live in an apartment (1-user), so power usage and heat output are kind of an issue. I can really only dole out 700W continuous or so to bulk storage without doing something drastic like running an extension cord to the kitchen for more power. I'm a student, so large one-shot purchases are kind of a pain to arrange. I'm also in Canada, so there are no great cheap shipping deals to be had.

So I've filled up my first FreeNAS server. It's pretty much a pure media server for long-term storage. The only thing it does is sit there, accept files, serve them back without transcoding, back them up to the cloud, and scrub weekly:
i3-2120, 32GB ECC, 24-bay Norco RPC-4224 with a mix of 2, 3, and 4TB spinners in 3 zpools grouped by drive size. I've also just bought an SE3016 to tack 16 more drives onto it. I already have 6 more 3TB drives (and I'll be buying 2 more to finish the zpool).

The problem is that I now have 6x3TB + 2x4TB worth of homeless internal drives lying around, filled with data, and I'll need to empty out the 6x3TB drives to actually build a zpool out of them.

I have a spare Lian Li V2000 and two 5-in-3s to stick in it, for a total of 22 bays of 3.5". I'd like to build this into another FreeNAS box. I'll likely add an expander or more at some point in the future. With previous-generation hardware at an all-time low price, yet still offering more than sufficient performance, I'm conflicted on what to get/do.

My available choices are, in increasing order of price:

1) Westmere dual socket, capped at 192GB of memory ($1k will get me a Supermicro 36-bay chassis with CPUs, motherboard, some starter memory, and controllers, and best of all it includes redundant PSUs, shipped to my door)
2) SB/IVB single-socket desktop-class, 32GB cap (this is around $800)
3) Skylake/whatever-is-current single-socket desktop-class, 64GB cap (didn't look closely because I couldn't quickly find any cheap setups)
4) SB E5-2670 single socket, 256-512GB (somewhere around $1300+)
5) Current/previous-gen single MP Xeon, 256GB-1TB [probably not far in price from option 6]
6) SB E5-2670 dual socket, 768GB (budget-busting $1700-1800+)
Also possible: waiting until the next generation of chips gets dumped from the cloud.


Currently, all my fast storage and commonly used items are on my desktop, which at some point gets archived and backed up, but that may change in the future.
I keep my systems for a long time. Due to power and heat issues, this would probably be the only other server I run for the foreseeable future (6-8 years), or until a way more efficient architecture comes out. I'll just keep adding expanders.

The Westmere system seems like the best choice to me, save for the fact that those guzzle power like it's free. Using $0.30/kWh for power consumption wouldn't be wrong, because I'm paying extra to cool it: I'm basically on AC 8 months a year in Alberta, Canada, and heat is included in the rent, so it's still wasted electricity.

In the context of future-proofing, more than 32GB seems like a requirement. Westmere's 192GB cap makes the most sense in terms of balance, but it will run much hotter than a current-gen single-socket desktop-class system: something like 100-150W more for the extra processor and the RDIMMs, I think? That's something like $400/year of wasted power.
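Quick sanity check on that figure (rough Python sketch; the 150W delta and 24/7 uptime are just my guesses):

Code:
# rough annual cost of the extra idle draw, assuming 24/7 uptime
extra_watts = 150                  # assumed delta vs. a desktop-class build
price_per_kwh = 0.30               # my local all-in rate, CAD
kwh_per_year = extra_watts / 1000 * 24 * 365
print(round(kwh_per_year), "kWh/year")                  # ~1314 kWh
print(round(kwh_per_year * price_per_kwh), "CAD/year")  # ~394 CAD/year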
I'll be on gigabit until faster interfaces draw way less power.


tl;dr: Basically my question boils down to: under what conditions would you benefit from more memory at breakpoints such as 32GB, 64GB, 96GB, 192GB, 256GB, 512GB, 768GB, or 1TB? (Or, if you could magically tell me how much memory I'll be happy with for the next 8 years.)
 

Dice
Wizard | Joined: Dec 11, 2015 | Messages: 1,410
Hello, and Welcome to the Forums.

I share some of your thoughts, though in my case it's more of a 'wow, I could've gotten a lot more bang for the buck if I'd gone a different hardware route.'
If you intend to use the machine as a host for other duties, you may be able to retire other servers that are consuming overhead power. But in reality, think twice about how much power you'd actually be "saving". I went through this question myself, ALMOST instant-buying a monster machine without knowing the power consumption details, which are indeed important. I chose to test out my current hardware (an i3-6100, roughly half a Xeon E3-1230v5) for my purposes, and it worked way better than expected. Testing my current hardware was probably my best call since getting on board with FreeNAS.

You've already realized that older gear draws more power than more recent architectures. I wonder, though: have you looked at real-world measurements? I did some research on E5-2680s and found that an idle system would consume in the neighborhood of 180-230W including 1 or 2 HDDs. That is beyond insane for any home file server! A Xeon E3-1230v5 idles at around 16W. That gives a hint of the differences at stake. If you're still concerned about RAM limitations, the E5-1620 v3 (or v4, I forget), which would enable you to get to ~512GB, consumes about twice the power of an E3-1230v5 at idle while being comparable in overall performance. The more I read and learn about server-grade hardware, the more the ever-so-popular E3-1230s look like real beasts. (Numbers are from the top of my head.)
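Annualized, those idle numbers look roughly like this (a quick sketch; the midpoint wattages are my own approximations of the figures above):

Code:
# annualizing those idle figures, assuming 24/7 uptime (rough midpoints of the numbers above)
idle_watts = {"E5-2680 system": 205, "E5-1620 system": 32, "E3-1230v5 system": 16}
for name, watts in idle_watts.items():
    print(name, round(watts / 1000 * 24 * 365), "kWh/year")  # ~1796 / ~280 / ~140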
Regarding your options: since power seems to be of real importance to you, that would, at least in my eyes, pretty much rule out any dual-socket setup and any high-core-count CPU.

Memory. What makes you look for RAM capacity in the 3 digits?
ZFS's need for memory obviously grows with pool size, but from what I've understood, the requirement is by no means linear. In our case, as single users with a fetish for storage and without the need for massively fast storage to host numerous VMs or squadrillions of enterprise users, ZFS does not tend to scale aggressively with pool size. In other words, for our use, we'll be 'fine' with rather limited amounts of RAM. A number I've encountered in the documentation is 'up to around 100TB, 32GB is needed at the very minimum to keep ZFS stable'.

For a lifespan of at least 5 years, maybe more, I would not settle for anything less than 64GB, which is also the trade-off point where you have access to really energy-efficient and cheap hardware, in particular with the higher core clock speeds that benefit Samba. (Looking ahead at what's going to happen with storage over the next few years: SSDs may have already overtaken HDDs in the 'sweet spot for price', which will pose a new set of challenges for our hardware. What is enough today for a given set of drives probably will not apply to SSDs in 5-8 years. We have no idea how fast rotating rust will fade as a valid option, which makes this upcoming storage revolution ever so difficult to predict long term.)

Reading between your lines, I’d suggest the following:
Get a cheap Skylake system: an X11SSL, pop in a single 16GB stick for the moment, and add either a G4400 or an i3-6100. This gets you a path to 64GB of RAM, gives you the lowest power consumption, and with that, probably also the most peace of mind.

If there is any need to correct or nuance my statements on ZFS, please chime in.
Cheers /
 

jgreco
Resident Grinch | Joined: May 29, 2011 | Messages: 18,680
A number I've encountered in the documentation is 'up to around 100TB, 32GB is needed at the very minimum to keep ZFS stable'.

That's ... optimistic. Probably not that wise to push past 64TB, though if you are only looking for archival class storage you could probably head on out there towards 100TB.
 

Dice
Wizard | Joined: Dec 11, 2015 | Messages: 1,410
That's ... optimistic. Probably not that wise to push past 64TB, though if you are only looking for archival class storage you could probably head on out there towards 100TB.
Thanks.
To be overly clear: are you referring to unformatted raw space, or zpool space?
 

jgreco
Resident Grinch | Joined: May 29, 2011 | Messages: 18,680
Thanks.
To be overly clear: are you referring to unformatted raw space, or zpool space?

"Yes."

Once, a long time ago, I bumped up the minimum RAM requirement from 6GB to 8GB. At the time, one of the other things I did was to vague-ify the memory "rule" because what was there was not really good.

A home user who hopes that "1GB per TB" means 1GB per TB of usable space will arrive at a reasonable conclusion for a home user usage profile because he'll assume his 4 x 4TB RAIDZ2 should have 8GB.

A pro user who sees "1GB per TB" is more likely to be wondering how much memory needs to be added to the system, and *possibly* will come to the above conclusion, but is more likely to think "RAM is cheap" and end up with 16GB. The extra RAM will make the pool go faster, which is correct for a commercial/professional setting.

The reality of it is that you can quite possibly even get away with it being 1GB per TB of *consumed* space, as long as the pool isn't so ungodly large as to totally swamp the rule. But what you're trading off here is actually speed. For example, I experimentally ran a 30TB pool on 6GB of RAM and noted that speeds dropped to a fraction of what they are at 32GB RAM. Pool probably had about 12TB of consumed space on it at the time.
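To put those three readings of the rule side by side, here's a quick sketch using the 4 x 4TB RAIDZ2 example (the 50% fill level is just an assumption for illustration):

Code:
# the "1GB per TB" rule, read three different ways (illustrative numbers only)
drives, size_tb = 4, 4                        # the 4 x 4TB RAIDZ2 example above
raw_tb = drives * size_tb                     # 16 TB raw
usable_tb = (drives - 2) * size_tb            # ~8 TB; RAIDZ2 gives two drives to parity
consumed_tb = usable_tb * 0.5                 # assume the pool is half full
print("per TB usable:  ", usable_tb, "GB")    # home-user reading    -> 8GB
print("per TB raw:     ", raw_tb, "GB")       # pro-user reading     -> 16GB
print("per TB consumed:", consumed_tb, "GB")  # bare-minimum reading -> 4GB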

There's a whole bunch of factors involved in determining how much RAM you *actually* need. But RAM's getting cheaper; just this morning I was pricing out DDR4-2400 32GB sticks at $153. WOW.

I think the biggest problem is that a 32GB platform (older E3's etc) ends up being a little scary because if you actually manage to get yourself into a crisis where the pool import wants to chew through lots of RAM, you might potentially run out.
 

Dice
Wizard | Joined: Dec 11, 2015 | Messages: 1,410
Heheh...
I kind of read between the lines to pick up the point.

I thought about the path of upgrading during today's evening walk. It's a topic that's been bothering me since the day I found out about ebay'ed E5's...
Other than RAM limitations, there are other aspects that come into play which sometimes get neglected next to RAM capacity. If the goal is to cram a boatload of drives into a single box, the PSU becomes another bottleneck that can't be ignored, especially once you're deep into the land of 20+ drives, where regular ATX PSUs tend to no longer be applicable. So I came to think about 'at what point would it be better/easier/cheaper to get a second box rather than one large one?', given all the problems that arise at the same time. From that perspective, memory is one factor, but PSU sizing comes into play big time too, skyrocketing the budget requirements, which would make a move from E3s to E5s a lot less desirable than at (my) first glance.
Playing around with an example: using 4TB drives in 8-drive-wide RAIDZ2 vdevs, filling a 24-slot box would generate about 60TB of usable space, give or take. At that point you'd be a happy camper on a 64GB E3 system. Projecting a couple of years ahead and doubling the drive size, things look a bit different...
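Quick math on that figure (a rough sketch; assuming the 24 bays are filled with three 8-wide RAIDZ2 vdevs):

Code:
# 24-bay box filled with 3 x (8-wide RAIDZ2) vdevs of 4TB drives
vdevs, width, parity, drive_tb = 3, 8, 2, 4
data_tb = vdevs * (width - parity) * drive_tb            # 72 TB of data-drive capacity
data_tib = data_tb * 1e12 / 2**40                        # ~65 TiB before ZFS overhead
print(data_tb, "TB raw data /", round(data_tib), "TiB")  # call it ~60 usable, give or take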

Enough rambling for tonight,
 

jgreco
Resident Grinch | Joined: May 29, 2011 | Messages: 18,680
At the point where you're over a dozen drives, you really ought to be looking at the rack mount chassis designs, simply because they'll be built to handle the drive situation with less drama.

The eBay E5s are indeed attractive, and one has to consider that even if you were to invest in some Xeon D, the price on those is unforgiving, and they're non-upgradeable, which is a real annoyance. On the other hand: power, heat... those things suck.
 

MagicSmoke
Cadet | Joined: May 14, 2016 | Messages: 4
I have looked at real-world measurements; there's such a huge variety of configurations that they're hard to compare directly.

My current 24-bay i3-2120 box is idling at 192W from the wall. The drives are probably taking slightly more than half, the fans are around 35-40W, and 4 sticks of DDR3 are around 10-16W or so, leaving around 40W for the processor, motherboard, 2 M1015s, and the expander card.
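Roughly, the idle budget works out like this (the per-component numbers are my own estimates):

Code:
# splitting up the 192W wall reading at idle (component figures are estimates)
wall_w   = 192
drives_w = 100   # a bit more than half of the total, for 24 idling spinners
fans_w   = 37    # middle of the 35-40W range
ram_w    = 13    # 4 sticks of DDR3, middle of 10-16W
rest_w   = wall_w - drives_w - fans_w - ram_w
print(rest_w, "W left for CPU + motherboard + 2x M1015 + expander")  # ~42W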
It currently has ~80TB of raw drives attached for 48TiB or so of actual storage. With the new 16-bay expander, it will get 24TB (plus 8 future drives) of extra raw capacity. If possible, I'll probably add another expander to it later; the SE3016 can be had for a really great price these days.

It doesn't look like desktop-grade platforms will increase in memory capacity anytime soon. I'll probably add more drives, so more memory seems like a better idea, unless 250-400TiB of actual storage on 64GB of RAM sounds viable? The competition will probably be between the single-socket E5 and Westmere systems, so trying to find a really cheap E5 is probably the most ideal.

mrrackables on eBay has Supermicro systems with Nehalem internals for dirt cheap, which is the only real reason I'm considering Westmere at all. If you deduct the cost of the chassis and controllers, you end up with a free computer.
It's essentially buying an expander chassis earlier than I need it, rather than buying an actual system.

For the future, it wouldn't just be the base 22 slots. It'd have one or more Supermicro expanders hooked up to it, with 40-80 drives potentially hanging off of it. (I don't actually need all of my storage to be online at any given time, so I'd probably just leave some of it offline.)
 

jgreco
Resident Grinch | Joined: May 29, 2011 | Messages: 18,680
I think the question has to be, how pissed off will you be if you put 400TB of HDD on a 64GB system and then find yourself unable to import the pool at some point?
 

MagicSmoke
Cadet | Joined: May 14, 2016 | Messages: 4
I think the question has to be, how pissed off will you be if you put 400TB of HDD on a 64GB system and then find yourself unable to import the pool at some point?

I use a bunch of tiny (8-10 drives or fewer), separate pools for storage; would that be an issue? Right now, I have 48TiB across 3 pools (10, 8, and 6 drives).
 

jgreco
Resident Grinch | Joined: May 29, 2011 | Messages: 18,680
Well, the upside is that that's going to be a lot easier to recover from on a small RAM system if you run into one trashed pool that's having problems with importing.

The downside is that having multiple pools is going to be very stressy on the host platform, so you could be more likely to get into a situation where you run out of some critical memory resource and the system panics.

That second one is somewhat hypothetical, but ZFS pools essentially rob space in memory from each other.
 

Dice
Wizard | Joined: Dec 11, 2015 | Messages: 1,410
For the future, it wouldn't just be the base 22 slots. It'd have one or more Supermicro expanders hooked up to it, with 40-80 drives potentially hanging off of it.
I'll probably add more drives, so more memory seems like a better idea, unless 250-400TiB of actual storage

I didn't realize your plans were so far-reaching in terms of size.
In the case you are sort of describing, 64GB won't do for a 'central host' feeding a bunch of JBOD boxes. Definitely not with larger drives.
While it is cool to imagine a host system with several JBODs hanging off it, it is probably worth questioning the viability of this model vs. building several independent rigs, in particular since you've specifically mentioned there is no need to have all data online.
As part of my suggestion above, my argument bounces off the idea that the increased overhead cost of motherboards/RAM capacity that can handle such amounts of storage could <potentially> be offset by building less advanced individual boxes over time.

I followed the 36-bay sale from mrrackables and noticed one got picked up just a while after your post... was that you? ;)
 

MagicSmoke
Cadet | Joined: May 14, 2016 | Messages: 4
Well, a chassis plus expander is 22+16 bays with an SE3016, or 22+(36, 44, or more) with an SC847.

I haven't bought a 36-bay yet.

If I had multiple systems, I'd just have a bunch of systems depreciating in value. It would also (likely) cost more, as well as increase costs for side resources such as UPSes and networking.
 