MagicSmoke · Cadet · Joined May 14, 2016 · Messages: 4
I live in an apartment (single occupant), so power usage and heat output are kind of an issue. I can really only dole out 700W continuous or so to bulk storage without doing something drastic like running an extension cord to the kitchen for more power. I'm a student, so large one-shot purchases are kind of a pain to arrange. I'm also in Canada, so there are no great cheap shipping deals to be had.
So I've filled up my first FreeNAS server. It's pretty much a pure media server for long-term storage. The only things it does are sit there, accept files, serve them back without transcoding, back them up to the cloud, and scrub weekly:
i3-2120, 32GB ECC, 24-bay Norco RPC-4224 with a mix of 2, 3, and 4TB spinners in 3 zpools, grouped by drive size. I've also just bought an SE3016 to tack 16 more drives onto it. I already have 6 more 3TB drives (I'll be buying 2 more to finish the zpool).
The problem is that I now have 6*3TB + 2*4TB worth of homeless internal drives lying around, filled with data, and I'll need to empty out the 6*3TB drives to actually build a zpool out of them.
I have a spare Lian Li V2000 and two 5-in-3s to stick in it, for a total of 22 3.5" bays. I'd like to build this into another FreeNAS box. I'll likely add an expander or more at some point in the future. With previous-generation hardware at an all-time low price, yet still offering more than sufficient performance, I'm conflicted on what to get/do.
My available choices are, in increasing order of price:
1) Westmere dual socket, capped at 192GB of memory ($1k gets me a Supermicro 36-bay chassis with CPUs, motherboard, some starter memory, controllers, and best of all, redundant PSUs, shipped to my door)
2) SB/IVB single-socket desktop-class for 32GB (around $800)
3) Skylake/whatever-is-current single-socket desktop-class for 64GB (didn't price this out because I couldn't quickly find any cheap setups)
4) SB E5-2670 single socket for 256-512GB (somewhere around $1300+)
5) Current/previous-gen single MP Xeon [probably not far from option 6] for 256GB-1TB
6) SB E5-2670 dual socket for 768GB (budget-busting $1700-1800+)
Also possible: waiting until the next generation of chips gets dumped from the cloud.
Currently, all my fast storage and commonly used items live on my desktop, which at some point gets archived and backed up, but that may change in the future.
I keep my systems for a long time. This would probably be the only other server I run for the foreseeable future (6-8 years) due to power and heat issues, or until a way more efficient architecture comes out. I'll just continue to add expanders.
The Westmere system looks like the best choice to me, save for the fact that it guzzles power like it's free. Using $0.30/kWh as the effective rate wouldn't be wrong, because I'm paying extra to cool it. I'm basically on AC 8 months a year in Alberta, Canada, and heat is included in the rent, so it's still wasted electricity.
In the context of future-proofing, more than 32GB seems like a requirement. Westmere's 192GB cap makes the most sense in terms of balance, but it will run much hotter than a single-socket desktop-class current-gen build: something like 100-150W more for the extra processor and the RDIMMs, I think? That works out to something like $400/year of wasted power.
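Sanity-checking my own math here (the 100-150W delta is my guess, not a measurement, and $0.30/kWh is my estimated effective rate with cooling factored in):

```python
# Back-of-the-envelope annual cost of a constant extra power draw.
# RATE is my estimated effective $/kWh including the extra AC load.
RATE = 0.30
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_cost(extra_watts: float) -> float:
    """Dollars per year for a constant extra draw at RATE $/kWh."""
    return extra_watts / 1000 * HOURS_PER_YEAR * RATE

for w in (100, 150):
    print(f"{w} W extra -> ${annual_cost(w):.0f}/year")
# 100 W extra -> $263/year
# 150 W extra -> $394/year
```

So the high end of my guess lands right around the $400/year figure.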
I'll be on gigabit until faster interfaces draw way less power.
tl;dr: My question boils down to this: under what conditions would you benefit from more memory at breakpoints such as 32GB, 64GB, 96GB, 192GB, 256GB, 512GB, 768GB, and 1TB? (Or, if you could magically tell me how much memory I'll be happy with for the next 8 years, even better.)
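For reference, here's the widely quoted community rule of thumb I've been working from: 8GB base plus roughly 1GB of RAM per TB of raw storage. I know it's a guideline for ARC hit rates, not a hard requirement, and the exact numbers (including my ~70TB raw estimate for a full 22-bay box of 3-4TB drives) are just my assumptions:

```python
# Rough sketch of the "8GB + 1GB per TB raw" community guideline.
# This is a sizing heuristic, not an official requirement.
def rule_of_thumb_ram_gb(raw_tb: float, base_gb: int = 8) -> float:
    """Suggested RAM in GB for a given raw pool size in TB."""
    return base_gb + raw_tb

# A full 22-bay chassis of mixed 3-4TB drives, call it ~70TB raw:
print(rule_of_thumb_ram_gb(70))  # -> 78.0
```

By that heuristic even a fully loaded chassis lands well under 128GB, which is part of why I'm unsure whether the 192GB+ platforms buy me anything.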