supermicro chassis build

wajces

Cadet
Joined
Dec 23, 2019
Messages
3
I've run out of physical space and have been looking into the Supermicro chassis. I'm having a lot of analysis paralysis, and all this business-grade gear is completely new to me.

I live in Australia, which makes sourcing a suitable case difficult and expensive, since shipping is US$400. I keep leaning towards the 36-bay CSE-847 chassis, and the only real option seems to be this one, as they said they don't sell the barebones case (with vertical half-height slots).


This is a backup PC, which is only turned on every couple of weeks to sync. My current tower only has two expansion cards: the HBA and a 40GbE NIC.

What options should I change within the listed configuration?

I figure I want at least 4x the processor power of my presently bottlenecked 2c2t Celeron G3930. So perhaps ask for either dual 6-core v2 CPUs (E5-2620?), or a single 8/10-core v2 CPU?
32GB of RAM sounds like enough for a backup PC, even though it's listed with 64GB. Should I go single CPU with less RAM, or stick with smaller dual CPUs and 64GB?

Should I be asking for the smaller 920W quiet PSU?

Should I change anything else? Should I just go with a cheap consumer board and an older-gen Ryzen, given their insane value/$ these days? I realize these cases are very loud, and will do what I can to quieten it and monitor the effects on HDD temps.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
older-gen Ryzen, given their insane value
no, Ryzen is usually desktop grade, not server grade.
if this is just a backup server you don't need 2 procs at all, it's a waste of power, though if you are only turning it on weekly that wouldn't matter much
 

geekmaster64

Explorer
Joined
Mar 14, 2018
Messages
50
I've run out of physical space and have been looking into the Supermicro chassis. I'm having a lot of analysis paralysis, and all this business-grade gear is completely new to me.

I live in Australia, which makes sourcing a suitable case difficult and expensive, since shipping is US$400. I keep leaning towards the 36-bay CSE-847 chassis, and the only real option seems to be this one, as they said they don't sell the barebones case (with vertical half-height slots).


This is a backup PC, which is only turned on every couple of weeks to sync. My current tower only has two expansion cards: the HBA and a 40GbE NIC.

What options should I change within the listed configuration?

I figure I want at least 4x the processor power of my presently bottlenecked 2c2t Celeron G3930. So perhaps ask for either dual 6-core v2 CPUs (E5-2620?), or a single 8/10-core v2 CPU?
32GB of RAM sounds like enough for a backup PC, even though it's listed with 64GB. Should I go single CPU with less RAM, or stick with smaller dual CPUs and 64GB?

Should I be asking for the smaller 920W quiet PSU?

Should I change anything else? Should I just go with a cheap consumer board and an older-gen Ryzen, given their insane value/$ these days? I realize these cases are very loud, and will do what I can to quieten it and monitor the effects on HDD temps.


I bought that exact listing... it'll be here Tuesday!
 

craigdt

Explorer
Joined
Mar 10, 2014
Messages
74
I myself will eventually be looking at a server upgrade as well. I was doing some looking around at 24-bay cases and, being in Australia as well, I find the choice very limiting. I have one of those TGC-8424 cases, which has turned out to be complete rubbish: way too much $$$ for what you get...

I found this the other day perusing the internet, I haven't looked into it in detail though:

 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
those look like knockoff Supermicros. quality will probably be proportional to the price. it is interesting though, but for me it shows as costing $890 CAD. i'd rather just get a Supermicro.
 

wajces

Cadet
Joined
Dec 23, 2019
Messages
3
I myself will eventually be looking at a server upgrade as well. I was doing some looking around at 24-bay cases and, being in Australia as well, I find the choice very limiting. I have one of those TGC-8424 cases, which has turned out to be complete rubbish: way too much $$$ for what you get...

I found this the other day perusing the internet, I haven't looked into it in detail though:


Thanks for the link! I hadn't thought to check Alibaba. The shipping is a fair bit cheaper ($190 vs $375), and you aren't forced into buying the mobo/CPU/RAM. Would be interesting to see if there are any problem areas. I've had great experiences with certain China-built components (RAID cards, Mean Well power bricks).

I bought that exact listing... it'll be here Tuesday!

Beat you by a weekend!


no, Ryzen is usually desktop grade, not server grade.
if this is just a backup server you don't need 2 procs at all, it's a waste of power, though if you are only turning it on weekly that wouldn't matter much

After messing around all weekend with this machine, testing configurations and trying to learn entirely new systems and software, I miss my 'desktop grade' Ryzen 3600 (primary NAS), which is far faster and simpler.

Not picking on your comment; I just see it mentioned a lot, and it doesn't seem like the differences are all that black and white. Most of my issues aren't really FreeNAS-related.


-----------------------------------------


I'll list the FreeNAS-related stuff first, in case it helps anyone in the future.

I thought I'd try out 11.3-RC2, but the new GUI completely frustrates me, and I ended up going back to 11.1 with the legacy GUI. The legacy GUI only takes a few clicks to set up a new array, correct permissions, and make it Windows-accessible, and I can get some sense of what's going on hardware-wise with the netdata display. The new GUI wouldn't let me make a new array with missing or oversize disks, and locating where the options are is still completely foreign to me.

Figuring out throughput issues was another struggle. I currently have a 12x8TB array (2 of which are actually 10TB drives I'm using as temporary stand-ins).

Initially I was hitting some bottleneck at a consistent 580-590MB/s throughput. The issue was related to having a dual-socket system: somewhere along the way, I think data gets transferred back and forth between CPUs and/or their attached DIMMs, and a throughput limit is reached. The Supermicro system was sent to me with the HBA card in the x16 slot, and, counter-intuitively, the top four PCIe slots are all attached to the second CPU. I had to move the HBA card and 10GbE card to the bottom three slots to run in single-CPU mode. Running with a single CPU solved the bottleneck issue. Even with the cards in the bottom slots, I still had throughput issues booting in dual-CPU mode.

The second CPU was consuming approx 20W at idle and 40W when writing to disk. This is with the same 8 DIMMs installed across one or both CPUs.

The end usable TiB capacity also varies wildly with the type of array configured, and I'm not just talking about which disks are lost to parity. Now that I've done initial testing, 12x8TB in raidz2 has given me 64.1TiB usable.
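A quick back-of-envelope check of where that 64.1TiB figure sits (this is my sketch, assuming "8TB" means 8×10^12 bytes and that the reported usable figure is in TiB):

```python
# Back-of-envelope capacity for a 12-disk raidz2 vdev of 8TB drives.
# Assumptions: "8TB" = 8e12 bytes (decimal marketing size); the GUI
# reports usable space in TiB (2^40 bytes).

disks = 12
parity = 2              # raidz2 loses two disks to parity
drive_bytes = 8e12

data_disks = disks - parity
raw_data_tib = data_disks * drive_bytes / 2**40
print(f"raw data capacity: {raw_data_tib:.1f} TiB")   # ~72.8 TiB

# The 64.1 TiB actually reported is lower because of raidz allocation
# padding, metadata, and pool-level reserved space.
overhead = 1 - 64.1 / raw_data_tib
print(f"implied overhead: {overhead:.0%}")            # ~12%
```

So roughly 12% disappears on top of the parity disks, which lines up with the "varies wildly" observation.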

I've read people claim you only get single-disk speed performance in raidz(x), which seems completely untrue. Splitting the array did seem to help max throughput, but I don't want to waste a heap of $ on parity disks. Here are some approximate results from transferring 200GB, going off memory:
12x1 stripe (0 parity) = 1GB/s write
2x6 stripe (0 parity) = 1.1GB/s write
2x6 raidz1 (2 parity) = 1GB/s write
2x6 raidz2 (4 parity) = 950MB/s write
12x1 raidz (1 parity) = 900MB/s write
12x1 raidz2 (2 parity) = 800-850MB/s write
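Those numbers are consistent with sequential writes scaling with the number of data disks rather than with a single disk. A rough sanity check (my sketch; the ~150MB/s sustained per-drive figure is an assumption, not a measurement, and real pools land below this ceiling due to checksumming, allocation overhead, and CPU/bus limits):

```python
# Rough upper bound on sequential write throughput per layout:
# data disks x assumed per-disk streaming speed.
per_disk = 150  # MB/s per drive, assumed

def streaming_ceiling(vdevs, disks_per_vdev, parity_per_vdev):
    """Ceiling = total data disks x per-disk streaming speed."""
    data_disks = vdevs * (disks_per_vdev - parity_per_vdev)
    return data_disks * per_disk

layouts = {
    "12x1 stripe":    streaming_ceiling(12, 1, 0),   # 1800 MB/s
    "2x6 raidz1":     streaming_ceiling(2, 6, 1),    # 1500 MB/s
    "2x6 raidz2":     streaming_ceiling(2, 6, 2),    # 1200 MB/s
    "12-wide raidz2": streaming_ceiling(1, 12, 2),   # 1500 MB/s
}
for name, mbps in layouts.items():
    print(f"{name}: ceiling ~{mbps} MB/s")
```

The "single-disk speed" claim is usually about random IOPS per vdev, not streaming throughput, which is why the measured 850MB/s-1.1GB/s sits well above one disk but below these ceilings.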

Testing read speeds has its own issues. Either the ARC alters results (hits max NIC throughput at 1.14GB/s), or my present RAID controller can't handle the sustained writes, or my NVMe drive gets heat-soaked quickly and slows down dramatically. The biggest destination ramdisk I can create would only hold 20GB, which isn't great for getting an average read speed.

Weirdly, I can write to a ZFS array at 1GB/s, yet only read back off it at 800MB/s. I have no idea what sustained read throughput would be.

Would love to do some further testing when I get to 18 drives. I really want the system(s) to scale to 40Gbit speeds.

I've begun syncing some of the data to the 12x8TB raidz2, and you can see it's done about 20TiB in just over 7 hours.
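That sync rate works out to roughly the same ~850MB/s seen in the write tests above (my arithmetic; "just over 7 hours" is assumed to be about 7h15m):

```python
# Average throughput for the sync: ~20 TiB in "just over 7 hours".
tib = 2**40
transferred = 20 * tib       # bytes moved
seconds = 7.25 * 3600        # assumed 7h15m

mb_per_s = transferred / seconds / 1e6
print(f"average: ~{mb_per_s:.0f} MB/s")   # ~840 MB/s
```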

 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
The new GUI wouldn't let me make a new array with missing or oversize disks,
pretty sure the old GUI doesn't do this either though?
 

wajces

Cadet
Joined
Dec 23, 2019
Messages
3
pretty sure the old GUI doesn't do this either though?

In regards to using missing disks, yeah, I think you are right.

If you set up a new volume and click on manual, it will let you select unmatched drives and what type of array you want. I assume it just chooses the lowest capacity and runs that across each drive added.

I didn't see how to pull it off in the latest release, except via the command line.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
you're not supposed to pull it off in the GUI; the GUI is KISS. if you want to do weird stuff you need the command line.
also, I think you are using 'volume' where 'pool' is applicable. it gets confusing if you are using the wrong terms.
lowest capacity applies per vdev; if you have multiple vdevs, their sizes may vary depending on the disks you feed it.
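that per-vdev rule can be sketched like this (drive sizes and layout here are hypothetical, just to illustrate):

```python
# Each vdev's capacity is governed by its smallest member disk;
# the pool is the sum across vdevs. Hypothetical mixed-size pool.

def vdev_data_tb(drive_sizes_tb, parity):
    """Usable data capacity: smallest disk x (members - parity)."""
    return min(drive_sizes_tb) * (len(drive_sizes_tb) - parity)

vdev_a = [8, 8, 10]     # the 10 TB disk only contributes 8 TB here
vdev_b = [10, 10, 10]   # all capacity used

pool_tb = vdev_data_tb(vdev_a, parity=1) + vdev_data_tb(vdev_b, parity=1)
print(pool_tb)  # 16 + 20 = 36 TB of data capacity, before ZFS overhead
```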
 