BUILD Supermicro 5018A-MHN4 build

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
It's an interesting choice. Don't you find the 1U format somewhat limited?
 
Joined
Oct 21, 2014
Messages
3
Limited in what way? This is for a home server for image storage and some backup. It has close to 8TB of usable space with relatively low power consumption. I can always upgrade the drives to 6TB versions for 12TB of usable space. I plan on getting a second unit to put in a data center for off-site replication and some VM storage.
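For reference, that works out to four 4TB drives in two mirrored vdevs (striped mirrors). A rough sketch of the layout, with the pool name and device names as placeholders:

    # Two mirrored vdevs striped together (device names illustrative).
    # Usable space is one drive per mirror: 2 x 4TB = ~8TB,
    # or ~12TB after swapping in 6TB drives.
    zpool create tank mirror ada0 ada1 mirror ada2 ada3
    zpool status tank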

Martin
 

areis

Dabbler
Joined
May 1, 2014
Messages
33
I have a similar setup used for photo storage (Aperture) and Time Machine backup. However, I went with the Supermicro X10SLM-F and an E3-1240 Xeon because I also run Plex. My FreeNAS server works awesome!

This community has been a great help.
  • Read the guides and howtos before you post.
  • Accept constructive criticism.
  • Have thick skin.
Good luck.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Looks like a Mini in a 1U form factor to me, but with Supermicro vs. ASRock, and the C2758. Nice.

The benchmarks, and the conclusions based on them, are off. Anything with compression enabled is pretty much measuring CPU and bus speed. You also need very large datasets to ensure you've blown out the ARC. The ~388MB/s number (bonnie++, read, compression off) is pretty reflective of your pool and matches common knowledge. I'm a little surprised to see such disparity between your reads and writes, and suspect it's testing methodology rather than actual performance. If you hunt around you'll see the gap between reads and writes often lands at 10-20% or less, though much of that is workload dependent. It doesn't matter much, as even dual 1GbE is easily saturated.
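If anyone wants to redo the test, a rough sketch of the idea (pool name, mount path, and size are placeholders): turn compression off on a scratch dataset and give bonnie++ a file size several times your RAM, so the ARC can't hide the disks.

    # Scratch dataset with compression off, so the disks get measured, not the CPU.
    zfs create -o compression=off tank/bench
    # -s should be several times physical RAM; 64g is just an example.
    bonnie++ -d /mnt/tank/bench -s 64g -u root
    zfs destroy tank/bench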

Nice pictures and site. Welcome.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
mjws00 said:
The benchmarks, and the conclusions based on them, are off. Anything with compression enabled is pretty much measuring CPU and bus speed. You also need very large datasets to ensure you've blown out the ARC. The ~388MB/s number (bonnie++, read, compression off) is pretty reflective of your pool and matches common knowledge. I'm a little surprised to see such disparity between your reads and writes, and suspect it's testing methodology rather than actual performance. If you hunt around you'll see the gap between reads and writes often lands at 10-20% or less, though much of that is workload dependent. It doesn't matter much, as even dual 1GbE is easily saturated.

You are correct. The results are obviously useless. Benchmarking ZFS is different from benchmarking other file systems, and the same methodologies just don't work. If anyone is interested, there are plenty of other people who tried to use bonnie++, made mistakes that render the numbers meaningless, and had some of it explained to them. Benchmarking ZFS isn't for amateurs: if you don't know what you are doing, you can get numbers that look somewhat realistic, put them into charts and graphs, and think they mean something, when really they don't mean much more than what you'd get from /dev/random.
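A quick way to see the compression effect for yourself (dataset name is a placeholder): zeros compress to almost nothing, so a naive write test against a compressed dataset mostly measures CPU.

    # lz4 swallows /dev/zero nearly for free; the "throughput" reported
    # here is largely CPU and bus speed, not the disks.
    zfs create -o compression=lz4 tank/comptest
    dd if=/dev/zero of=/mnt/tank/comptest/junk bs=1m count=10000
    zfs destroy tank/comptest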
 
Joined
Oct 21, 2014
Messages
3
mjws00 said:
I'm a little surprised to see such disparity between your reads and writes, and suspect it's testing methodology rather than actual performance. If you hunt around you'll see the gap between reads and writes often lands at 10-20% or less, though much of that is workload dependent.

I'm not sure I understand your comment. What is it you are surprised about?

Martin
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Martin said:
I'm not sure I understand your comment. What is it you are surprised about?
I was referring to your writes being so slow vs. your reads. However, while reading I was thinking of a RAIDZ1 or RAIDZ2 array, not your striped mirrors. I got a little thrown by the odd numbers, sorry. Given the striped mirrors, the ratio isn't interesting.
 