ECC-enabled build recommendation


Nomad

Contributor
Joined
Oct 14, 2013
Messages
125
Here's what I have...

32GB ECC RAM
Supermicro X9SCM-F motherboard (with IPMI and dual Intel NICs)
Intel Xeon E3-1230v2 CPU.


What is your zpool setup that you're scrubbing at around 1GB/s? I'm only pulling about 200MB/s, and it's going to take me about 4-5 hours to finish.
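(Side note: the scrub rate and estimated time remaining show up directly in zpool status, so that's the easiest way to compare numbers; "tank" below is just a placeholder pool name.)

Code:
# the current scrub speed and time-to-go appear on the "scan:" line
zpool status tank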
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I had an 18-disk RAIDZ3 (note that this is not recommended at all!) and an Areca card with an on-card read-ahead cache that seemed to give a massive boost in performance. Now that I'm on the M1015 + Intel SAS expander I only get about 500MB/sec with the same pool. :(
 

Nomad

Contributor
Joined
Oct 14, 2013
Messages
125
I had an 18-disk RAIDZ3 (note that this is not recommended at all!) and an Areca card with an on-card read-ahead cache that seemed to give a massive boost in performance. Now that I'm on the M1015 + Intel SAS expander I only get about 500MB/sec with the same pool. :(


So you're still running that "not recommended at all" pool :D

Once I hit 90% of my 6TB, here's the plan:

- Sell the 2x 3TB WD Greens for whatever I can get for them.
- Buy 6x 3TB WD Reds. I'll be returning 2 of them; I'm only using those in a RAID 0 for temporary storage during the migration.

The 2 Reds I already have plus the 4 I'll be keeping should be enough to run the new pool.

6x 3TB RAIDZ2 - that should double my space again to 12TB, right? No idea what the speed will be on it; I just hope it won't be slower than my current setup. Everything is still on-board. Would you recommend getting an M1015? While it would let me run 8 drives instead of 6, will I see any "REAL" speed difference during scrubs? I'm already well over what the LAN can handle.
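(Quick back-of-the-envelope check on that capacity question, ignoring ZFS overhead and the TB-vs-TiB difference: a 6-disk RAIDZ2 spends 2 disks on parity, leaving 4 data disks.)

Code:
# 6-disk RAIDZ2: 2 parity disks, 4 x 3TB of data
echo "$(( (6 - 2) * 3 )) TB raw"   # prints "12 TB raw"; usable space will be somewhat less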
 

petr

Contributor
Joined
Jun 13, 2013
Messages
142
Supermicro X9SCM-F motherboard (with IPMI and dual Intel NICs)
Intel Xeon E3-1230v2 CPU.

Sorry for resurrecting this thread, but I've got a question: could you just put the CPU into the board, or did you need to use an older-generation CPU and update to BIOS version 2? If it just worked, when did you buy the board?

I've ordered the same combo, but after reading the fine print I may need to use another CPU for the BIOS update, which would be a bit of a pain in the ass.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I just put my CPU on the board and it worked. I bought mine summer 2013.

The X9s are not the latest gen, so you shouldn't have to worry about a BIOS update.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
IPMI is the shiznit. It's like doing remote desktop to your server, but you can also power it on/off, access the BIOS, and boot from ISOs hosted on remote machines without a CD-ROM in the actual server. It's just amazing.
I was able to flash new firmware onto an LSI card via IPMI on a remote server. It's too bad every server board doesn't have it by default.
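(For anyone who hasn't used it: the power and boot-device parts of that are scriptable with the stock ipmitool client; a rough sketch, where the BMC address and credentials below are placeholders, and the virtual media / ISO mounting is handled through the board's web KVM instead.)

Code:
# query and control power over the IPMI LAN interface
ipmitool -I lanplus -H 192.168.1.50 -U admin -P password chassis power status
ipmitool -I lanplus -H 192.168.1.50 -U admin -P password chassis power on
# ask the board to enter BIOS setup on the next boot
ipmitool -I lanplus -H 192.168.1.50 -U admin -P password chassis bootdev bios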
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
So still running that not recommended at all pool :D

Once I hit 90% of 6TB I'm going to get the following:

-2x3TB WD Greens, for whatever I can sell them for.
6x3TB WD Reds. Will be returning 2 just using them for a Raid 0 for temp storage migration.

2 I have + 4 I'll be keeping should be able to run.

6x3TB RaidZ2 - Should double my space again to 12TB right? No idea what the speed will be on it, just hope it won't be slower than my current setup. Everything is still on-board. Would you recommend getting a M1015? While I could run 8 drives instead of 6 will I see any "REAL" speed difference during scrubs? I'm already well over what the LAN can handle.
If your scrub times are problematic, you could try increasing the vdev read-ahead cache size. This can reduce scrub times and speed up metadata-intensive tasks for a small cost in RAM.

You can check your current settings:
Code:
sysctl -a | grep vfs.zfs.vdev.cache


Then add the tunables to sysctl:
Code:
vfs.zfs.vdev.cache.size="64M"
vfs.zfs.vdev.cache.max="65536"
and reboot. Kick off a scrub and see if it's faster.
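(One wrinkle: on stock FreeBSD those vfs.zfs.vdev.cache.* values are, as far as I know, boot-time loader tunables, so they'd go in /boot/loader.conf rather than /etc/sysctl.conf; on FreeNAS you'd add them as Tunables in the web UI. A minimal sketch, with "tank" as a placeholder pool name:)

Code:
# append to /boot/loader.conf, then reboot
vfs.zfs.vdev.cache.size="64M"
vfs.zfs.vdev.cache.max="65536"

# after the reboot, kick off a scrub and compare how long it takes
zpool scrub tank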
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
If your scrub times are problematic, you could try increasing the vdev read-ahead cache size. This can reduce scrub times and speed up metadata-intensive tasks for a small cost in RAM.

You can check your current settings:
Code:
sysctl -a | grep vfs.zfs.vdev.cache


Then add the tunables to sysctl:
Code:
vfs.zfs.vdev.cache.size="64M"
vfs.zfs.vdev.cache.max="65536"
and reboot. Kick off a scrub and see if it's faster.

Have you actually tried those? I have, and you'll be disappointed to hear that those tunables were "retired" (they still function, AFAIK) but aren't really used anymore: the new prefetch code is SOOO much more efficient that the vdev cache ended up being a waste of resources (RAM) compared to the prefetch cache.
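(If you want to poke at that on a live box, the prefetcher exposes a few knobs and counters via sysctl; a quick look, assuming a FreeBSD/FreeNAS system with ZFS:)

Code:
# 0 means file-level prefetch is enabled
sysctl vfs.zfs.prefetch_disable
# prefetch hit/miss counters are part of the ARC kstats
sysctl kstat.zfs.misc.arcstats | grep prefetch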
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
Have you actually tried those? I have, and you'll be disappointed to hear that those tunables were "retired" (they still function, AFAIK) but aren't really used anymore: the new prefetch code is SOOO much more efficient that the vdev cache ended up being a waste of resources (RAM) compared to the prefetch cache.
No, I haven't tried them; I just remembered that you could supposedly increase scrub speed with them. I don't have a problem with how long my scrubs take. :)
 