What is the "normal" checksum error rate?


globus999

Contributor
Joined: Jun 9, 2011
Messages: 105
Hi,

I am curious: what is the "normal" checksum error rate in a raidz1 array?
I am not talking about unrecoverable data errors, just checksums that are auto-corrected by ZFS.
When you run a scrub, do you see a few, some, many checksum errors?
Always? Never?
I am just trying to get a sense of what the standard expectations are and how far off the norm I am.
Any data would be appreciated!
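
For reference, here is roughly what I am doing after each scrub (the pool name "tank" is just a placeholder for your own pool):

    zpool scrub tank       # start a scrub; re-reads and verifies every block
    zpool status -v tank   # the CKSUM column lists checksum errors per device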
 
Joined: May 27, 2011
Messages: 566
Zero is the normal checksum error rate. I see failed checksums when I have failing drives, and that one time when I accidentally removed the wrong drive.
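
For comparison, a healthy pool looks something like this in zpool status (device names and the exact scan line will vary by setup and version):

      pool: tank
     state: ONLINE
      scan: scrub repaired 0 in 2h13m with 0 errors
    config:
            NAME        STATE     READ WRITE CKSUM
            tank        ONLINE       0     0     0
              raidz1-0  ONLINE       0     0     0
                ada0    ONLINE       0     0     0
                ada1    ONLINE       0     0     0
                ada2    ONLINE       0     0     0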
 

Brand

Moderator
Joined: May 27, 2011
Messages: 142
The only time you would see ZFS checksum errors is if something is wrong, usually with a hard drive or RAM. It is strongly recommended to use ECC RAM when running ZFS. ECC RAM is usually standard in quality servers and workstation-class computers.
 
Bohs Hansen

Guest
@Brand, unless you spring for a server board, you won't have much luck getting ECC RAM to work, though. :) Most people who run a system like this will run it on "normal" hardware.
If you set up a "real" server, you know your way around setting up your shares and users manually from the CLI anyway; most of the time at least. :p

(Of course there are exceptions, and cheap boards that support ECC as well, but don't expect it on consumer boards.)
 

globus999

Contributor
Joined: Jun 9, 2011
Messages: 105
Zero is the normal checksum error rate. I see failed checksums when I have failing drives, and that one time when I accidentally removed the wrong drive.

Thought so... I think I have a slightly bad controller (Sil3114-based), so I am getting a Promise one. It is *very* unlikely that four completely different HDDs would all show intermittent checksum errors because every one of them is bad.
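
If you want to confirm the controller theory once the Promise card is in, one simple check (again assuming a pool named "tank") is to reset the error counters and re-scrub:

    zpool clear tank    # zero out the READ/WRITE/CKSUM counters
    zpool scrub tank    # force a full re-read of the pool
    zpool status tank   # counters staying at 0 would point at the old controller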
 

globus999

Contributor
Joined: Jun 9, 2011
Messages: 105
The only time you would see ZFS checksum errors is if something is wrong, usually with a hard drive or RAM. It is strongly recommended to use ECC RAM when running ZFS. ECC RAM is usually standard in quality servers and workstation-class computers.

Yeah, in my case I think it is the controller. As for ECC: nope, can't do, no budget. Yes, I know ECC is nice (cheaper too!), but I don't have the moola for a server-quality mobo.
 

Brand

Moderator
Joined: May 27, 2011
Messages: 142
@Brand, unless you spring for a server board, you won't have much luck getting ECC RAM to work, though. :) Most people who run a system like this will run it on "normal" hardware.
If you set up a "real" server, you know your way around setting up your shares and users manually from the CLI anyway; most of the time at least. :p

(Of course there are exceptions, and cheap boards that support ECC as well, but don't expect it on consumer boards.)

You are correct that the motherboard has to support ECC RAM. I just assumed that was well known, but I guess not; thanks for pointing it out.
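
As a quick way to verify ECC is actually enabled on a given board (a sketch; dmidecode is in the FreeBSD ports tree as sysutils/dmidecode, and the exact wording depends on the BIOS):

    dmidecode -t memory | grep -i "error correction"
    # expect something like "Error Correction Type: Single-bit ECC",
    # or "None" when ECC is absent or disabled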

I do not know what most people are using for their FreeNAS builds, and going by the information contained within the forums would not be anywhere near accurate. I am sure there are people recycling old computers into FreeNAS servers, or building low-budget FreeNAS servers with lower-end equipment. I am also sure there are a lot of people here building FreeNAS servers with higher-quality consumer components and server-grade components.

My data is very valuable to me, so I decided on ZFS and chose ECC RAM to complement it. I used the following components:

Case - Lian Li PC-A17B
PSU - Corsair Professional Series HX850 850W
Hard drive cages - 3 x Super Micro CSE-M35T-1B (5 x 3.5" bays each)
Motherboard - Asus P7F-E
CPU - Intel Xeon X3440 2.53GHz Quad-Core
RAM - 12GB Crucial DDR3 1333 Registered ECC
Hard Drives - 10 x 2TB Seagate Barracuda LP 5900RPM
USB Flash Drive - 8GB OCZ Rally2

The build is a mixture of low-end server components and high-end consumer components. Aside from the storage, the rest of the components did not cost a lot.

Originally the server was running OpenSolaris until Oracle bought Sun and disbanded the OpenSolaris project. I decided to give FreeNAS another try and installed version 8 shortly after it was released.

I wonder if there could be an option within the FreeNAS administration GUI to submit anonymous hardware configurations back to the mothership.
 