WD Reds - 6TB.
Odd. Either the Reds are significantly louder than my 4TB HGST drives, or we've done something else differently (did you open the top vents?), or we just have a different perception of the loudness. I actually have a dB meter, but the server is so quiet I doubt I could even get it to register.
Weird, I can hear it from like a metre away even in a noisy room - not doing any tests right now :(

The WD Reds are the quietest drives I've ever used. However, seeking is the noisiest operation a drive can do, and high-load tasks like SMART tests and scrubs cause a lot of seeks, so you've probably heard the worst-case scenario ;)
Does the case have its own fan and/or power supply? Is the noise coming from them?
Are the disks properly mounted? Did you use any rubber or plastic washers or grommets to decrease vibrations?
Is the case solid and stable by itself?
Thanks for this great guide. I am building a new FreeNAS and found value in reviewing your process to learn about memtest86+, SMART, and other tests I should run prior to configuring services, transferring data, and starting to use the server in production mode.
Do you have a quick list of things you recommend (like a short summary)?
That's kind of a broad question. The only thing I can recommend is to spend time reading the links in the first post and this forum in general, because you will learn a lot. Going into a FreeNAS build (or any storage solution) without understanding what you are configuring is a good way to lose data.
I have been running a FreeNAS 0.7.x (and prior) server for over 6 years... just never got around to upgrading, and my existing NAS server was fitting my needs and chugging along without any hiccups. I run routine SMART tests and scrubs on my RAIDZ1 (4x1TB), but I have not needed to configure or tinker with any of that for a long time.
On top of that, I have not completed a new build in quite a long time either. I wanted to be sure my memory and drives did not have issues by running some checks and burn-in like you have done in your build. I will follow some of those same concepts.
1. Which memtest86+ check did you run?
2. Did you run any other burn-in for your drives? This is the area I am most curious about, because I do not know exactly what tests would be good to run for checking the drives and also performing some basic burn-in.
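For what it's worth, a common burn-in sequence discussed around here is SMART self-tests plus a destructive badblocks pass. Here's a rough sketch that just prints the commands for each drive rather than running them (so nothing touches a real disk by accident) - the device names `/dev/ada0` and `/dev/ada1` are examples, adjust for your system:

```shell
#!/bin/sh
# Sketch of a typical drive burn-in sequence (assumption: smartmontools and
# e2fsprogs/badblocks are available). Commands are echoed, not executed -
# review them before running anything, badblocks -w DESTROYS all data.
burnin_cmds() {
  dev="$1"
  echo "smartctl -t short $dev"       # quick SMART self-test first
  echo "smartctl -t conveyance $dev"  # transport-damage check, if the drive supports it
  echo "badblocks -ws -b 4096 $dev"   # destructive 4-pattern write/read of the whole surface
  echo "smartctl -t long $dev"        # full-surface SMART long test last
  echo "smartctl -a $dev"             # then review the attributes and self-test log
}

for d in /dev/ada0 /dev/ada1; do
  burnin_cmds "$d"
done
```

The badblocks pass on a large drive can take days per pattern, so people usually run it in a tmux/screen session. Again, this is just the general shape of a burn-in, not a claim about what the OP ran.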
I'll contribute to this :)

Do you have a Windows 10 box there? Could you try the Chelsio S310E-CR NIC in that box? I would love to know if it's possible to get it up and running in W10.
SFP+ twinax is fairly likely to work, but not absolutely guaranteed. The only thing I would expect to work as consistently as possible is to get the proper SFP+ transceivers and use fiber. 10GigE has a very good interoperability track record within the realm of hardware we've been suggesting to users.
I saw a 5524 on eBay that went for $200 the other day. Wish I could have picked it up, but my wife would have killed me for dumping any more money into this project of mine (at least for now).
I just wanted to agree with you :)
I don't know why one would spend $79 on a short copper twinax cable when you can get two optical transceivers for $20 each and fiber for another $20, which totals around $60 and gives you a proper setup. The Dell switch gets its supported Dell optics, your Chelsio 10Gb NIC gets its Chelsio optics, with fiber between them, instead of a Tripp Lite twinax cable in the middle that makes both devices unhappy! :) And the twinax isn't even much cheaper.
I'm assuming that $79 figure was pulled from my original post - the $79 was for 2 Chelsio 10Gbps cards and a twinax cable. The twinax cable only cost me $20. Twinax isn't appropriate for all situations, but for a short run between my 2 boxes it has worked perfectly fine and was the lowest-cost option. The only change I've made to my original config was to replace one of the Chelsio cards, because they don't work well in Windows PCs.