First FreeNAS AIO build log

It's been a long time in the making and, while this won't be a groundbreaking build log, I thought I would pop it up here for posterity and in the hope that it may help at least one person with something. Even if that something is to not do as I have done ;)

The requirements for my build were simple enough. I wanted to consolidate a lot of bare metal hosts, which were performing various tasks, onto an ESXi host, while at the same time giving me a NAS that I could easily expand, and grabbing all the advantages that the ZFS file system has to offer.

I took a lot of inspiration from the Norco build that @Stux posted years ago and went from there.

Hardware

Case: X-Case RM 424 - 24 | A 24-bay 4U rackmount case with rail kit
Motherboard: Supermicro X10DRL-I-O
CPU: 2x Intel Xeon E5-2620 v4
RAM: 32GB (4x 8GB DIMMs) Samsung DDR4 2133 RDIMM ECC (M393A1G40DB0-CPB)
PSU: Corsair RMx 1000
HBA: 2x IBM ServeRAID M1015 flashed to IT mode (the reason for two of these is explained below)
CPU Heatsink: 2x Noctua NH-U9S with NF-A9 92mm fans
ESXi Boot disk: SanDisk Cruzer Fit 16GB USB 2.0 flash drive
ESXi Datastore Disk: Samsung 970 EVO 250GB M.2 NVMe SSD
Hard Disks: 6x 3TB WD Reds + various other disks that will be housed in the case but not used within FreeNAS
UPS: APC SC450RMI1U



Build Process

This part is a bit picture-heavy, so apologies if it's too much. I know that I like seeing the pictures, so others may too.

The Case

The case itself, as you would expect for a 4U case, is pretty spacious inside. It can take an E-ATX motherboard if required.
It came with 5 fans in total: 3x 120mm fans in the fan wall and 2x 80mm exhaust fans.

[FreeNAS build-5467.jpg: View of the empty case]

It takes a normal ATX power supply, so no screaming fans in the removable PSUs that some Supermicro cases have. This is a family NAS, so no redundancy required as we have no availability SLAs.

The backplane on the case is easily accessible (especially with the fans out) and has plenty of power sockets.

[FreeNAS build-5499.jpg: Plenty of Molex connectors on the backplane]

Not sure if they all had to be plugged in, but I obliged as I went.


Fans

The fans on the fan wall are mounted in quick-release housings. I'm not sure if they are hot-swap, but it certainly takes the hassle out of having to use screws in a tight space, especially once the case is racked.

[FreeNAS build-5468.jpg: Removable fan]

They plug into a little PWM daughter board which in turn connects to the fan headers on your motherboard.

[FreeNAS build-5472.jpg: Daughter board for fans to plug into]

There are fan headers on the backplane, but I don't know if their speed can be controlled due to the lack of documentation, so I plugged into the motherboard's fan headers instead.

[FreeNAS build-5498.jpg: Fan headers on the backplane]

Currently I have the 2 CPU fans plugged into FAN1 & FAN2 on the motherboard, and the fan wall fans are plugged into FAN4, FANA & FANB. I'm not sure if this is correct, so feel free to shout if it's wrong.

I was expecting 3-pin screamers but they are 4-pin, so their speed can be controlled, and for cheap Chinese fans they move a lot of air and are reasonably quiet.

The exhaust fans, on the other hand, are very noisy. Using the "optimised" fan mode, I haven't been able to get them below 3100RPM with the Supermicro IPMI fan control, despite changing the sensor thresholds. For now they are unplugged, and they will be replaced with quieter fans if the requirement for more airflow arises.
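For reference, the usual approach in the fan-control threads on these forums is ipmitool. A minimal sketch of the sort of commands involved, assuming an X10-generation BMC; the zone, duty cycle and threshold values below are illustrative examples rather than settings I've tested:

Code:
# Set the fan mode to Full so the BMC stops overriding manual duty cycles
ipmitool raw 0x30 0x45 0x01 0x01

# Set the peripheral fan zone (zone 1) to roughly 32% duty cycle (0x20)
ipmitool raw 0x30 0x70 0x66 0x01 0x01 0x20

# Lower the lower thresholds on a fan sensor so slow, quiet fans don't
# trip the BMC into full-speed recovery (sensor name and values are examples)
ipmitool sensor thresh FANA lower 100 200 300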


Motherboard

Fitting the motherboard. Not much to say about this, apart from that after matching up the standoffs with the plethora of unmarked holes in the case, I tried to drop the motherboard in and found that it wouldn't fit. The bracket for the two SSD drive mounts fouled the internal USB port and the SATA ports. Very poor design. I have had to remove the drive mount and re-think how I can use it elsewhere should I need to. Luckily, I had decided to use an M.2 NVMe drive for the ESXi datastore, so it's not a problem for now.

[FreeNAS build-5477.jpg: SSD drive bracket prevented the motherboard from sliding right to the edge of the case as it fouled the USB port]

These stickers on the power sockets (JPWR1/2) amused me. The manual says they are 8-pin 12V power connectors, but the PSU manual didn't explain what distinguishes one 8-pin lead from another, so after a lot of googling I decided on two leads and went for it. I can't remember which ones I chose or why, but so far, no smoke :)

[FreeNAS build-5488.jpg: "Danger Danger... High Voltage"]

I am using a reverse breakout cable to control 4 disks from 4 SATA ports on the motherboard. The SATA controller has been passed through to a Linux VM for specific duties not related to the NAS. When required, I have space for another 4 disks to take another row, controlled by a second reverse breakout cable from the remaining SATA ports.

[FreeNAS build-5495.jpg: Reverse breakout cable takes 4 SATA ports from the host to the mini-SAS port of the target backplane]


CPU & Heatsinks

It's been a long time since I fitted a CPU. These E5-2620v4s came up on eBay for an amazing price and they have been running without issue since. Installing them late at night caused my head some initial confusion with the "pull the bar one way then push it another" etc., but they went in. The heatsinks were a different story.

[FreeNAS build-5482.jpg: One of two Xeon E5-2620v4 CPUs]

When it came to fitting the heatsinks there was a bit of a clearance problem between the two CPUs. My google-fu must have failed me, because I had somehow come to the conclusion that these would sit side by side without issue. They do (now), but it was a bit tricky. The heatsinks have to be installed without the fans attached, as you need access with the little tool to tighten the screws; then you fit the fans back on. Unfortunately the gap between the two sockets was so tight that getting the fan on that sits between the two was not pretty. In doing so I managed to slightly bend a few fins towards the bottom of the rear heatsink.

[FreeNAS build-5484.jpg: Both heatsinks and fans finally in place]

They are on fine and working as expected. They are really silent, even when running at full tilt. Expensive, but worth it. Worth noting that the rear CPU runs about 3°C hotter than the front. This is my first dual-socket build and I assume this is because it is being fed warm air from CPU1. I may add a second fan to the rear of CPU2 to work in a push-pull config if the temperature difference becomes a problem.
 
PSU

The Corsair RM1000x has great reviews and provides plenty of power. Its fan doesn't spin unless required; when I've had the lid of the case off, I've never seen it spinning, and with the lid on I've never heard an additional fan kick in. So it's either really quiet or it's never been needed.

[FreeNAS build-5486.jpg: Big PSU, but it fits easily in the case]

It comes with its own (posh) bag to store all the unused leads. I love the fact that you only need to plug in what is required. It makes the case so much easier to work in and I'm sure does wonders for airflow. It's a big PSU, but the case swallows it well.


Boot disk and ESXi datastore

To boot the ESXi host I am using the SanDisk Cruzer Fit 16GB USB stick plugged into the motherboard's internal USB port. I chose the USB 2.0 variety as there has been a lot of talk on these forums about the 3.0 version failing and tending to run hotter. I don't need speed, I need longevity. The host is on 24x7, so booting up is not a frequent event. Any paging/log writes go to the disk that the datastore resides on, to prevent wear. I also have a second interchangeable USB stick with the same image in case the first should fail.
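For anyone doing the same, redirecting the logs and scratch location is roughly the following from the ESXi shell; the datastore path is an example and this is a sketch rather than a tested recipe:

Code:
# Send ESXi logs to a directory on the datastore instead of the USB stick
esxcli system syslog config set --logdir=/vmfs/volumes/datastore1/logs
esxcli system syslog reload

# Point the scratch location at the datastore (takes effect after a reboot)
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/datastore1/.locker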

[FreeNAS build-5489-2.jpg: M.2 NVMe drive for the ESXi datastore]

The datastore for my VMs is located on a 250GB NVMe M.2 drive, which is mounted on a PCI-E adapter card. There were no issues with the installer detecting the card and using it for the datastore. It's fast and I'm happy with it. Obviously the boot disk for the FreeNAS VM will reside on this. The only issue I did have was that the bracket for the PCI slot did not line up with the back of the case correctly: if I tried to screw in the backplate, it pulled the end of the card out of the socket. The case is not at fault here, as all other PCI-E cards line up fine. This was a cheapo mount (£7), so it's only to be expected. The solution was to remove one of the screws to allow the mount to pivot, leaving the card in place.

[FreeNAS build-5490.jpg: PCI-E adapter card did not line up with the case | workaround: remove the bottom screw]

HBA

I chose the venerable M1015 SAS adapter to attach to my backplane. Another eBay purchase. This was quickly flashed to IT mode using my old desktop PC. (When I say old, the motherboard has both PCI and PCI-E slots... no AGP tho :p )

[FreeNAS build-5493.jpg: M1015 flashed to IT mode]

There are many resources on the internet telling you how to do this, including a great pile of info in the resources section. The hardest part for me was getting into the UEFI shell. With some tinkering with my motherboard settings I got there, wiped the flash with the megarec tool in DOS mode and flashed to IT mode with sas2flash.efi from the UEFI shell.
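From memory, the sequence looked roughly like the below. The exact file names depend on the firmware package you download, so treat this as a sketch rather than a recipe, and the -sasadd value comes from the sticker on your own card:

Code:
# From DOS: back up the SBR and wipe the existing RAID firmware
megarec -writesbr 0 backup.sbr
megarec -cleanflash 0

# Reboot into the UEFI shell, then flash the IT firmware (and optionally the boot ROM)
sas2flash.efi -o -f 2118it.bin -b mptsas2.rom

# Re-apply the SAS address printed on the card's sticker
sas2flash.efi -o -sasadd 500605bXXXXXXXXX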

This card plugs into two ports on my backplane, controlling 8 disks.

I've since added a second one of these cards to control another 8 disks. Granted, I could probably have achieved what I wanted in a more elegant manner, but this was the quickest way I could think of.

Why didn't you use a SAS expander instead of buying another HBA?

I wanted to pass the 8 disks that are controlled from one HBA through to a Linux VM running a specific task. These disks are not part of my FreeNAS install.

The second HBA is passed through to my FreeNAS VM. This is the only approach I have read of that seems to be accepted on these forums, as it allows FreeNAS direct control of the disks. (Happy to be corrected if I am wrong.)
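As a sanity check before ticking the passthrough box in the vSphere UI, the card can be located from the ESXi shell. A small sketch; the grep string is an assumption, as the M1015 normally shows up as an LSI SAS2008-based device:

Code:
# List PCI devices and pick out the HBA's address
esxcli hardware pci list | grep -i lsi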

I'm still putting it together but that's where I am at right now. If there is any interest I'll bang up a few more pics as I go.

Thanks for reading
 
Onward and upwards - today has been a day of ups and downs.

I had already installed ESXi 6.7 onto a thumb drive, set up a datastore and created my first VM (a Linux machine which was making use of the first HBA to have disks passed through to it). I must say (and I know that everyone who uses it says this) the IPMI utility on these boards is amazing, especially the ability to mount a virtual CD-ROM image from the desktop you are working on.
[vcdrom.PNG]


All the initial setup has been done remotely, with just two Ethernet cables and power plugged into the host! No keyboards, no monitors. Love it.
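Once the BMC is on the network, most of this can also be driven with ipmitool from another machine. A minimal sketch; the address and credentials are placeholders:

Code:
# Query sensors (fan speeds, temperatures, voltages) over the LAN
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P <password> sensor list

# Check or change the chassis power state remotely
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P <password> chassis power status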

More HBAs

I mounted the second M1015 adapter into one of the other spare PCI-E 3.0 x8 slots. I now have the following free slots:
  • PCI-E 2.0 x4 (in a x8-size slot). I believe this goes through the chipset rather than CPU lanes
  • PCI-E 3.0 x8 CPU1 (in a x16-size slot)
  • PCI-E 3.0 x8 CPU1 (this looks like it will only take a short card, as the P1-DIMMD1 RAM socket is almost dead-on behind it)
    [FreeNAS build-6224.jpg: Short slot where RAM would get in the way]
I attached my additional mini-SAS (SFF-8087) cables to the second M1015 and added another reverse breakout cable to the remaining SATA ports on my motherboard.

[FreeNAS build-6223.jpg: All drive cables mounted and routed; just need some more cable ties now]

There are still two free SATA ports which would have been great to use for internally mounted SSDs, apart from two things:
  • I had to take the mounting bracket off to allow the motherboard to fit :mad:
  • The Wellsburg AHCI controller is already passed through to another VM. Although, saying that... there do appear to be two listed under different hardware addresses. Meh, that's one for later
    [wellsburg.PNG]

    Could these controllers be passed through to separate virtual machines, or even have one attach disks for separate datastores, I wonder?

Hard Disks

My 6x WD Red 3TB disks had arrived, so the time had come to pop them in the box and give them an initial test to sort the good from the bad (if any).

[FreeNAS build-6222.jpg: Ready to go - get 'em in there!]

After faffing about to find the screws for my remaining un-populated disk caddies, I spun up a new VM for FreeNAS, gave it 2 cores and 16GB of RAM (reserved) and passed through the second M1015 HBA. The FreeNAS install was simple enough, as expected; I actually used UncleFester's FreeNAS Beginners Guide to get me to where I am now. It is excellent!

I popped them all in, started the VM, navigated to Storage > Disks and... oh my. Only 5 disks were present (including the boot disk).

I checked in /var/log/messages and could see the following:

Code:
Jun  7 10:14:27 freenas mps0: SAS Address for SATA device = e4dacbd5312524e9
Jun  7 10:14:27 freenas mps0: SAS Address from SATA device = e4dacbd5312524e9
Jun  7 10:14:32 freenas     (probe0:mps0:0:4:0): INQUIRY. CDB: 12 00 00 00 24 00 length 36 SMID 587 terminated ioc 804b loginfo 31111000 scsi 0 state c xfer 0
Jun  7 10:14:32 freenas (probe0:mps0:0:4:0): INQUIRY. CDB: 12 00 00 00 24 00
Jun  7 10:14:32 freenas (probe0:mps0:0:4:0): CAM status: CCB request completed with an error
Jun  7 10:14:32 freenas (probe0:mps0:0:4:0): Retrying command
Jun  7 10:14:42 freenas     (probe0:mps0:0:4:0): INQUIRY. CDB: 12 00 00 00 24 00 length 36 SMID 588 terminated ioc 804b loginfo 31111000 scsi 0 state c xfer 0
Jun  7 10:14:42 freenas (probe0:mps0:0:4:0): INQUIRY. CDB: 12 00 00 00 24 00
Jun  7 10:14:42 freenas (probe0:mps0:0:4:0): CAM status: CCB request completed with an error
Jun  7 10:14:42 freenas (probe0:mps0:0:4:0): Retrying command
Jun  7 10:14:52 freenas     (probe0:mps0:0:4:0): INQUIRY. CDB: 12 00 00 00 24 00 length 36 SMID 589 terminated ioc 804b loginfo 31111000 scsi 0 state c xfer 0
Jun  7 10:14:52 freenas (probe0:mps0:0:4:0): INQUIRY. CDB: 12 00 00 00 24 00
Jun  7 10:14:52 freenas (probe0:mps0:0:4:0): CAM status: CCB request completed with an error
Jun  7 10:14:52 freenas (probe0:mps0:0:4:0): Retrying command
Jun  7 10:15:01 freenas     (probe0:mps0:0:4:0): INQUIRY. CDB: 12 00 00 00 24 00 length 36 SMID 590 terminated ioc 804b loginfo 31111000 scsi 0 state c xfer 0
Jun  7 10:15:01 freenas (probe0:mps0:0:4:0): INQUIRY. CDB: 12 00 00 00 24 00
Jun  7 10:15:01 freenas (probe0:mps0:0:4:0): CAM status: CCB request completed with an error
Jun  7 10:15:01 freenas (probe0:mps0:0:4:0): Retrying command
Jun  7 10:15:11 freenas     (probe0:mps0:0:4:0): INQUIRY. CDB: 12 00 00 00 24 00 length 36 SMID 591 terminated ioc 804b loginfo 31111000 scsi 0 state c xfer 0
Jun  7 10:15:11 freenas (probe0:mps0:0:4:0): INQUIRY. CDB: 12 00 00 00 24 00
Jun  7 10:15:11 freenas (probe0:mps0:0:4:0): CAM status: CCB request completed with an error
Jun  7 10:15:11 freenas (probe0:mps0:0:4:0): Error 5, Retries exhausted


I opened the console, removed all the disks and re-seated them one at a time. On my 3rd disk, that message popped up. I could see that the first two had responded to the inquiry with their serial numbers. I continued to watch the console as I added each disk; another one of the disks did the same thing. With all of them in their bays I took to the CLI and ran the command:

# camcontrol devlist

The two devices were still missing. I moved the disks about into different bays and the errors followed them, which ruled out a faulty backplane, faulty cables etc. (however, I did re-seat the cables and HBA card for the old belt-and-braces approach). After no joy, I plugged them into my Windows desktop for sh1ts n giggles.

When I tried to initialise the disk in Windows Disk Management, I got the following message:

[diskmgr.png]


I ran Western Digital's diagnostic tool on them and, while it could detect the disks, it could not tell me their size, SMART status or correct capacity. Interestingly enough, they both passed the short and long SMART tests run from this tool. The tool must be buggy, because the long test completed in less than one minute. However, the ERASE option failed:

Code:
Test Option: ERASE
Model Number: WDC WD30EFRX-68EUZN0
Unit Serial Number:
Firmware Number:
Capacity: 0.00 GB
SMART Status: Not Available
Test Result: FAIL
Test Error Code: 20- Delete Partitions error!
Test Time: 12:12:36, June 07, 2019

Diskpart on Windows didn't see the disk; I guess that's because it wasn't initialised?

Code:
DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online           55 GB      0 B        *
  Disk 2    Online          465 GB   450 MB

Same outcome for the second disk, which was not even being recognised by FreeNAS when plugged in. They were both clicking away like a metronome, so off to Amazon Returns with them. In the meantime, onward with testing the remaining 4!

Moral of the story... TEST YOUR DISKS ASAP!

Once again using the guide that @UncleFester wrote, I set up email alerts and tested them, then set up a SMART long self-test for the remaining four drives. I set a custom cron job to go at the top of the nearest hour, as I didn't know how to kick it off manually for all four at once and I was impatient. Further reading suggests using the smartctl -t long command and making use of tmux to have multiple windows at once!
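For the less impatient, the manual equivalent would look something like this; the device names are examples for my four remaining disks:

Code:
# Kick off a long self-test on each drive. The test runs inside the
# drive's own firmware, so each command returns immediately.
smartctl -t long /dev/da1
smartctl -t long /dev/da2
smartctl -t long /dev/da3
smartctl -t long /dev/da4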

I don't know how long that will take and, honestly, I don't know how to view the output of the tests yet, so I guess I have some bedtime reading to do before I move on. Next step - badblocks test.
 
Badblocks and "a disk, my kingdom for a disk"

So, with 2 of my 6 disks winging their way back to Amazon, I carried on with the remaining 4 disks and ran badblocks on them.

To do this I started a tmux session with the tmux command. I did this because (for those who don't know) badblocks cannot be run on multiple disks at the same time from the same session like the SMART tests can, i.e. once you set it going, you do not return to the prompt until badblocks has completed or you cancel the job. Tmux allows you to open multiple windows (tabs) within the CLI, but most importantly, if you lose your connection to the CLI, the tmux session keeps running in the background, so you will not kill any running jobs. To get back into the tmux session, simply type tmux attach.

Once in the tmux session, press Ctrl+B and then press c. This creates a new window. I created 4 windows (one for each disk). Because I'm a bit OCD, I renamed each window to match each disk (da1, da2 etc). I did this with Ctrl+B then , (comma), which allows you to rename the window.

I used the following command on my 3TB WD Red disks: badblocks -b 4096 -vws /dev/da1. It took about 50hrs per disk, and they all passed without any errors.
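Putting the tmux bits and the badblocks command together, the workflow per disk looked roughly like this (da1 is an example, and -w destroys all data on the disk, so double-check the device name):

Code:
tmux                 # start a session; reattach later with: tmux attach
# Ctrl+B then c  -> open a new window (one per disk)
# Ctrl+B then ,  -> rename the current window (da1, da2, ...)
badblocks -b 4096 -vws /dev/da1    # destructive write-mode test, ~50hrs per 3TB disk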

I purchased another two 3TB WD Reds; they arrived, passed SMART tests etc., so I started running badblocks. While waiting for that to complete, I decided to register my disks on the WD support portal. The first 4 disks' serial numbers went on fine, but the 2 replacements came up as OEM! :eek:

I contacted WD Support and they said, "You won't get any warranty from us, you need to buy from an authorised re-seller." Amazon is an authorised re-seller, but what I didn't notice was that the order was fulfilled by Amazon but actually sold by a 3rd party. There was no mention of OEM in the item description, so I will be attempting to return these disks too, on the grounds that the item description was not clear and that "OEM disks should not be sold on the open market" (to use WD Support's words). Hopefully they won't grumble that they were opened from their anti-static bags.

Sigh...

Latest moral of the story - register your disks before opening them
 