My First FreeNAS Build Report


CrazyBrewer (Dabbler, joined May 26, 2017, 14 messages)
It's been a while since my last thread, but over the past few weeks, during my days off, I've finally been testing out FreeNAS and I am liking it so far. I've also been accumulating parts, some since Black Friday :rolleyes:. This thread is mostly to describe my experience and to hold myself accountable so I can finish this build soon.

So far, this is my build:

Case: Fractal Design R5 Black Silent
Mobo: Supermicro X11SSM-F
CPU: Intel Pentium G4560
RAM: Kingston ValueRAM 16GB (KVR24E17D8/16)
HDD: 9x4TB WD Red 5400RPM (8 wide RAID-Z2 array + 1 cold spare)
PSU: EVGA SuperNOVA 550W G2 80+ Gold
Boot: Random PNY 16GB USB 3.0 flash drive (Had several in a drawer that I wasn't using)
UPS: APC 1500VA
HBA: LSI 9211-8i



I will be using this for a Plex server (max 2 1080p transcodes at a time), Windows backups for 3 machines (2 desktops, 1 laptop), as well as storage for home videos, pics, etc. (things my wife will kill me if we, I mean I, lose o_O). I still don't see myself using VMs, so 16GB of RAM should be good for me for a long time.

Building in the R5 was great; there is plenty of room to work in and for cable management. Thanks to DrKK's build thread, I removed the middle post in the R5 as soon as I unboxed it. I do like the HDD mounting position options they give you. This case is silent. I can barely hear anything, even with the stock CPU cooler and even while I was testing the HDDs.

No issues with the mobo, and it already had the updated BIOS on it, which saved me from having to borrow an i3-6100 from another PC.

I've had no issues with the RAM that I know of, although I still need to test it.

After installing FreeNAS on the flash drive and booting into the GUI, it was easy to change the IP address to something outside my DHCP range. After doing that, I dove into HDD testing and followed the burn-in guide. After a day of testing I had 0 problems. Yay, I don't need to deal with an RMA process!
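(In case it helps anyone reading along, this is roughly what kicking off the long SMART tests looks like, wrapped in a little Python; the /dev/adaX names here are just examples, so match them to your own drives.)

    import subprocess

    # Example device names -- adjust to your own /dev entries.
    drives = [f"/dev/ada{i}" for i in range(5)]

    for dev in drives:
        # Start a long (extended) SMART self-test; the drive runs it internally,
        # so this returns right away and testing continues in the background.
        subprocess.run(["smartctl", "-t", "long", dev], check=True)

    # Check the results later per drive with: smartctl -a /dev/ada0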

After the HDD testing I followed DrKK's guide on First Configuration and used cyberjock's testing schedule. I've been successfully getting email reports.

I've only had a few minor issues, all easily resolved by looking at the manual and/or the guides in the resource section. There are still a few things I want to test before I fully bring it online: one, practice replacing a drive, and two, experiment a bit with permissions. I also realize I still need to do some hardware testing.

I'm still trying to figure out how wide I want my array.
  1. Go 7 wide and use the last SATA port for a SSD boot device.
  2. Go 8 wide and buy an HBA to free up my SATA ports.
  3. Go 8 wide and put my SSD in a USB 3.0 enclosure I have laying around.
I have a SanDisk Plus 240GB that will become available as soon as this NAS is 100% online. However wide my array is, I will buy one extra HDD and test it so I have a cold spare ready to go.
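Just to sanity-check the space side of that decision, here is the back-of-the-envelope math I'm using (my own rough numbers; it ignores ZFS metadata and padding overhead):

    # Rough usable space for RAID-Z2: (n - 2) drives' worth of data,
    # ignoring ZFS metadata/padding overhead -- ballpark only.
    drive_tb = 4                     # marketing TB (10^12 bytes)
    tib_per_tb = 1e12 / 2**40        # convert to TiB

    for n in (7, 8):
        usable_tib = (n - 2) * drive_tb * tib_per_tb
        print(f"{n}-wide RAID-Z2: ~{usable_tib:.1f} TiB usable, "
              f"~{usable_tib * 0.8:.1f} TiB at the 80% fill guideline")

That works out to roughly 18 TiB usable for 7 wide and roughly 22 TiB for 8 wide, before applying the usual 80% fill guideline.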

Edit 1:
Parts list and parts needed updated.

Edit 2:
Updated parts list and deleted parts needed.
 

Chris Moore (Hall of Famer, joined May 2, 2015, 10,080 messages)
to hold myself accountable so I can finish this build soon.
Better late than never.
UPS: APC Back-UPS Pro 1000VA (BR1000G) unless anyone has had any issues with this brand/model
If you can afford the additional money, I would go with the 1500, just for the additional run time. It is what I use and it works great.
Go 8 wide and buy an HBA to free up my SATA ports.
I like this option (it is what I did) because I prefer the SAS HBA for reliability and stable operation.
Go 8 wide and put my SSD in a USB 3.0 enclosure I have laying around.
I would avoid introducing any USB 3.0 that is avoidable. USB 3.0 support is better now, but it was very flaky in the past. YMMV.
However wide my array is, I will buy one extra HDD and test it so I have a cold spare ready to go.
Sounds great.
Keep giving us updates. Photos are good also.
 

wblock (Documentation Engineer, joined Nov 14, 2014, 1,506 messages)
I have that EVGA power supply, and it has worked okay, except for at least two SATA power connectors on two different cables that were non-functional from the factory and made for maddening debugging (drive does not work, switch power connectors or cables, still doesn't work, must not be the power cable, except it was). Given the choice, I would probably pick Seasonic next time.

Why not six drives plus an SSD boot device? Avoid USB boot devices if you can.
 

CrazyBrewer (Dabbler, joined May 26, 2017, 14 messages)
If you can afford the additional money, I would go with the 1500, just for the additional run time. It is what I use and it works great.
Alright, I'll go for this one.

I would avoid introducing any USB 3.0 that is avoidable.
Looks like I'm going the HBA route; I was kinda leaning that way to begin with. Time to head to eBay. I've seen plenty that are already flashed to IT mode, and if not, I have no issue doing it myself.

Why not six drives plus an SSD boot device?
Mainly for the amount of data I already have. I'm hovering around 9TiB, but that's mostly due to my ripped Blu-rays. I should transcode them down to about 12Mbps from their 30Mbps-ish quality. At times I don't really see a difference.
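Rough numbers on what that re-encode would save (assuming a roughly 2-hour movie; my own ballpark, not measured):

    # Video size scales roughly linearly with bitrate.
    runtime_s = 2 * 3600             # assume ~2 hours
    for mbps in (30, 12):
        size_gib = mbps * 1_000_000 / 8 * runtime_s / 2**30
        print(f"{mbps} Mbps over 2 hours: ~{size_gib:.0f} GiB")

So each rip would drop from roughly 25 GiB to about 10 GiB.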

Given the choice, I would probably pick Seasonic next time.
The PSU is one I already had that got repurposed from another build.


I'll get some pics up when I get home.
 

Nick2253 (Wizard, joined Apr 21, 2014, 1,633 messages)
If you can afford the additional money, I would go with the 1500, just for the additional run time. It is what I use and it works great.
I also use the 1500, and recommend it. The other big benefit is the 1500 has the battery pack port, so you can really extend the run time if needed. I have three 1500s in my network (desktop, media center, and servers), and I have an extra battery pack for the server. Everything is configured to go until 10 mins is left, and then gracefully shutdown. I find that I can get through a good two or three minute outage without even knowing about it if I'm watching a movie and the lights are off (which covers about 99% of the outages I've seen in the last 10 years).
 

Chris Moore (Hall of Famer, joined May 2, 2015, 10,080 messages)
Mainly for the amount of data I already have. I'm hovering around 9TiB, but that's mostly due to my ripped Blu-rays. I should transcode them down to about 12Mbps from their 30Mbps-ish quality. At times I don't really see a difference.
What I started doing around the middle of last year is transcoding everything new to high-quality 1080p, and when I play that back through the Plex app on my TV it just looks great. My old eyes can't tell the difference between that 1080p movie and the same movie at 4k. I can see the difference with the things I transcoded at 720p, but they still look good. I am working on re-encoding the older movies that I saved years ago at standard def or some lower quality. I don't see the reason (my old eyes) to use the space for 4k files.
 

Nick2253 (Wizard, joined Apr 21, 2014, 1,633 messages)
Even if you're just doing file serving, with 8*4 = 32TB of hard drive space, I'd recommend more than 16GB of memory. You might be able to get away with 16GB, but I would mentally prepare yourself for adding additional memory. In your shoes, especially for a home environment, I'd probably just start with the 16GB and see how it goes, but be warned that starving ZFS of memory can lead to stability and reliability issues. If you ever decide to add VMs, I'd recommend 32GB of memory minimum.

At times I don't really see a difference.
My old eyes can't tell the difference between that 1080p movie and the same movie at 4k.
A fascinating way to look at this is to run some numbers. The angular resolution of the human eye is about 1 arcminute, which means the eye can distinguish two monochromatic point light sources as separate if they are at least 1 arcminute apart. We can assume that two pixels are point light sources and determine the minimum pixel pitch we can resolve. Given other factors, like the fact that pixels aren't point sources and that they have color, it's not a perfect approximation, but it'll do.

We can determine the pixel pitch of a monitor from its diagonal size and horizontal resolution: pixel pitch = diagonal / (horz resolution * sqrt(1 + ratio^2)). For both 1080p and 4k, the ratio is 9/16. The smallest pixel pitch the eye can resolve at a given viewing distance is 2*(viewing distance)*sin(ang. res./2). Setting the two equal, you get: diagonal/viewing distance = 3.33*10^-4 * (horz resolution).
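Written out as equations (d = diagonal, h = horizontal resolution, r = aspect ratio, D = viewing distance, p = pixel pitch, theta = angular resolution):

    p = \frac{d}{h\sqrt{1 + r^2}}, \qquad p_{\min} = 2 D \sin\left(\frac{\theta}{2}\right)

    \text{Setting } p = p_{\min} \text{ with } \theta = 1' \approx 2.91 \times 10^{-4}\ \text{rad and } r = \tfrac{9}{16}:

    \frac{d}{D} = 2 \sin\left(\frac{\theta}{2}\right) \sqrt{1 + r^2}\; h \approx 3.33 \times 10^{-4}\, h

Plugging in h = 3840 gives d/D of about 1.28, and h = 1920 gives about 0.64, which is where the numbers below come from.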

When you calculate that ratio for 4k, it's greater than one! Which means that, if the TV's pixels were entirely monochromatic point sources, you would only be able to resolve the pixels if you sit closer than the diagonal of the TV. With 1080p, it's about 0.6. Said another way, you must sit at least 1.6x the TV's diagonal length away to avoid resolving the pixels. For a typical large TV of 60", this is still only 8 feet.

In the real world, the other factors lead to slightly reduced pixel resolution capability, though not tremendously. And just because we can resolve the pixels does not mean we are consciously aware of them. Thanks to the way our brains process vision, they tend to gloss over those abnormalities as they try to make sense of the video. This is why you are much more likely to see distinct pixels when viewing static images or abstract images (like alignment patterns) than when watching a movie.
 

Chris Moore (Hall of Famer, joined May 2, 2015, 10,080 messages)
When you calculate that ratio for 4k, it's greater than one! Which means that, if the TV's pixels were entirely monochromatic point sources, you would only be able to resolve the pixels if you sit closer than the diagonal of the TV. With 1080p, it's about 0.6. Said another way, you must sit at least 1.6x the TV's diagonal length away to avoid resolving the pixels. For a typical large TV of 60", this is still only 8 feet.
If I understand this correctly, does it mean that from 8 feet, the two will be visually indistinguishable from one another?
 

Nick2253 (Wizard, joined Apr 21, 2014, 1,633 messages)
If I understand this correctly, does it mean that from 8 feet, the two will be visually indistinguishable from one another?

8ft is the limit (+/- a bit depending on the exact angular resolution of the eye, which varies slightly as the pupil expands and contracts). Any farther away and you can't resolve them; any closer and you can. However, it's not really a hard limit. This picture demonstrates the idea pretty well: https://commons.wikimedia.org/wiki/File:Airy_disk_spacing_near_Rayleigh_criterion.png As you get farther away, the point sources begin to merge, until they completely overlap. 8ft corresponds roughly to the middle picture.
 

CrazyBrewer (Dabbler, joined May 26, 2017, 14 messages)
[Photo of the build]

I still need to label my drives. I've got 4 more 4TB WD Reds and the APC BR1500G UPS on order. When I throw the other drives in I'll finally get a fan in front of the lower drive cage.

As far as knowing when I need more memory, I'm looking at the ARC hit ratio? IIRC higher is better, as it's reading from memory and not from the drives, whereas a low hit ratio means it's reading from the drives more. Please correct me if I'm wrong. I did a quick test: I ran 2 1080p transcodes as well as a transfer to and a transfer from the server, and my ratio was in the 80-90% range. During that time my CPU barely rose in temp, only changing by 2C.
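(For my own reference, this is roughly how I'm pulling that ratio; it's a sketch that assumes the FreeBSD arcstats sysctls, and the counters are cumulative since boot, so it isn't an instantaneous number.)

    import subprocess

    def sysctl(name):
        """Read a numeric sysctl value (FreeBSD/FreeNAS)."""
        return int(subprocess.check_output(["sysctl", "-n", name]).decode().strip())

    hits = sysctl("kstat.zfs.misc.arcstats.hits")
    misses = sysctl("kstat.zfs.misc.arcstats.misses")
    print(f"ARC hit ratio since boot: {100 * hits / (hits + misses):.1f}%")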

@Nick2253 When I was choosing my new monitor a year ago, I knew I wanted a bigger one (I had a 20" 1080p), but since I'm only about 2-3ft away I didn't want to stay at that same resolution because I knew I would begin to see the pixels. I wanted at least the same ppi and ended up with a 24" 1440p. So much more desktop area, and the higher ppi gives me a "crisper" image. In fact, when I go to work on my wife's desktop with my old monitor I can start to distinguish the pixels.
 

Nick2253 (Wizard, joined Apr 21, 2014, 1,633 messages)
As far as knowing when I need more memory, I'm looking at the ARC hit ratio? IIRC higher is better, as it's reading from memory and not from the drives, whereas a low hit ratio means it's reading from the drives more. Please correct me if I'm wrong. I did a quick test: I ran 2 1080p transcodes as well as a transfer to and a transfer from the server, and my ratio was in the 80-90% range. During that time my CPU barely rose in temp, only changing by 2C.

You are correct about the ARC ratio. However, a couple of transferred files won't give you an accurate picture of your ARC situation. You really need to see extended real-world use to get an accurate ARC hit ratio.
 

CrazyBrewer (Dabbler, joined May 26, 2017, 14 messages)
A few days ago I received my last set of drives and the UPS. Because I'm lazy I'm only testing 3 out of 4 drives. Yesterday I performed the short and long SMART tests, and now I'm on the badblocks run. I started about 8pm, and when I went to check on it this morning (around 4am), 1 drive was just past the 90% mark while the other 2 were at 11% and 6%. Is this normal? This didn't happen last time, although I just let it run for 24hrs before checking on it.
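(For anyone curious what the badblocks step looks like, this is roughly what I'm running, wrapped in Python so each drive gets its own process; the device names are just examples from my setup.)

    import subprocess

    # Example device names -- this is a destructive test, so be certain these
    # are the new, empty drives!
    drives = ["/dev/ada2", "/dev/ada3", "/dev/ada4"]

    # -w: destructive write/read-back test, -s: show progress, -b: block size.
    procs = [subprocess.Popen(["badblocks", "-b", "4096", "-ws", dev]) for dev in drives]
    for p in procs:
        p.wait()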
 

Nick2253 (Wizard, joined Apr 21, 2014, 1,633 messages)
Because I'm lazy I'm only testing 3 out of 4 drives.
Obviously it's your time and it's your data, but I feel like this is really lazy. It does not take that much marginal time to test additional drives, and testing drives before putting them in your server helps you both weed out early failures (getting a brand-new replacement now from your vendor, versus fighting the manufacturer for a re-cert under warranty when it goes bad in 45 days) and gain peace of mind once you begin trusting them with your data.

I started about 8pm, and when I went to check on it this morning (around 4am), 1 drive was just past the 90% mark while the other 2 were at 11% and 6%. Is this normal?
That does seem weird. What computer are you testing them on? What ports do you have them plugged in to?
 

Chris Moore (Hall of Famer, joined May 2, 2015, 10,080 messages)
Yesterday I performed the short and long SMART tests, and now I'm on the badblocks run. I started about 8pm, and when I went to check on it this morning (around 4am), 1 drive was just past the 90% mark while the other 2 were at 11% and 6%. Is this normal? This didn't happen last time, although I just let it run for 24hrs before checking on it.
I am not familiar with the process you are using to test the drives, but I would guess that two of the drives are on what might be called 'pass 2' while the other is still finishing up 'pass 1', or something to that effect. I have seen situations in my testing where one drive was slower than the rest.

Last time I tested drives, I was testing 12 all at once, six for each of the two NAS systems I was running at home at the time. I use a program called DBAN (Darik's Boot and Nuke) to test my drives. It is actually intended as a tool to erase a drive after it has been used, and I use it for that also, but it has a setting that allows it to do a verify read between write passes, and set up that way it makes a very good testing tool. With that utility, if a drive survives the burn-in, it is usually good for a year at least.

The thing I have observed over time, though, is that if a drive runs slower than the others during the burn-in, it usually indicates a defect; the slow drive usually fails with bad sectors while the other drives keep on going for years. You can still use the slow drive, but remember that the overall speed of the vdev is dictated by the slowest drive. If I had the money to do it, I would buy twice as many drives as I need, test them all, keep the fastest ones, and send the rest back for a refund.

PS. The failure I see most often is bad sectors. At work, I have had to replace three drives for that in the past 3 months and I have another that I am holding off on replacing because I am getting ready to replace the entire server.
 

CrazyBrewer (Dabbler, joined May 26, 2017, 14 messages)
Obviously it's your time and it's your data, but I feel like this is really lazy. It does not take that much marginal time to test additional drives
As soon as these drives are done testing, I'm going to test the last one. There are a few things I still want to practice before I fully bring it online, like swapping out an HDD. So for me it's not much of a waste of time.

What computer are you testing them on? What ports do you have them plugged in to?
They're in the same server and plugged into ports I-SATA0-2. SATA0 and SATA1 are SATA DOM ports. But that still shouldn't matter, right?
 

Chris Moore (Hall of Famer, joined May 2, 2015, 10,080 messages)
They're in the same server and plugged into ports I-SATA0-2. SATA0 and SATA1 are SATA DOM ports. But that still shouldn't matter, right?
It depends on the model system board. My board, for example, has SATA-0 and SATA-1 as 6Gb/s ports while the rest are 3Gb/s ports.
I always use my SAS controller to connect the drives I am testing to try and ensure uniformity.

I went and looked up your board. The ports are all the same; the two yellow ports can also supply power to a SATA DOM that is able to receive power through the port, but that should not change the performance of the port.
 

CrazyBrewer (Dabbler, joined May 26, 2017, 14 messages)
It depends on the model system board.
OK, re-reading the product page, it does say "Intel® C236 controller for 8 SATA3 (6 Gbps) ports".

I also have an HBA on order. Hopefully it'll arrive so I can install it before I go on vacation in a week.
 

CrazyBrewer (Dabbler, joined May 26, 2017, 14 messages)
but I would guess that two of the drives are on what might be called 'pass 2' while the other is still finishing up 'pass 1', or something to that effect.
This was it. I just checked, and 2 of the drives have just started their 3rd pass (out of 4), while it appears the other is still on its 2nd. The one on its 2nd pass already has 192 errors, and the other 2 still have 0 errors.
 