@snicke
A little bit of history:
For years I have been using FreeNAS on a virtual platform (ESXi) and have become a huge fan of the solution. At first it was just tinkering around with FreeNAS. Then, when upgrading to ESXi 4.x, I discovered that SMB was no longer supported for creating datastores within vSphere, which pretty much broke my ability to back up my guest OS images / snapshots to my (at the time) external Buffalo NAS appliances, because they didn't support NFS. I ended up hacking one of the Buffalo NAS boxes and got it to work, but the performance was crappy. So I got more serious about FreeNAS, and when I discovered that it had NFS services I was hooked.
Where I really fell in love is when I discovered Plex, Sickrage (better UI than Sickbeard), and SABnzbd. With those plug-ins / services I was able to provide a media streaming solution that my wife and kids can navigate, with a UI very close to Netflix. Up to that point my media streaming setup was very rudimentary: you had to drill down a folder structure to find your movie. So yeah, Plex is a MUCH better solution than what I had at the time.
So with FreeNAS running on a beefy ESXi server, everything is good, right? So why am I going physical? Well, I'm one of "those guys" called "the statistics," as
@cyberjock puts it: I have lost all of my data due to corruption in my zpool. I didn't see it coming, because FreeNAS doesn't support SMART on virtual drives (I believe). After this happened
again (yes, I had another drive failure) I said enough is enough and made the decision to dedicate the time on this forum to educate myself on how ZFS works and on best practices regarding hardware, software, and maintenance, so I can minimize or even eliminate data loss. I must say this exercise has been an eye opener and an "Ah ha!" moment. Anyone who has gone through a data loss, or a catastrophe like losing everything to a corrupt zpool, knows it takes weeks or even longer to get everything running perfectly again.
So in short, the "why" is to get off the virtual platform and run FreeNAS on the highly recommended physical server platform: more stability, better redundancy (no shared virtual drives), and the ability to take advantage of the various system monitoring tools FreeNAS has to offer, which weren't available within a virtual environment.
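One of the main things going physical gets back is per-disk SMART data. As a rough sketch of the kind of checks FreeNAS's SMART service runs under the hood (the pool/device names like /dev/ada0 are FreeBSD-style assumptions for illustration, not my actual layout):

```shell
# Overall health self-assessment for one physical disk
# (device names are hypothetical; run as root)
smartctl -H /dev/ada0

# Dump the SMART attribute table -- reallocated sectors,
# pending sectors, etc. are the early-warning signs for a dying drive
smartctl -A /dev/ada0

# Kick off a short offline self-test (results show up in smartctl -l selftest)
smartctl -t short /dev/ada0
```

None of this works against a virtual disk presented by ESXi, which is exactly why my failing drives went unnoticed.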
My hardware choices and why:
As mentioned, at this moment my primary use for FreeNAS is video streaming and PVR (Sickrage > SABnzbd). My objective was to build a server that consumed low power yet was beefy enough to handle Plex and the other associated plug-ins' needs. What inspired me to go down this path was an article written by Brian Moses, found here ->
Link to article. This build sounded interesting, but my concern was: will an Atom processor really be able to handle video streaming at all? It turns out it will, based on discussion threads I've read in this forum and elsewhere. While researching the hardware requirements on Plex's site to get a better understanding of transcoding, I learned that little if any transcoding (which is process intensive) happens when streaming on your local network segment. For my use of Plex, transcoding would only be required when streaming remotely, which I will do from time to time (long car rides with the kids: PLEX..PLEX..PLEX), and the A1SAi-2750F can handle two simultaneous streams just fine with some CPU left over.
Performance benchmark report
I was down the path of the ASRock C2550D4I (4 cores), but then I discovered there was also a C2750D4I (8 cores), so slight change in direction; the thought was "go big or go home" and it was settled right at that moment... DONE. I felt really comfortable at the time about the ASRock because it's the same system board found in the FreeNAS Mini Pro. However, after doing more research on this forum I discovered that the ASRock board has a Marvell controller, which was reported to have performance issues (buzz kill). I didn't feel comfortable with that limitation, so it was back to the drawing board. My research continued, which led me to the Supermicro A1SAi-2750F. It had great reviews, and I also discovered that Supermicro is more established and known for server-grade system boards. Now, I did drift off and consider the Supermicro X10SL7-F + Xeon E3-1230v3 for extra $$, but I had to reel myself in and remember what this server will be used for and what my needs are. When I upgrade my ESXi server I'll put the beef in that sandwich, and it will be a Supermicro Xeon something.
RAIDZ1, 2 or not to RAIDZ???
So what are the plans for drive redundancy???? Actually, I'm still pondering that. As you see below in my hardware, I only ordered (2) 6TB WD Reds. I do have existing drives (3TB and 4TB) to use if needed, but I was thinking about doing a pool of striped mirror vdevs (RAID 10-style) as explained in this article (
ZFS: You should use mirror vdevs, not RAIDZ). This is a very interesting and compelling method, but I'm still researching its reliability, because I haven't run across anyone doing this on the forum, nor have any expert moderators listed it as a drive configuration option. I will be posting this question on the forum soon, asking the smart people what they think and having them weigh in on the subject. So... drive configuration is TBD at this point. Since I plan on a week's worth of burn-in time, I should / hope to figure it out by then. ;-)
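For reference, the mirror-vdev layout that article advocates can be sketched at the command line like this (the pool name "tank" and the device names are hypothetical, and on FreeNAS you'd do this through the GUI rather than the shell):

```shell
# Create a pool from one mirrored pair (a single mirror vdev)
zpool create tank mirror /dev/ada0 /dev/ada1

# Grow the pool later by adding a second mirrored pair;
# ZFS stripes data across both mirror vdevs (RAID 10-style)
zpool add tank mirror /dev/ada2 /dev/ada3

# Verify the layout: two mirror vdevs, each surviving one disk failure
zpool status tank
</#>
```

The appeal over RAIDZ is exactly what the article describes: you can expand two disks at a time, and resilvering a mirror only reads from the surviving half of that one pair instead of hammering every disk in the pool.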
My server build
Environment Type: Home Use
Purpose: Media Server / PVR
Hardware:
Case -
Fractal Design Node 304 FD-CA-NODE-304-BL
System Board -
A1SAi-2750F
RAM - 2x 8GB (16GB total)
Kingston 8GB 204-Pin DDR3 SO-DIMM ECC Unbuffered DDR3 1600 (PC3 12800) Server Memory Model KVR16LSE11/8KF (Server grade memory)
Power Supply -
CORSAIR CSM Series CS450M 450W (Forum recommends Gold rated or higher)
HDD -
WD Red WD30EFRX 3TB (When in doubt "Go Red")
CPU Fan -
Noctua NF-A6x25 PWM (This system board has passive cooling, but I felt better adding a fan since my case is Mini-ITX and airflow might be a little restrictive, even though this case got great reviews on airflow. I'm going to rig this on top of the existing CPU cooling fins; Supermicro sells an optional fan and mount if you want to spend about 2.5x what this fan costs)