BUILD X11SSH-F vs X10SDV-8C-TLN4F


Chuck Remes

Contributor
Joined
Jul 12, 2016
Messages
173
I have two builds that I am considering for a new FreeNAS box. Before I list the parts, here are my intended uses.

1. NFS server to 2-5 clients
2. SMB server to 2+ clients
3. Plex media server (Docker), 1080p transcoded streams to 3-5 clients
4. CrashPlan (Docker) to back up all shares
5. Build & CI server for my Rubinius projects
6. Potential to spin up various Linux images to test things for #5. These would be short-lived.

And now, the builds themselves.

Option 1
http://pcpartpicker.com/user/cremes/saved/X6nTWZ
Xeon E3-1230 v5, 2x 10GbE, 2x 1GbE, 32GB RAM, 7x 5TB Western Digital Red in RAIDZ2
Will max out at 64GB RAM

Option 2
http://pcpartpicker.com/list/hhnrvV
Xeon D-1541, 2x 10GbE, 2x 1GbE, 32GB RAM, 7x 5TB Western Digital Red in RAIDZ2
Will max out at 128GB RAM

I realize the Xeon E3-1230 v5 has 4 cores / 8 threads versus the Xeon D-1541's 8 cores / 16 threads. So the second option has more cores at a lower clock rate.

They are both within a few hundred dollars of each other. Power usage will be around 55 watts less overall with option 2 versus option 1... not huge, but potentially useful.

Any comments or feedback from folks who have built Xeon D-based FreeNAS systems would be great... the forums aren't exactly chock-full of stories. Should I add a few extra 140mm fans to either build? Am I missing anything important?
 

Sakuru

Guru
Joined
Nov 20, 2015
Messages
527
It sounds like your pool will be pretty busy. I would recommend mirrors instead of RAIDZ2. Either get 1 more drive or use 1 as a hot spare. If #1 is VMware over NFS then you will probably want to look into a SLOG.
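
At the zpool level, the mirror layout I'm suggesting looks something like this (a minimal sketch with example device names; on FreeNAS you'd build the pool through the volume manager GUI rather than at the shell):
Code:
# 3 mirrored vdevs from 6 drives, the 7th as a hot spare,
# plus an SSD SLOG for sync NFS writes (all device names are examples)
zpool create tank \
    mirror da0 da1 \
    mirror da2 da3 \
    mirror da4 da5 \
    spare da6 \
    log da7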

It looks like the X10SDV-8C-TLN4F only has 6 SATA ports. You list a 9200-8E on that parts list, but that's for external SAS connections. You would need something like a 9211-8i or one of its knockoffs like the IBM M1015 or Dell PERC H200/H310. You will also need 2 forward breakout cables.

Do #3 and #4 run on a separate server? FreeNAS 9.10 does not have Docker support; that comes in FreeNAS 10, which is currently scheduled for release in February 2017.

Should I add a few extra 140mm fans to either build?
Yes, I added an extra fan to the front of my R5 to make sure all drives get airflow.

Am I missing anything important?
Look through the links in my signature, there are tons of great guides out there. I have them sorted from most to least important.
 

Chuck Remes

Contributor
Joined
Jul 12, 2016
Messages
173
It looks like the X10SDV-8C-TLN4F only has 6 SATA ports. You list a 9200-8E on that parts list, but that's for external SAS connections. You would need something like a 9211-8i or one of its knockoffs like the IBM M1015 or Dell PERC H200/H310. You will also need 2 forward breakout cables.
Ah, good catch. I was unsure of the card choice, so this clarifies nicely.

A quick search on Amazon gives me this card which I have added to my parts list: https://www.amazon.com/dp/B002RL8I7M/?tag=ozlp-20

Breakout cables look to cost around $20 each, so a pair made option 2 a little more expensive, but not too crazy. Thanks for the input!
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
The day you want to add another batch of drives of similar size, assuming a similar workload, I'd be a far happier camper not being maxed out at 64GB of RAM.
It's true it will run. However, that is also true of a Ferrari on wooden spoke wheels.
 

Chuck Remes

Contributor
Joined
Jul 12, 2016
Messages
173
BTW, I suppose I should note that I have been (mostly) lurking these forums for 2 months. I've read the stickies on hardware, PSU sizing, Cyberjock's ZFS primer, etc. I haven't touched FreeBSD since version 4, but as I understand it the GUI is primarily how I will interact with FreeNAS, so being an up-to-date FreeBSD guru is not a necessity.

That said, I do plan on running the FreeNAS 10 beta + nightlies so I can use Docker. Yes, I understand this is not recommended for a production system. I plan to use the well-exercised pieces of the system (ZFS, Samba, built-in Docker modules like Plex) and have excellent backups in case of emergency. :)
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
The biggest issue with the Xeon D is its poor single threaded performance which will bottleneck CIFS.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
The biggest issue with the Xeon D is its poor single threaded performance which will bottleneck CIFS.
It's surprisingly good, actually. With two clients, it'll probably saturate a 10GbE link.
 

Chuck Remes

Contributor
Joined
Jul 12, 2016
Messages
173
Many thanks to the participants in this forum. I bought and built a system not too dissimilar to Option 2 in my original post above. I thought I would give a quick post-mortem, cover lessons learned, and list all of the resources that were particularly useful to me in my project.

FINAL SYSTEM
X10SDV-8C-TLN4F
128GB RAM
LSI 9211-8i
32GB SATA DOM (boot)
1x 120GB Samsung SSD (for VMs and containers)
7x 6TB Seagate IronWolf NAS in RAIDZ2
Seasonic 650-X (gives me headroom to add 4 or more internal drives for expansion)
3x 140mm fans
1x Fractal Design R5
Cyberpower CP1000PFCLCD UPS, 1000VA / 600W pure sine wave (plenty to support my system plus router, switch, etc.)
FreeNAS 10 nightly (locked to FreeNAS-10-MASTER-201702010216, which has good enough support for VMs, Docker, & SMB for my purposes)

Lessons Learned
Oh boy, there were a few.

* Originally I bought the wrong LSI card. I got a 9200-8e for $65 and thought I'd won the lottery. It turns out the "8e" portion meant it had SFF-8088 ports, which are for EXTERNAL connections. This would have been a great card if I had a separate chassis to house my drives: I could have used it to connect to the second box and fanned out to all of the drives there. However, what I really needed (and thought I was getting) was an LSI 9211-8i. I could have rigged up a system with the 9200 card to convert from SFF-8088 back to SFF-8087 and fed the cables back into the case via an open PCI slot, but after what I spent on this system that seemed pretty foolish, so I got the right card.

* My mistake in #1 came from not understanding the cabling. For the 9211-8i I bought a pair of SFF-8087 forward breakout cables to connect the card to the drives. Each SFF-8087 port on the card handles 4 drives, so the two ports cover up to 8 drives; a quick post-boot sanity check is sketched below.
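
For anyone else double-checking their cabling, this is roughly what to run after first boot (a sketch; your da numbering will vary):
Code:
# the 9211-8i attaches via the mps(4) driver on FreeBSD
dmesg | grep -i mps
# every drive cabled to the HBA should show up as a daN device
camcontrol devlist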

* Anti-static straps for the win. I'd owned one for years, but it primarily lived in my "misc" parts box. Having one available, plus a good set of small Phillips-head screwdrivers in my kit, was convenient during the build. I'm mostly a software guy, so building my own server was a rare treat (this was my first!).

* Having a spare USB keyboard and an old 12" LCD VGA monitor was useful for the initial power-on "smoke" test. After that, IPMI was all I needed.

* IPMI support on OS X is crap. For one, you really need to use Chrome; the HTML5 interface doesn't work in Firefox or Safari. Two, Java support on OS X is junk. I had a Windows 10 VM running, which allowed me to easily share/export ISOs for mounting via IPMI, and Chrome support on Windows is quite good. I could have made this all work on the OS X side, but Windows was just way, way easier. BTW, my board initially shipped with IPMI firmware 3.24. You should upgrade immediately to 3.46 (or higher) to get HTML5 support, which is SUPER convenient.

* Rufus is a great tool (Windows) for creating bootable FreeDOS USB sticks. Used for running memtest, flashing PCI card firmware, etc.

* Testing FreeNAS out as a VM under VMware was a great way to experiment and lay out a sample system. I recommend doing this (or using a similar virtual environment) if you have access to one.

* Reading (or lurking) these forums for a few weeks or months in advance is very beneficial. I last played with FreeBSD 4 way back when and was already familiar with the... ahem... "prickly nature" of a portion of its community. Asking dumb questions that a 5-minute forum search or a quick Google would answer is not a good way to ingratiate yourself. Do some work ahead of time.

* Burn-in tests are cheap insurance. I ran memtest86+ 5.01 on the system for a week, doing a "round robin" between all 16 threads. No failures. Ran cpustress overnight; no failures. Ran @jgreco's solnet script to stress the disks and used Uncle Fester's newbie guide to figure out the SMART stuff (the basic smartctl commands are sketched below). I didn't get any failures and have more confidence in the system. I didn't do a 3-6 week burn-in, but this is a home system for a small business, so I'm willing to take on a little risk.
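
For reference, the SMART side of the burn-in boils down to a couple of smartctl invocations per drive (da0 is just an example; repeat for each disk):
Code:
# kick off a long (extended) self-test; it runs on the drive in the background
smartctl -t long /dev/da0
# check the self-test log once it has had time to finish
smartctl -l selftest /dev/da0
# review the attribute table for reallocated or pending sectors
smartctl -A /dev/da0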

* I'll be running a mixed environment with OS X, Windows, Linux, and FreeBSD. All shares will be via Samba: no NFS, no AFP. See the resource link below for figuring out SMB permissions. NFS is a tangled mess for keeping permissions straight (even with v3 and v4), Apple is deprecating AFP support, and SMB is well supported across all modern OSes, so just use it directly.

* FreeNAS nightlies have their ups & downs. I consider myself a power user so I'm comfortable mucking around with beta software, filing bugs, making small software patches, etc. Find a good nightly and rename it (fluffybunny!) so you can roll back to it if a future nightly borks things. Personally, I have a fairly solid nightly that I'm running and have no plans to upgrade until release.

* SMART output via smartctl on FreeBSD has weird reporting for Seagate drives. The RAW_VALUE column for Seagates will show a scarily high number, so you'll immediately think the drive is failing. It turns out it's a 48-bit number where the top 4 nibbles are the error count and the remainder is an always-increasing count of the corresponding operations. I recommend running the following command to print that value as hex and make the number easier to parse.
Code:
smartctl -A --vendorattribute=N,hex48 /dev/da0
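
To make the split concrete (numbers invented for illustration): a decimal RAW_VALUE of 123,456,789 prints as 0x0000075bcd15 in hex48 form. The top 16 bits (0x0000) are the actual error count, zero here, and the low 32 bits (0x075bcd15) are just the ever-growing operation counter, which is why the decimal form looks so scary.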


RESOURCES
Here are links to what I thought were useful threads or resources, along with a few notes. Many of these are stickies in the "Help & Support/Hardware" forum.

* ZFS sizing calculator
https://jsfiddle.net/Biduleohm/paq5u7z5/1/embedded/result/
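
As a quick cross-check of the calculator for a build like mine: 7x 6TB in RAIDZ2 leaves 5 drives' worth of data space, so roughly 5 x 6 TB = 30 TB raw, which is about 27.3 TiB before ZFS metadata overhead and the commonly recommended ~80% fill limit.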

* X10SDV motherboards (no need to read all 68 pages... read the last 5 or so)
https://forums.servethehome.com/index.php?threads/intel-xeon-d-1500-series-discussion.5036/page-68

* General resource on flashing LSI cards
Instead of using the "megarec" utility, which isn't always easily available, use "sas2flsh.exe -o -e 6" to erase the flash ("-e 6" is the magic part); the full sequence is sketched below.
https://forums.freenas.org/index.php?threads/crossflash-dell-h200e-to-lsi-9200-8e.41307/
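
Pieced together from that thread, the crossflash sequence from a FreeDOS boot stick looks roughly like this (the firmware and BIOS file names come from the LSI 9211-8i download package and vary by version, so treat them as placeholders):
Code:
REM erase the flash completely ("-e 6" wipes both firmware and BIOS)
sas2flsh.exe -o -e 6
REM write the IT-mode firmware and, optionally, the boot ROM
sas2flsh.exe -o -f 2118it.bin -b mptsas2.rom
REM if the erase wiped the SAS address, reprogram it from the card's sticker
sas2flsh.exe -o -sasadd 500605bxxxxxxxxx
REM confirm the card reports the new firmware
sas2flsh.exe -listall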

* Identify all disks and match the "/dev/da*" entry to a physical drive during the initial build
I also recorded physical position as a comment on each disk description using the GUI.
https://forums.freenas.org/index.php?threads/identify-physical-disks-and-rename-devices.20571/
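
At the shell, the matching boils down to the following (da0 is just an example; repeat the serial lookup for each drive):
Code:
# map gptid/... labels back to daN device names
glabel status
# list everything CAM sees on the HBA
camcontrol devlist
# print the serial number for one drive; repeat for da1, da2, ...
smartctl -i /dev/da0 | grep -i serial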

* Disk burn-in
https://forums.freenas.org/index.php?threads/building-burn-in-and-testing-your-freenas-system.17750/

* Useful videos for figuring out SMB/CIFS permissions
https://forums.freenas.org/index.php?resources/freenas-and-samba-smb-permissions-video.8/

* Improving SMB performance for directories with large numbers of files (still pertinent for FreeNAS 10 too)
See item #4 in the first post. Note that FreeNAS 10 disables a few of those attributes, but two of them (map_archive and map_readonly) are still enabled. Turn them off via the CLI using:
Code:
/share/smb <share name> set map_readonly=false
/share/smb <share name> set map_archive=false

https://forums.freenas.org/index.php?threads/cifs-directory-browsing-slow-try-this.27751/

* Uncle Fester's newbie FreeNAS guide (still relevant even for FreeNAS 10)
https://wiki.freenas.org/index.php/Uncle_Fester's_Guide
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Identify all disks and match the "/dev/da*" entry to a physical drive during the initial build
I also recorded physical position as a comment on each disk description using the GUI.

Be careful: the only two identifiers you can match that will not change after a reboot are the GPTID and the drive's serial number.

Also, I've heard the comment field is tied not to the S/N but to the device name, which would render it a bit useless, but I'm not 100% sure on this one.
 

Chuck Remes

Contributor
Joined
Jul 12, 2016
Messages
173
Be careful: the only two identifiers you can match that will not change after a reboot are the GPTID and the drive's serial number.
Good point. Trust but verify. Seagate prints the serial number on the front edge of the drive so I would double-check it anyway. I'll let you know how it goes after my first drive fails. :)
 